XLByte* usage

Started by Andres Freund, about 13 years ago; 16 messages
#1 Andres Freund
andres@2ndquadrant.com

Hi,

Now that XLogRecPtrs are plain 64-bit integers, what are we supposed to use
in code comparing and manipulating them? There are already plenty of
examples of both, but I would like new code to go in one direction, not two...

I personally find direct comparisons/manipulations far easier to read
than the XLByte* equivalents.
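
For illustration, a minimal sketch of the two styles side by side, using
the LSN-vs-page-LSN test that shows up all over the redo routines:

    /* macro style */
    if (XLByteLE(lsn, PageGetLSN(page)))
        return;

    /* direct comparison */
    if (lsn <= PageGetLSN(page))
        return;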

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#2 Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Andres Freund (#1)
Re: XLByte* usage

On 16.12.2012 16:16, Andres Freund wrote:

Now that XLogRecPtrs are plain 64-bit integers, what are we supposed to use
in code comparing and manipulating them? There are already plenty of
examples of both, but I would like new code to go in one direction, not two...

I personally find direct comparisons/manipulations far easier to read
than the XLByte* equivalents.

I've still used XLByte* macros, but I agree that plain < = > are easier
to read. +1 for using < = > in new code.

- Heikki


#3 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Heikki Linnakangas (#2)
Re: XLByte* usage

On Mon, Dec 17, 2012 at 2:00 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:

On 16.12.2012 16:16, Andres Freund wrote:

Now that XLogRecPtrs are plain 64-bit integers, what are we supposed to use
in code comparing and manipulating them? There are already plenty of
examples of both, but I would like new code to go in one direction, not two...

I personally find direct comparisons/manipulations far easier to read
than the XLByte* equivalents.

I've still used XLByte* macros, but I agree that plain < = > are easier to
read. +1 for using < = > in new code.

Do we ever see us changing this from 64-bit integers to something else?
If so, a macro would be much better.
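
(For what it's worth, the insulation a macro buys is that something like

    #define XLByteLT(a, b)	((a) < (b))

could be redefined in one place if the representation changed again,
whereas open-coded comparisons would all have to be touched.)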

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


#4 Heikki Linnakangas
hlinnakangas@vmware.com
In reply to: Pavan Deolasee (#3)
Re: XLByte* usage

On 17.12.2012 11:04, Pavan Deolasee wrote:

On Mon, Dec 17, 2012 at 2:00 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:

On 16.12.2012 16:16, Andres Freund wrote:

Now that XLogRecPtrs are plain 64-bit integers, what are we supposed to use
in code comparing and manipulating them? There are already plenty of
examples of both, but I would like new code to go in one direction, not two...

I personally find direct comparisons/manipulations far easier to read
than the XLByte* equivalents.

I've still used XLByte* macros, but I agree that plain < = > are easier to
read. +1 for using < = > in new code.

Do we ever see us changing this from 64-bit integers to something else?
If so, a macro would be much better.

I don't see us changing it again any time soon. Maybe in 20 years' time
people will start overflowing 2^64 bytes of WAL generated in the
lifetime of a database, but I don't think we need to start preparing for
that yet.

- Heikki


#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#4)
Re: XLByte* usage

Heikki Linnakangas <hlinnakangas@vmware.com> writes:

On 17.12.2012 11:04, Pavan Deolasee wrote:

On Mon, Dec 17, 2012 at 2:00 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:

I've still used XLByte* macros, but I agree that plain < = > are easier to
read. +1 for using < = > in new code.

Do we ever see us changing this from 64-bit integers to something else?
If so, a macro would be much better.

I don't see us changing it again any time soon. Maybe in 20 years' time
people will start overflowing 2^64 bytes of WAL generated in the
lifetime of a database, but I don't think we need to start preparing for
that yet.

Note that to get to 2^64 in twenty years, an installation would have had
to have generated an average of 29GB of WAL per second, 24x7 for the
entire twenty years, with never a dump-and-reload. We're still a few
orders of magnitude away from needing to think about this.
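
(Checking the arithmetic: 2^64 bytes is about 1.8e19, twenty years is
about 6.3e8 seconds, and 1.8e19 / 6.3e8 comes to roughly 2.9e10 bytes/s,
i.e. the 29GB/s figure above.)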

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLogRecPtrs to "invalid".

regards, tom lane


#6 Andres Freund
andres@2ndquadrant.com
In reply to: Tom Lane (#5)
Re: XLByte* usage

On 2012-12-17 12:47:41 -0500, Tom Lane wrote:

Heikki Linnakangas <hlinnakangas@vmware.com> writes:

On 17.12.2012 11:04, Pavan Deolasee wrote:

On Mon, Dec 17, 2012 at 2:00 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:

I've still used XLByte* macros, but I agree that plain < = > are easier to
read. +1 for using < = > in new code.

Do we ever see us changing this from 64-bit integers to something else?
If so, a macro would be much better.

I don't see us changing it again any time soon. Maybe in 20 years' time
people will start overflowing 2^64 bytes of WAL generated in the
lifetime of a database, but I don't think we need to start preparing for
that yet.

Note that to get to 2^64 in twenty years, an installation would have had
to have generated an average of 29GB of WAL per second, 24x7 for the
entire twenty years, with never a dump-and-reload. We're still a few
orders of magnitude away from needing to think about this.

Agreed. And it seems achieving such rates would require architectural
changes that would make manually changing all those comparisons the
tiniest problem.

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLogRecPtrs to "invalid".

I thought about that and had guessed you would be against it because it
would cause useless divergence of the branches? Otherwise I am all for
it.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#7 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tom Lane (#5)
Re: XLByte* usage

On Mon, Dec 17, 2012 at 11:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Heikki Linnakangas <hlinnakangas@vmware.com> writes:

On 17.12.2012 11:04, Pavan Deolasee wrote:

On Mon, Dec 17, 2012 at 2:00 PM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:

I've still used XLByte* macros, but I agree that plain < = > are easier to
read. +1 for using < = > in new code.

Do we ever see us changing this from 64-bit integers to something else?
If so, a macro would be much better.

I don't see us changing it again any time soon. Maybe in 20 years' time
people will start overflowing 2^64 bytes of WAL generated in the
lifetime of a database, but I don't think we need to start preparing for
that yet.

Note that to get to 2^64 in twenty years, an installation would have had
to have generated an average of 29GB of WAL per second, 24x7 for the
entire twenty years, with never a dump-and-reload. We're still a few
orders of magnitude away from needing to think about this.

I did not really mean increasing it beyond 64-bit. OTOH I wondered
if we would ever want to steal a few bits from the LSN field, given
the numbers you just put out. But it was more of a question than an
objection.

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


#8 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#7)
Re: XLByte* usage

On Mon, Dec 17, 2012 at 11:26 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I did not really mean increasing it beyond 64-bit. OTOH I wondered
if we would ever want to steal a few bits from the LSN field, given
the numbers you just put out. But it was more of a question than an
objection.

BTW, now that XLogRecPtr is uint64, can't we change the pd_lsn field
to use the same type? At least the following comment in bufpage.h
looks outdated, or at the minimum needs some explanation as to why the
LSN in the page header needs to be split into two 32-bit values.

/* for historical reasons, the LSN is stored as two 32-bit values. */
typedef struct
{
	uint32		xlogid;			/* high bits */
	uint32		xrecoff;		/* low bits */
} PageXLogRecPtr;

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


#9 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#6)
Re: XLByte* usage

Andres Freund <andres@2ndquadrant.com> writes:

On 2012-12-17 12:47:41 -0500, Tom Lane wrote:

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLogRecPtrs to "invalid".

I thought about that and had guessed you would be against it because it
would cause useless divergence of the branches? Otherwise I am all for
it.

That's the only argument I can see against doing it --- but Heikki's
patch was already pretty invasive in the same areas this would touch,
so I'm thinking this won't make back-patching much worse. The
notational simplification seems worth it.

regards, tom lane


#10 Andres Freund
andres@2ndquadrant.com
In reply to: Pavan Deolasee (#8)
Re: XLByte* usage

On 2012-12-17 23:45:51 +0530, Pavan Deolasee wrote:

On Mon, Dec 17, 2012 at 11:26 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I did not really mean increasing it beyond 64-bit. OTOH I wondered
if we would ever want to steal a few bits from the LSN field, given
the numbers you just put out. But it was more of a question than an
objection.

BTW, now that XLogRecPtr is uint64, can't we change the pd_lsn field
to use the same type? At least the following comment in bufpage.h
looks outdated, or at the minimum needs some explanation as to why the
LSN in the page header needs to be split into two 32-bit values.

/* for historical reasons, the LSN is stored as two 32-bit values. */
typedef struct
{
	uint32		xlogid;			/* high bits */
	uint32		xrecoff;		/* low bits */
} PageXLogRecPtr;

pg_upgrade'ability. The individual bytes aren't necessarily laid out the
same with two such uint32s as with one uint64.
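
To sketch it with a made-up value: for an LSN of 0x0000000100000002, the
split form stores xlogid = 0x00000001 followed by xrecoff = 0x00000002,
which a little-endian machine lays out on disk as
01 00 00 00 02 00 00 00. The same value as a single uint64 is stored as
02 00 00 00 01 00 00 00, so reinterpreting existing pages would swap the
halves. On big-endian machines the two layouts happen to coincide.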

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Pavan Deolasee (#8)
Re: XLByte* usage

Pavan Deolasee <pavan.deolasee@gmail.com> writes:

BTW, now that XLogRecPtr is uint64, can't we change the pd_lsn field
to use the same type?

No, at least not without breaking on-disk compatibility on little-endian
machines.

regards, tom lane


#12 Andres Freund
andres@2ndquadrant.com
In reply to: Tom Lane (#9)
3 attachment(s)
Re: XLByte* usage

On 2012-12-17 13:16:47 -0500, Tom Lane wrote:

Andres Freund <andres@2ndquadrant.com> writes:

On 2012-12-17 12:47:41 -0500, Tom Lane wrote:

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLogRecPtrs to "invalid".

I thought about that and had guessed you would be against it because it
would cause useless divergence of the branches? Otherwise I am all for
it.

That's the only argument I can see against doing it --- but Heikki's
patch was already pretty invasive in the same areas this would touch,
so I'm thinking this won't make back-patching much worse.

I thought about this for a while and decided it's worth trying to do
this before the next review round of xlogreader, even if it causes some
breakage there. Doing it this way round seems less likely to introduce
bugs than if somebody else were to go round and do this after the next
xlogreader review round but before committing it.

Attached are:
1) removal of MemSet(&ptr, 0, sizeof(XLogRecPtr))
2) removal of XLByte(EQ|LT|LE|Advance)
3) removal of the dead NextLogPage I noticed along the way

In 2) one unfortunately has to make a decision about which way to
simplify negated XLByte(LT|LE) expressions. I tried to make that choice
very carefully and went over every change several times after that, so I
hope there aren't any bad changes, but more eyeballs are needed.
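
To make the choice concrete: a negated test like
!XLByteLE(lsn, PageGetLSN(page)) can be rewritten either as
!(lsn <= PageGetLSN(page)) or as lsn > PageGetLSN(page); the patch
mostly uses the latter, and those flipped operators are exactly the
places that need the extra eyeballs.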

The notational simplification seems worth it.

After doing this: Definitely. Imo some of the conditions are much
easier to read now. Perhaps I am just bad at reading negations though...

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Use-InvalidXLogRecPtr-instead-of-MemSet-ptr-0-sizeof.patch (text/x-patch; charset=us-ascii)
From 4f5d486a830f8522f37200f739d928de0ed97051 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 17 Dec 2012 21:04:04 +0100
Subject: [PATCH 1/3] Use = InvalidXLogRecPtr instead of MemSet(&ptr, 0,
 sizeof(XLogRecPtr)) consistently

---
 src/backend/access/transam/xlog.c   | 8 ++++----
 src/backend/replication/walsender.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 2deb7e5..1354aa6 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -5976,8 +5976,8 @@ StartupXLOG(void)
 
 					LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
 
-					MemSet(&ControlFile->backupStartPoint, 0, sizeof(XLogRecPtr));
-					MemSet(&ControlFile->backupEndPoint, 0, sizeof(XLogRecPtr));
+					ControlFile->backupStartPoint = InvalidXLogRecPtr;
+					ControlFile->backupEndPoint = InvalidXLogRecPtr;
 					ControlFile->backupEndRequired = false;
 					UpdateControlFile();
 
@@ -7336,7 +7336,7 @@ CreateCheckPoint(int flags)
 	ControlFile->checkPointCopy = checkPoint;
 	ControlFile->time = (pg_time_t) time(NULL);
 	/* crash recovery should always recover to the end of WAL */
-	MemSet(&ControlFile->minRecoveryPoint, 0, sizeof(XLogRecPtr));
+	ControlFile->minRecoveryPoint = InvalidXLogRecPtr;
 	ControlFile->minRecoveryPointTLI = 0;
 	UpdateControlFile();
 	LWLockRelease(ControlFileLock);
@@ -8148,7 +8148,7 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
 				ControlFile->minRecoveryPoint = lsn;
 				ControlFile->minRecoveryPointTLI = ThisTimeLineID;
 			}
-			MemSet(&ControlFile->backupStartPoint, 0, sizeof(XLogRecPtr));
+			ControlFile->backupStartPoint = InvalidXLogRecPtr;
 			ControlFile->backupEndRequired = false;
 			UpdateControlFile();
 
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index aec57f5..b450b14 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1115,7 +1115,7 @@ InitWalSenderSlot(void)
 			 * Found a free slot. Reserve it for us.
 			 */
 			walsnd->pid = MyProcPid;
-			MemSet(&walsnd->sentPtr, 0, sizeof(XLogRecPtr));
+			walsnd->sentPtr = InvalidXLogRecPtr;
 			walsnd->state = WALSNDSTATE_STARTUP;
 			SpinLockRelease(&walsnd->mutex);
 			/* don't need the lock anymore */
-- 
1.7.12.289.g0ce9864.dirty

0002-Remove-XLByte-LT-LE-EQ-and-XLByteAdvance-macros-they.patch (text/x-patch; charset=us-ascii)
From b8203865cdb9ffc0fc981174acc2b37b69bd3727 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 17 Dec 2012 23:28:00 +0100
Subject: [PATCH 2/3] Remove XLByte(LT|LE|EQ) and XLByteAdvance macros, they
 aren't required anymore

---
 src/backend/access/gin/ginxlog.c           |  20 ++--
 src/backend/access/gist/gist.c             |  14 +--
 src/backend/access/gist/gistget.c          |   2 +-
 src/backend/access/gist/gistvacuum.c       |   2 +-
 src/backend/access/gist/gistxlog.c         |   4 +-
 src/backend/access/heap/heapam.c           |  22 ++---
 src/backend/access/nbtree/nbtxlog.c        |  16 ++--
 src/backend/access/spgist/spgxlog.c        |  36 ++++----
 src/backend/access/transam/clog.c          |   2 +-
 src/backend/access/transam/slru.c          |   2 +-
 src/backend/access/transam/timeline.c      |   4 +-
 src/backend/access/transam/twophase.c      |   2 +-
 src/backend/access/transam/xlog.c          | 144 ++++++++++++++---------------
 src/backend/commands/sequence.c            |   2 +-
 src/backend/replication/syncrep.c          |  12 +--
 src/backend/replication/walreceiver.c      |  12 +--
 src/backend/replication/walreceiverfuncs.c |   2 +-
 src/backend/replication/walsender.c        |  28 +++---
 src/bin/pg_basebackup/receivelog.c         |   2 +-
 src/include/access/xlogdefs.h              |  14 ---
 20 files changed, 164 insertions(+), 178 deletions(-)

diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index 0ff66c8..fc1f0a5 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -177,7 +177,7 @@ ginRedoInsert(XLogRecPtr lsn, XLogRecord *record)
 		return;					/* page was deleted, nothing to do */
 	page = (Page) BufferGetPage(buffer);
 
-	if (!XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn > PageGetLSN(page))
 	{
 		if (data->isData)
 		{
@@ -393,7 +393,7 @@ ginRedoVacuumPage(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (!XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn > PageGetLSN(page))
 	{
 		if (GinPageIsData(page))
 		{
@@ -448,7 +448,7 @@ ginRedoDeletePage(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(dbuffer))
 		{
 			page = BufferGetPage(dbuffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				Assert(GinPageIsData(page));
 				GinPageGetOpaque(page)->flags = GIN_DELETED;
@@ -467,7 +467,7 @@ ginRedoDeletePage(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(pbuffer))
 		{
 			page = BufferGetPage(pbuffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				Assert(GinPageIsData(page));
 				Assert(!GinPageIsLeaf(page));
@@ -487,7 +487,7 @@ ginRedoDeletePage(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(lbuffer))
 		{
 			page = BufferGetPage(lbuffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				Assert(GinPageIsData(page));
 				GinPageGetOpaque(page)->rightlink = data->rightLink;
@@ -518,7 +518,7 @@ ginRedoUpdateMetapage(XLogRecPtr lsn, XLogRecord *record)
 		return;					/* assume index was deleted, nothing to do */
 	metapage = BufferGetPage(metabuffer);
 
-	if (!XLByteLE(lsn, PageGetLSN(metapage)))
+	if (lsn > PageGetLSN(metapage))
 	{
 		memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));
 		PageSetLSN(metapage, lsn);
@@ -540,7 +540,7 @@ ginRedoUpdateMetapage(XLogRecPtr lsn, XLogRecord *record)
 			{
 				Page		page = BufferGetPage(buffer);
 
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					OffsetNumber l,
 								off = (PageIsEmpty(page)) ? FirstOffsetNumber :
@@ -590,7 +590,7 @@ ginRedoUpdateMetapage(XLogRecPtr lsn, XLogRecord *record)
 			{
 				Page		page = BufferGetPage(buffer);
 
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					GinPageGetOpaque(page)->rightlink = data->newRightlink;
 
@@ -677,7 +677,7 @@ ginRedoDeleteListPages(XLogRecPtr lsn, XLogRecord *record)
 		return;					/* assume index was deleted, nothing to do */
 	metapage = BufferGetPage(metabuffer);
 
-	if (!XLByteLE(lsn, PageGetLSN(metapage)))
+	if (lsn > PageGetLSN(metapage))
 	{
 		memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));
 		PageSetLSN(metapage, lsn);
@@ -703,7 +703,7 @@ ginRedoDeleteListPages(XLogRecPtr lsn, XLogRecord *record)
 		{
 			Page		page = BufferGetPage(buffer);
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				GinPageGetOpaque(page)->flags = GIN_DELETED;
 
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 9c6625b..700e97a 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -561,8 +561,7 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
 		}
 
 		if (stack->blkno != GIST_ROOT_BLKNO &&
-			XLByteLT(stack->parent->lsn,
-					 GistPageGetOpaque(stack->page)->nsn))
+			stack->parent->lsn < GistPageGetOpaque(stack->page)->nsn)
 		{
 			/*
 			 * Concurrent split detected. There's no guarantee that the
@@ -620,7 +619,7 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
 					xlocked = true;
 					stack->page = (Page) BufferGetPage(stack->buffer);
 
-					if (!XLByteEQ(PageGetLSN(stack->page), stack->lsn))
+					if (PageGetLSN(stack->page) != stack->lsn)
 					{
 						/* the page was changed while we unlocked it, retry */
 						continue;
@@ -708,8 +707,8 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
 					 */
 				}
 				else if (GistFollowRight(stack->page) ||
-						 XLByteLT(stack->parent->lsn,
-								  GistPageGetOpaque(stack->page)->nsn))
+						 stack->parent->lsn <
+								  GistPageGetOpaque(stack->page)->nsn)
 				{
 					/*
 					 * The page was split while we momentarily unlocked the
@@ -794,7 +793,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
 		if (GistFollowRight(page))
 			elog(ERROR, "concurrent GiST page split was incomplete");
 
-		if (top->parent && XLByteLT(top->parent->lsn, GistPageGetOpaque(page)->nsn) &&
+		if (top->parent && top->parent->lsn < GistPageGetOpaque(page)->nsn &&
 			GistPageGetOpaque(page)->rightlink != InvalidBlockNumber /* sanity check */ )
 		{
 			/*
@@ -864,7 +863,8 @@ gistFindCorrectParent(Relation r, GISTInsertStack *child)
 	parent->page = (Page) BufferGetPage(parent->buffer);
 
 	/* here we don't need to distinguish between split and page update */
-	if (child->downlinkoffnum == InvalidOffsetNumber || !XLByteEQ(parent->lsn, PageGetLSN(parent->page)))
+	if (child->downlinkoffnum == InvalidOffsetNumber ||
+		parent->lsn != PageGetLSN(parent->page))
 	{
 		/* parent is changed, look child in right links until found */
 		OffsetNumber i,
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index 2253e7c..0e1fd80 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -263,7 +263,7 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
 	 */
 	if (!XLogRecPtrIsInvalid(pageItem->data.parentlsn) &&
 		(GistFollowRight(page) ||
-		 XLByteLT(pageItem->data.parentlsn, opaque->nsn)) &&
+		 pageItem->data.parentlsn < opaque->nsn) &&
 		opaque->rightlink != InvalidBlockNumber /* sanity check */ )
 	{
 		/* There was a page split, follow right link to add pages */
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index f2a7a87..3fbcc6f 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -114,7 +114,7 @@ pushStackIfSplited(Page page, GistBDItem *stack)
 	GISTPageOpaque opaque = GistPageGetOpaque(page);
 
 	if (stack->blkno != GIST_ROOT_BLKNO && !XLogRecPtrIsInvalid(stack->parentlsn) &&
-		(GistFollowRight(page) || XLByteLT(stack->parentlsn, opaque->nsn)) &&
+		(GistFollowRight(page) || stack->parentlsn < opaque->nsn) &&
 		opaque->rightlink != InvalidBlockNumber /* sanity check */ )
 	{
 		/* split page detected, install right link to the stack */
diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c
index f9c8fcb..f802c23 100644
--- a/src/backend/access/gist/gistxlog.c
+++ b/src/backend/access/gist/gistxlog.c
@@ -64,7 +64,7 @@ gistRedoClearFollowRight(XLogRecPtr lsn, XLogRecord *record, int block_index,
 	 * of this record, because the updated NSN is not included in the full
 	 * page image.
 	 */
-	if (!XLByteLT(lsn, PageGetLSN(page)))
+	if (lsn >= PageGetLSN(page))
 	{
 		GistPageGetOpaque(page)->nsn = lsn;
 		GistClearFollowRight(page);
@@ -119,7 +119,7 @@ gistRedoPageUpdateRecord(XLogRecPtr lsn, XLogRecord *record)
 	page = (Page) BufferGetPage(buffer);
 
 	/* nothing more to do if change already applied */
-	if (XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn <= PageGetLSN(page))
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 186fb87..ac8407b 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -4700,7 +4700,7 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record)
 	LockBufferForCleanup(buffer);
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn <= PageGetLSN(page))
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -4770,7 +4770,7 @@ heap_xlog_freeze(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn <= PageGetLSN(page))
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -4854,7 +4854,7 @@ heap_xlog_visible(XLogRecPtr lsn, XLogRecord *record)
 		 * XLOG record's LSN, we mustn't mark the page all-visible, because
 		 * the subsequent update won't be replayed to clear the flag.
 		 */
-		if (!XLByteLE(lsn, PageGetLSN(page)))
+		if (lsn > PageGetLSN(page))
 		{
 			PageSetAllVisible(page);
 			MarkBufferDirty(buffer);
@@ -4891,7 +4891,7 @@ heap_xlog_visible(XLogRecPtr lsn, XLogRecord *record)
 		 * we did for the heap page.  If this results in a dropped bit, no
 		 * real harm is done; and the next VACUUM will fix it.
 		 */
-		if (!XLByteLE(lsn, PageGetLSN(BufferGetPage(vmbuffer))))
+		if (lsn > PageGetLSN(BufferGetPage(vmbuffer)))
 			visibilitymap_set(reln, xlrec->block, lsn, vmbuffer,
 							  xlrec->cutoff_xid);
 
@@ -4977,7 +4977,7 @@ heap_xlog_delete(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))		/* changes are applied */
+	if (lsn <= PageGetLSN(page))		/* changes are applied */
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -5072,7 +5072,7 @@ heap_xlog_insert(XLogRecPtr lsn, XLogRecord *record)
 			return;
 		page = (Page) BufferGetPage(buffer);
 
-		if (XLByteLE(lsn, PageGetLSN(page)))	/* changes are applied */
+		if (lsn <= PageGetLSN(page))	/* changes are applied */
 		{
 			UnlockReleaseBuffer(buffer);
 			return;
@@ -5207,7 +5207,7 @@ heap_xlog_multi_insert(XLogRecPtr lsn, XLogRecord *record)
 			return;
 		page = (Page) BufferGetPage(buffer);
 
-		if (XLByteLE(lsn, PageGetLSN(page)))	/* changes are applied */
+		if (lsn <= PageGetLSN(page))	/* changes are applied */
 		{
 			UnlockReleaseBuffer(buffer);
 			return;
@@ -5349,7 +5349,7 @@ heap_xlog_update(XLogRecPtr lsn, XLogRecord *record, bool hot_update)
 		goto newt;
 	page = (Page) BufferGetPage(obuffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))		/* changes are applied */
+	if (lsn <= PageGetLSN(page))		/* changes are applied */
 	{
 		if (samepage)
 		{
@@ -5449,7 +5449,7 @@ newt:;
 			return;
 		page = (Page) BufferGetPage(nbuffer);
 
-		if (XLByteLE(lsn, PageGetLSN(page)))	/* changes are applied */
+		if (lsn <= PageGetLSN(page))	/* changes are applied */
 		{
 			UnlockReleaseBuffer(nbuffer);
 			if (BufferIsValid(obuffer))
@@ -5549,7 +5549,7 @@ heap_xlog_lock(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))		/* changes are applied */
+	if (lsn <= PageGetLSN(page))		/* changes are applied */
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -5612,7 +5612,7 @@ heap_xlog_inplace(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))		/* changes are applied */
+	if (lsn <= PageGetLSN(page))		/* changes are applied */
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index 9f850ab..c91408d 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -229,7 +229,7 @@ btree_xlog_insert(bool isleaf, bool ismeta,
 		{
 			page = (Page) BufferGetPage(buffer);
 
-			if (XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn <= PageGetLSN(page))
 			{
 				UnlockReleaseBuffer(buffer);
 			}
@@ -381,7 +381,7 @@ btree_xlog_split(bool onleft, bool isroot,
 			Page		lpage = (Page) BufferGetPage(lbuf);
 			BTPageOpaque lopaque = (BTPageOpaque) PageGetSpecialPointer(lpage);
 
-			if (!XLByteLE(lsn, PageGetLSN(lpage)))
+			if (lsn > PageGetLSN(lpage))
 			{
 				OffsetNumber off;
 				OffsetNumber maxoff = PageGetMaxOffsetNumber(lpage);
@@ -459,7 +459,7 @@ btree_xlog_split(bool onleft, bool isroot,
 		{
 			Page		page = (Page) BufferGetPage(buffer);
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				BTPageOpaque pageop = (BTPageOpaque) PageGetSpecialPointer(page);
 
@@ -537,7 +537,7 @@ btree_xlog_vacuum(XLogRecPtr lsn, XLogRecord *record)
 	LockBufferForCleanup(buffer);
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn <= PageGetLSN(page))
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -757,7 +757,7 @@ btree_xlog_delete(XLogRecPtr lsn, XLogRecord *record)
 		return;
 	page = (Page) BufferGetPage(buffer);
 
-	if (XLByteLE(lsn, PageGetLSN(page)))
+	if (lsn <= PageGetLSN(page))
 	{
 		UnlockReleaseBuffer(buffer);
 		return;
@@ -820,7 +820,7 @@ btree_xlog_delete_page(uint8 info, XLogRecPtr lsn, XLogRecord *record)
 		{
 			page = (Page) BufferGetPage(buffer);
 			pageop = (BTPageOpaque) PageGetSpecialPointer(page);
-			if (XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn <= PageGetLSN(page))
 			{
 				UnlockReleaseBuffer(buffer);
 			}
@@ -867,7 +867,7 @@ btree_xlog_delete_page(uint8 info, XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = (Page) BufferGetPage(buffer);
-			if (XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn <= PageGetLSN(page))
 			{
 				UnlockReleaseBuffer(buffer);
 			}
@@ -895,7 +895,7 @@ btree_xlog_delete_page(uint8 info, XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(buffer))
 			{
 				page = (Page) BufferGetPage(buffer);
-				if (XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn <= PageGetLSN(page))
 				{
 					UnlockReleaseBuffer(buffer);
 				}
diff --git a/src/backend/access/spgist/spgxlog.c b/src/backend/access/spgist/spgxlog.c
index 2a874a2..9a7aaf7 100644
--- a/src/backend/access/spgist/spgxlog.c
+++ b/src/backend/access/spgist/spgxlog.c
@@ -139,7 +139,7 @@ spgRedoAddLeaf(XLogRecPtr lsn, XLogRecord *record)
 				SpGistInitBuffer(buffer,
 					 SPGIST_LEAF | (xldata->storesNulls ? SPGIST_NULLS : 0));
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				/* insert new tuple */
 				if (xldata->offnumLeaf != xldata->offnumHeadLeaf)
@@ -187,7 +187,7 @@ spgRedoAddLeaf(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				SpGistInnerTuple tuple;
 
@@ -251,7 +251,7 @@ spgRedoMoveLeafs(XLogRecPtr lsn, XLogRecord *record)
 				SpGistInitBuffer(buffer,
 					 SPGIST_LEAF | (xldata->storesNulls ? SPGIST_NULLS : 0));
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				int			i;
 
@@ -280,7 +280,7 @@ spgRedoMoveLeafs(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				spgPageIndexMultiDelete(&state, page, toDelete, xldata->nMoves,
 						state.isBuild ? SPGIST_PLACEHOLDER : SPGIST_REDIRECT,
@@ -305,7 +305,7 @@ spgRedoMoveLeafs(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				SpGistInnerTuple tuple;
 
@@ -353,7 +353,7 @@ spgRedoAddNode(XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(buffer))
 			{
 				page = BufferGetPage(buffer);
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					PageIndexTupleDelete(page, xldata->offnum);
 					if (PageAddItem(page, (Item) innerTuple, innerTuple->size,
@@ -399,7 +399,7 @@ spgRedoAddNode(XLogRecPtr lsn, XLogRecord *record)
 				if (xldata->newPage)
 					SpGistInitBuffer(buffer, 0);
 
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					addOrReplaceTuple(page, (Item) innerTuple,
 									  innerTuple->size, xldata->offnumNew);
@@ -430,7 +430,7 @@ spgRedoAddNode(XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(buffer))
 			{
 				page = BufferGetPage(buffer);
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					SpGistDeadTuple dt;
 
@@ -495,7 +495,7 @@ spgRedoAddNode(XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(buffer))
 			{
 				page = BufferGetPage(buffer);
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					SpGistInnerTuple innerTuple;
 
@@ -552,7 +552,7 @@ spgRedoSplitTuple(XLogRecPtr lsn, XLogRecord *record)
 			if (xldata->newPage)
 				SpGistInitBuffer(buffer, 0);
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				addOrReplaceTuple(page, (Item) postfixTuple,
 								  postfixTuple->size, xldata->offnumPostfix);
@@ -574,7 +574,7 @@ spgRedoSplitTuple(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				PageIndexTupleDelete(page, xldata->offnumPrefix);
 				if (PageAddItem(page, (Item) prefixTuple, prefixTuple->size,
@@ -670,7 +670,7 @@ spgRedoPickSplit(XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(srcBuffer))
 			{
 				srcPage = BufferGetPage(srcBuffer);
-				if (!XLByteLE(lsn, PageGetLSN(srcPage)))
+				if (lsn > PageGetLSN(srcPage))
 				{
 					/*
 					 * We have it a bit easier here than in doPickSplit(),
@@ -737,7 +737,7 @@ spgRedoPickSplit(XLogRecPtr lsn, XLogRecord *record)
 			if (BufferIsValid(destBuffer))
 			{
 				destPage = (Page) BufferGetPage(destBuffer);
-				if (XLByteLE(lsn, PageGetLSN(destPage)))
+				if (lsn <= PageGetLSN(destPage))
 					destPage = NULL;	/* don't do any page updates */
 			}
 			else
@@ -790,7 +790,7 @@ spgRedoPickSplit(XLogRecPtr lsn, XLogRecord *record)
 				SpGistInitBuffer(buffer,
 								 (xldata->storesNulls ? SPGIST_NULLS : 0));
 
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				addOrReplaceTuple(page, (Item) innerTuple, innerTuple->size,
 								  xldata->offnumInner);
@@ -842,7 +842,7 @@ spgRedoPickSplit(XLogRecPtr lsn, XLogRecord *record)
 			{
 				page = BufferGetPage(buffer);
 
-				if (!XLByteLE(lsn, PageGetLSN(page)))
+				if (lsn > PageGetLSN(page))
 				{
 					SpGistInnerTuple parent;
 
@@ -900,7 +900,7 @@ spgRedoVacuumLeaf(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				spgPageIndexMultiDelete(&state, page,
 										toDead, xldata->nDead,
@@ -971,7 +971,7 @@ spgRedoVacuumRoot(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				/* The tuple numbers are in order */
 				PageIndexMultiDelete(page, toDelete, xldata->nDelete);
@@ -1017,7 +1017,7 @@ spgRedoVacuumRedirect(XLogRecPtr lsn, XLogRecord *record)
 		if (BufferIsValid(buffer))
 		{
 			page = BufferGetPage(buffer);
-			if (!XLByteLE(lsn, PageGetLSN(page)))
+			if (lsn > PageGetLSN(page))
 			{
 				SpGistPageOpaque opaque = SpGistPageGetOpaque(page);
 				int			i;
diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c
index e3fd56d..2d274cf 100644
--- a/src/backend/access/transam/clog.c
+++ b/src/backend/access/transam/clog.c
@@ -365,7 +365,7 @@ TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, i
 	{
 		int			lsnindex = GetLSNIndex(slotno, xid);
 
-		if (XLByteLT(ClogCtl->shared->group_lsn[lsnindex], lsn))
+		if (ClogCtl->shared->group_lsn[lsnindex] < lsn)
 			ClogCtl->shared->group_lsn[lsnindex] = lsn;
 	}
 }
diff --git a/src/backend/access/transam/slru.c b/src/backend/access/transam/slru.c
index b8f60d6..ec2509b 100644
--- a/src/backend/access/transam/slru.c
+++ b/src/backend/access/transam/slru.c
@@ -685,7 +685,7 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata)
 		{
 			XLogRecPtr	this_lsn = shared->group_lsn[lsnindex++];
 
-			if (XLByteLT(max_lsn, this_lsn))
+			if (max_lsn < this_lsn)
 				max_lsn = this_lsn;
 		}
 
diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c
index b33d230..432cc14 100644
--- a/src/backend/access/transam/timeline.c
+++ b/src/backend/access/transam/timeline.c
@@ -522,8 +522,8 @@ tliOfPointInHistory(XLogRecPtr ptr, List *history)
 	foreach(cell, history)
 	{
 		TimeLineHistoryEntry *tle = (TimeLineHistoryEntry *) lfirst(cell);
-		if ((XLogRecPtrIsInvalid(tle->begin) || XLByteLE(tle->begin, ptr)) &&
-			(XLogRecPtrIsInvalid(tle->end) || XLByteLT(ptr, tle->end)))
+		if ((XLogRecPtrIsInvalid(tle->begin) || tle->begin <= ptr) &&
+			(XLogRecPtrIsInvalid(tle->end) || ptr < tle->end))
 		{
 			/* found it */
 			return tle->tli;
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index 3a0b190..a7e90e4 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -1559,7 +1559,7 @@ CheckPointTwoPhase(XLogRecPtr redo_horizon)
 		PGXACT	   *pgxact = &ProcGlobal->allPgXact[gxact->pgprocno];
 
 		if (gxact->valid &&
-			XLByteLE(gxact->prepare_lsn, redo_horizon))
+			gxact->prepare_lsn <= redo_horizon)
 			xids[nxids++] = pgxact->xid;
 	}
 
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 1354aa6..12254cd 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -925,9 +925,9 @@ begin:;
 	 * affect the contents of the XLOG record, so we'll update our local copy
 	 * but not force a recomputation.
 	 */
-	if (!XLByteEQ(RedoRecPtr, Insert->RedoRecPtr))
+	if (RedoRecPtr != Insert->RedoRecPtr)
 	{
-		Assert(XLByteLT(RedoRecPtr, Insert->RedoRecPtr));
+		Assert(RedoRecPtr < Insert->RedoRecPtr);
 		RedoRecPtr = Insert->RedoRecPtr;
 
 		if (doPageWrites)
@@ -937,7 +937,7 @@ begin:;
 				if (dtbuf[i] == InvalidBuffer)
 					continue;
 				if (dtbuf_bkp[i] == false &&
-					XLByteLE(dtbuf_lsn[i], RedoRecPtr))
+					dtbuf_lsn[i] <= RedoRecPtr)
 				{
 					/*
 					 * Oops, this buffer now needs to be backed up, but we
@@ -1001,7 +1001,7 @@ begin:;
 
 		LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
 		LogwrtResult = XLogCtl->LogwrtResult;
-		if (!XLByteLE(RecPtr, LogwrtResult.Flush))
+		if (LogwrtResult.Flush < RecPtr)
 		{
 			XLogwrtRqst FlushRqst;
 
@@ -1149,9 +1149,9 @@ begin:;
 
 			SpinLockAcquire(&xlogctl->info_lck);
 			xlogctl->LogwrtResult = LogwrtResult;
-			if (XLByteLT(xlogctl->LogwrtRqst.Write, LogwrtResult.Write))
+			if (xlogctl->LogwrtRqst.Write < LogwrtResult.Write)
 				xlogctl->LogwrtRqst.Write = LogwrtResult.Write;
-			if (XLByteLT(xlogctl->LogwrtRqst.Flush, LogwrtResult.Flush))
+			if (xlogctl->LogwrtRqst.Flush < LogwrtResult.Flush)
 				xlogctl->LogwrtRqst.Flush = LogwrtResult.Flush;
 			SpinLockRelease(&xlogctl->info_lck);
 		}
@@ -1187,7 +1187,7 @@ begin:;
 
 		SpinLockAcquire(&xlogctl->info_lck);
 		/* advance global request to include new block(s) */
-		if (XLByteLT(xlogctl->LogwrtRqst.Write, WriteRqst))
+		if (xlogctl->LogwrtRqst.Write < WriteRqst)
 			xlogctl->LogwrtRqst.Write = WriteRqst;
 		/* update local result copy while I have the chance */
 		LogwrtResult = xlogctl->LogwrtResult;
@@ -1226,7 +1226,7 @@ XLogCheckBuffer(XLogRecData *rdata, bool doPageWrites,
 	*lsn = PageGetLSN(page);
 
 	if (doPageWrites &&
-		XLByteLE(PageGetLSN(page), RedoRecPtr))
+		PageGetLSN(page) <= RedoRecPtr)
 	{
 		/*
 		 * The page needs to be backed up, so set up *bkpb
@@ -1299,7 +1299,7 @@ AdvanceXLInsertBuffer(bool new_segment)
 	 * written out.
 	 */
 	OldPageRqstPtr = XLogCtl->xlblocks[nextidx];
-	if (!XLByteLE(OldPageRqstPtr, LogwrtResult.Write))
+	if (LogwrtResult.Write < OldPageRqstPtr)
 	{
 		/* nope, got work to do... */
 		XLogRecPtr	FinishedPageRqstPtr;
@@ -1312,7 +1312,7 @@ AdvanceXLInsertBuffer(bool new_segment)
 			volatile XLogCtlData *xlogctl = XLogCtl;
 
 			SpinLockAcquire(&xlogctl->info_lck);
-			if (XLByteLT(xlogctl->LogwrtRqst.Write, FinishedPageRqstPtr))
+			if (xlogctl->LogwrtRqst.Write < FinishedPageRqstPtr)
 				xlogctl->LogwrtRqst.Write = FinishedPageRqstPtr;
 			LogwrtResult = xlogctl->LogwrtResult;
 			SpinLockRelease(&xlogctl->info_lck);
@@ -1324,12 +1324,12 @@ AdvanceXLInsertBuffer(bool new_segment)
 		 * Now that we have an up-to-date LogwrtResult value, see if we still
 		 * need to write it or if someone else already did.
 		 */
-		if (!XLByteLE(OldPageRqstPtr, LogwrtResult.Write))
+		if (LogwrtResult.Write < OldPageRqstPtr)
 		{
 			/* Must acquire write lock */
 			LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
 			LogwrtResult = XLogCtl->LogwrtResult;
-			if (XLByteLE(OldPageRqstPtr, LogwrtResult.Write))
+			if (LogwrtResult.Write >= OldPageRqstPtr)
 			{
 				/* OK, someone wrote it already */
 				LWLockRelease(WALWriteLock);
@@ -1360,12 +1360,11 @@ AdvanceXLInsertBuffer(bool new_segment)
 	{
 		/* force it to a segment start point */
 		if (NewPageBeginPtr % XLogSegSize != 0)
-			XLByteAdvance(NewPageBeginPtr,
-						  XLogSegSize - NewPageBeginPtr % XLogSegSize);
+			NewPageBeginPtr += XLogSegSize - NewPageBeginPtr % XLogSegSize;
 	}
 
 	NewPageEndPtr = NewPageBeginPtr;
-	XLByteAdvance(NewPageEndPtr, XLOG_BLCKSZ);
+	NewPageEndPtr += XLOG_BLCKSZ;
 	XLogCtl->xlblocks[nextidx] = NewPageEndPtr;
 	NewPage = (XLogPageHeader) (XLogCtl->pages + nextidx * (Size) XLOG_BLCKSZ);
 
@@ -1502,14 +1501,14 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool xlog_switch)
 	 */
 	curridx = Write->curridx;
 
-	while (XLByteLT(LogwrtResult.Write, WriteRqst.Write))
+	while (LogwrtResult.Write < WriteRqst.Write)
 	{
 		/*
 		 * Make sure we're not ahead of the insert process.  This could happen
 		 * if we're passed a bogus WriteRqst.Write that is past the end of the
 		 * last page that's been initialized by AdvanceXLInsertBuffer.
 		 */
-		if (!XLByteLT(LogwrtResult.Write, XLogCtl->xlblocks[curridx]))
+		if (LogwrtResult.Write >= XLogCtl->xlblocks[curridx])
 			elog(PANIC, "xlog write request %X/%X is past end of log %X/%X",
 				 (uint32) (LogwrtResult.Write >> 32), (uint32) LogwrtResult.Write,
 				 (uint32) (XLogCtl->xlblocks[curridx] >> 32),
@@ -1517,7 +1516,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool xlog_switch)
 
 		/* Advance LogwrtResult.Write to end of current buffer page */
 		LogwrtResult.Write = XLogCtl->xlblocks[curridx];
-		ispartialpage = XLByteLT(WriteRqst.Write, LogwrtResult.Write);
+		ispartialpage = WriteRqst.Write < LogwrtResult.Write;
 
 		if (!XLByteInPrevSeg(LogwrtResult.Write, openLogSegNo))
 		{
@@ -1559,7 +1558,7 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool xlog_switch)
 		 * contiguous in memory), or if we are at the end of the logfile
 		 * segment.
 		 */
-		last_iteration = !XLByteLT(LogwrtResult.Write, WriteRqst.Write);
+		last_iteration = WriteRqst.Write <= LogwrtResult.Write;
 
 		finishing_seg = !ispartialpage &&
 			(startoffset + npages * XLOG_BLCKSZ) >= XLogSegSize;
@@ -1670,8 +1669,9 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool xlog_switch)
 	/*
 	 * If asked to flush, do so
 	 */
-	if (XLByteLT(LogwrtResult.Flush, WriteRqst.Flush) &&
-		XLByteLT(LogwrtResult.Flush, LogwrtResult.Write))
+	if (LogwrtResult.Flush < WriteRqst.Flush &&
+		LogwrtResult.Flush < LogwrtResult.Write)
+
 	{
 		/*
 		 * Could get here without iterating above loop, in which case we might
@@ -1713,9 +1713,9 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible, bool xlog_switch)
 
 		SpinLockAcquire(&xlogctl->info_lck);
 		xlogctl->LogwrtResult = LogwrtResult;
-		if (XLByteLT(xlogctl->LogwrtRqst.Write, LogwrtResult.Write))
+		if (xlogctl->LogwrtRqst.Write < LogwrtResult.Write)
 			xlogctl->LogwrtRqst.Write = LogwrtResult.Write;
-		if (XLByteLT(xlogctl->LogwrtRqst.Flush, LogwrtResult.Flush))
+		if (xlogctl->LogwrtRqst.Flush < LogwrtResult.Flush)
 			xlogctl->LogwrtRqst.Flush = LogwrtResult.Flush;
 		SpinLockRelease(&xlogctl->info_lck);
 	}
@@ -1738,7 +1738,7 @@ XLogSetAsyncXactLSN(XLogRecPtr asyncXactLSN)
 	SpinLockAcquire(&xlogctl->info_lck);
 	LogwrtResult = xlogctl->LogwrtResult;
 	sleeping = xlogctl->WalWriterSleeping;
-	if (XLByteLT(xlogctl->asyncXactLSN, asyncXactLSN))
+	if (xlogctl->asyncXactLSN < asyncXactLSN)
 		xlogctl->asyncXactLSN = asyncXactLSN;
 	SpinLockRelease(&xlogctl->info_lck);
 
@@ -1753,7 +1753,7 @@ XLogSetAsyncXactLSN(XLogRecPtr asyncXactLSN)
 		WriteRqstPtr -= WriteRqstPtr % XLOG_BLCKSZ;
 
 		/* if we have already flushed that far, we're done */
-		if (XLByteLE(WriteRqstPtr, LogwrtResult.Flush))
+		if (WriteRqstPtr <= LogwrtResult.Flush)
 			return;
 	}
 
@@ -1779,7 +1779,7 @@ static void
 UpdateMinRecoveryPoint(XLogRecPtr lsn, bool force)
 {
 	/* Quick check using our local copy of the variable */
-	if (!updateMinRecoveryPoint || (!force && XLByteLE(lsn, minRecoveryPoint)))
+	if (!updateMinRecoveryPoint || (!force && lsn <= minRecoveryPoint))
 		return;
 
 	LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
@@ -1795,7 +1795,7 @@ UpdateMinRecoveryPoint(XLogRecPtr lsn, bool force)
 	 */
 	if (minRecoveryPoint == 0)
 		updateMinRecoveryPoint = false;
-	else if (force || XLByteLT(minRecoveryPoint, lsn))
+	else if (force || minRecoveryPoint < lsn)
 	{
 		/* use volatile pointer to prevent code rearrangement */
 		volatile XLogCtlData *xlogctl = XLogCtl;
@@ -1820,7 +1820,7 @@ UpdateMinRecoveryPoint(XLogRecPtr lsn, bool force)
 		newMinRecoveryPointTLI = xlogctl->replayEndTLI;
 		SpinLockRelease(&xlogctl->info_lck);
 
-		if (!force && XLByteLT(newMinRecoveryPoint, lsn))
+		if (!force && newMinRecoveryPoint < lsn)
 			elog(WARNING,
 			   "xlog min recovery request %X/%X is past current point %X/%X",
 				 (uint32) (lsn >> 32) , (uint32) lsn,
@@ -1828,7 +1828,7 @@ UpdateMinRecoveryPoint(XLogRecPtr lsn, bool force)
 				 (uint32) newMinRecoveryPoint);
 
 		/* update control file */
-		if (XLByteLT(ControlFile->minRecoveryPoint, newMinRecoveryPoint))
+		if (ControlFile->minRecoveryPoint < newMinRecoveryPoint)
 		{
 			ControlFile->minRecoveryPoint = newMinRecoveryPoint;
 			ControlFile->minRecoveryPointTLI = newMinRecoveryPointTLI;
@@ -1872,7 +1872,7 @@ XLogFlush(XLogRecPtr record)
 	}
 
 	/* Quick exit if already known flushed */
-	if (XLByteLE(record, LogwrtResult.Flush))
+	if (record <= LogwrtResult.Flush)
 		return;
 
 #ifdef WAL_DEBUG
@@ -1907,13 +1907,13 @@ XLogFlush(XLogRecPtr record)
 
 		/* read LogwrtResult and update local state */
 		SpinLockAcquire(&xlogctl->info_lck);
-		if (XLByteLT(WriteRqstPtr, xlogctl->LogwrtRqst.Write))
+		if (WriteRqstPtr < xlogctl->LogwrtRqst.Write)
 			WriteRqstPtr = xlogctl->LogwrtRqst.Write;
 		LogwrtResult = xlogctl->LogwrtResult;
 		SpinLockRelease(&xlogctl->info_lck);
 
 		/* done already? */
-		if (XLByteLE(record, LogwrtResult.Flush))
+		if (record <= LogwrtResult.Flush)
 			break;
 
 		/*
@@ -1935,7 +1935,7 @@ XLogFlush(XLogRecPtr record)
 
 		/* Got the lock; recheck whether request is satisfied */
 		LogwrtResult = XLogCtl->LogwrtResult;
-		if (XLByteLE(record, LogwrtResult.Flush))
+		if (record <= LogwrtResult.Flush)
 		{
 			LWLockRelease(WALWriteLock);
 			break;
@@ -2009,7 +2009,7 @@ XLogFlush(XLogRecPtr record)
 	 * calls from bufmgr.c are not within critical sections and so we will not
 	 * force a restart for a bad LSN on a data page.
 	 */
-	if (XLByteLT(LogwrtResult.Flush, record))
+	if (LogwrtResult.Flush < record)
 		elog(ERROR,
 		"xlog flush request %X/%X is not satisfied --- flushed only to %X/%X",
 			 (uint32) (record >> 32), (uint32) record,
@@ -2059,7 +2059,7 @@ XLogBackgroundFlush(void)
 	WriteRqstPtr -= WriteRqstPtr % XLOG_BLCKSZ;
 
 	/* if we have already flushed that far, consider async commit records */
-	if (XLByteLE(WriteRqstPtr, LogwrtResult.Flush))
+	if (WriteRqstPtr <= LogwrtResult.Flush)
 	{
 		/* use volatile pointer to prevent code rearrangement */
 		volatile XLogCtlData *xlogctl = XLogCtl;
@@ -2075,7 +2075,7 @@ XLogBackgroundFlush(void)
 	 * holding an open file handle to a logfile that's no longer in use,
 	 * preventing the file from being deleted.
 	 */
-	if (XLByteLE(WriteRqstPtr, LogwrtResult.Flush))
+	if (WriteRqstPtr <= LogwrtResult.Flush)
 	{
 		if (openLogFile >= 0)
 		{
@@ -2100,7 +2100,7 @@ XLogBackgroundFlush(void)
 	/* now wait for the write lock */
 	LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
 	LogwrtResult = XLogCtl->LogwrtResult;
-	if (!XLByteLE(WriteRqstPtr, LogwrtResult.Flush))
+	if (WriteRqstPtr > LogwrtResult.Flush)
 	{
 		XLogwrtRqst WriteRqst;
 
@@ -2136,7 +2136,7 @@ XLogNeedsFlush(XLogRecPtr record)
 	if (RecoveryInProgress())
 	{
 		/* Quick exit if already known updated */
-		if (XLByteLE(record, minRecoveryPoint) || !updateMinRecoveryPoint)
+		if (record <= minRecoveryPoint || !updateMinRecoveryPoint)
 			return false;
 
 		/*
@@ -2159,14 +2159,14 @@ XLogNeedsFlush(XLogRecPtr record)
 			updateMinRecoveryPoint = false;
 
 		/* check again */
-		if (XLByteLE(record, minRecoveryPoint) || !updateMinRecoveryPoint)
+		if (record <= minRecoveryPoint || !updateMinRecoveryPoint)
 			return false;
 		else
 			return true;
 	}
 
 	/* Quick exit if already known flushed */
-	if (XLByteLE(record, LogwrtResult.Flush))
+	if (record <= LogwrtResult.Flush)
 		return false;
 
 	/* read LogwrtResult and update local state */
@@ -2180,7 +2180,7 @@ XLogNeedsFlush(XLogRecPtr record)
 	}
 
 	/* check again */
-	if (XLByteLE(record, LogwrtResult.Flush))
+	if (record <= LogwrtResult.Flush)
 		return false;
 
 	return true;
@@ -3483,7 +3483,7 @@ retry:
 		do
 		{
 			/* Calculate pointer to beginning of next page */
-			XLByteAdvance(pagelsn, XLOG_BLCKSZ);
+			pagelsn += XLOG_BLCKSZ;
 			/* Wait for the next page to become available */
 			if (!XLogPageRead(&pagelsn, emode, false, false))
 				return NULL;
@@ -3668,7 +3668,7 @@ ValidXLogPageHeader(XLogPageHeader hdr, int emode, bool segmentonly)
 		return false;
 	}
 
-	if (!XLByteEQ(hdr->xlp_pageaddr, recaddr))
+	if (hdr->xlp_pageaddr != recaddr)
 	{
 		ereport(emode_for_corrupt_record(emode, recaddr),
 				(errmsg("unexpected pageaddr %X/%X in log segment %s, offset %u",
@@ -3779,7 +3779,7 @@ ValidXLogRecordHeader(XLogRecPtr *RecPtr, XLogRecord *record, int emode,
 		 * We can't exactly verify the prev-link, but surely it should be less
 		 * than the record's own address.
 		 */
-		if (!XLByteLT(record->xl_prev, *RecPtr))
+		if (!(record->xl_prev < *RecPtr))
 		{
 			ereport(emode_for_corrupt_record(emode, *RecPtr),
 					(errmsg("record with incorrect prev-link %X/%X at %X/%X",
@@ -3795,7 +3795,7 @@ ValidXLogRecordHeader(XLogRecPtr *RecPtr, XLogRecord *record, int emode,
 		 * check guards against torn WAL pages where a stale but valid-looking
 		 * WAL record starts on a sector boundary.
 		 */
-		if (!XLByteEQ(record->xl_prev, ReadRecPtr))
+		if (record->xl_prev != ReadRecPtr)
 		{
 			ereport(emode_for_corrupt_record(emode, *RecPtr),
 					(errmsg("record with incorrect prev-link %X/%X at %X/%X",
@@ -3868,7 +3868,7 @@ rescanLatestTimeLine(void)
 	 * next timeline was forked off from it *after* the current recovery
 	 * location.
 	 */
-	if (XLByteLT(currentTle->end, EndRecPtr))
+	if (currentTle->end < EndRecPtr)
 	{
 		ereport(LOG,
 				(errmsg("new timeline %u forked off current database system timeline %u before current recovery point %X/%X",
@@ -5445,7 +5445,7 @@ StartupXLOG(void)
 			 * backup_label around that references a WAL segment that's
 			 * already been archived.
 			 */
-			if (XLByteLT(checkPoint.redo, checkPointLoc))
+			if (checkPoint.redo < checkPointLoc)
 			{
 				if (!ReadRecord(&(checkPoint.redo), LOG, false))
 					ereport(FATAL,
@@ -5546,7 +5546,7 @@ StartupXLOG(void)
 
 	RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;
 
-	if (XLByteLT(RecPtr, checkPoint.redo))
+	if (RecPtr < checkPoint.redo)
 		ereport(PANIC,
 				(errmsg("invalid redo in checkpoint record")));
 
@@ -5555,7 +5555,7 @@ StartupXLOG(void)
 	 * have been a clean shutdown and we did not have a recovery.conf file,
 	 * then assume no recovery needed.
 	 */
-	if (XLByteLT(checkPoint.redo, RecPtr))
+	if (checkPoint.redo < RecPtr)
 	{
 		if (wasShutdown)
 			ereport(PANIC,
@@ -5600,7 +5600,7 @@ StartupXLOG(void)
 		if (InArchiveRecovery)
 		{
 			/* initialize minRecoveryPoint if not set yet */
-			if (XLByteLT(ControlFile->minRecoveryPoint, checkPoint.redo))
+			if (ControlFile->minRecoveryPoint < checkPoint.redo)
 			{
 				ControlFile->minRecoveryPoint = checkPoint.redo;
 				ControlFile->minRecoveryPointTLI = checkPoint.ThisTimeLineID;
@@ -5803,7 +5803,7 @@ StartupXLOG(void)
 		 * Find the first record that logically follows the checkpoint --- it
 		 * might physically precede it, though.
 		 */
-		if (XLByteLT(checkPoint.redo, RecPtr))
+		if (checkPoint.redo < RecPtr)
 		{
 			/* back up to find the record */
 			record = ReadRecord(&(checkPoint.redo), PANIC, false);
@@ -5964,7 +5964,7 @@ StartupXLOG(void)
 				error_context_stack = errcallback.previous;
 
 				if (!XLogRecPtrIsInvalid(ControlFile->backupEndPoint) &&
-					XLByteLE(ControlFile->backupEndPoint, EndRecPtr))
+					ControlFile->backupEndPoint <= EndRecPtr)
 				{
 					/*
 					 * We have reached the end of base backup, the point where
@@ -6065,7 +6065,7 @@ StartupXLOG(void)
 	 * advanced beyond the WAL we processed.
 	 */
 	if (InRecovery &&
-		(XLByteLT(EndOfLog, minRecoveryPoint) ||
+		(EndOfLog < minRecoveryPoint ||
 		 !XLogRecPtrIsInvalid(ControlFile->backupStartPoint)))
 	{
 		if (reachedStopPoint)
@@ -6398,7 +6398,7 @@ CheckRecoveryConsistency(void)
 	 * consistent yet.
 	 */
 	if (!reachedConsistency && !ControlFile->backupEndRequired &&
-		XLByteLE(minRecoveryPoint, XLogCtl->lastReplayedEndRecPtr) &&
+		minRecoveryPoint <= XLogCtl->lastReplayedEndRecPtr &&
 		XLogRecPtrIsInvalid(ControlFile->backupStartPoint))
 	{
 		/*
@@ -6706,7 +6706,7 @@ GetRedoRecPtr(void)
 	volatile XLogCtlData *xlogctl = XLogCtl;
 
 	SpinLockAcquire(&xlogctl->info_lck);
-	Assert(XLByteLE(RedoRecPtr, xlogctl->Insert.RedoRecPtr));
+	Assert(RedoRecPtr <= xlogctl->Insert.RedoRecPtr);
 	RedoRecPtr = xlogctl->Insert.RedoRecPtr;
 	SpinLockRelease(&xlogctl->info_lck);
 
@@ -7315,7 +7315,7 @@ CreateCheckPoint(int flags)
 	 * We now have ProcLastRecPtr = start of actual checkpoint record, recptr
 	 * = end of actual checkpoint record.
 	 */
-	if (shutdown && !XLByteEQ(checkPoint.redo, ProcLastRecPtr))
+	if (shutdown && checkPoint.redo != ProcLastRecPtr)
 		ereport(PANIC,
 				(errmsg("concurrent transaction log activity while database system is shutting down")));
 
@@ -7548,7 +7548,7 @@ CreateRestartPoint(int flags)
 	 * side-effect.
 	 */
 	if (XLogRecPtrIsInvalid(lastCheckPointRecPtr) ||
-		XLByteLE(lastCheckPoint.redo, ControlFile->checkPointCopy.redo))
+		lastCheckPoint.redo <= ControlFile->checkPointCopy.redo)
 	{
 		ereport(DEBUG2,
 				(errmsg("skipping restartpoint, already performed at %X/%X",
@@ -7611,7 +7611,7 @@ CreateRestartPoint(int flags)
 	 */
 	LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
 	if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&
-		XLByteLT(ControlFile->checkPointCopy.redo, lastCheckPoint.redo))
+		ControlFile->checkPointCopy.redo < lastCheckPoint.redo)
 	{
 		ControlFile->prevCheckPoint = ControlFile->checkPoint;
 		ControlFile->checkPoint = lastCheckPointRecPtr;
@@ -7931,7 +7931,7 @@ checkTimeLineSwitch(XLogRecPtr lsn, TimeLineID newTLI)
 	 * new timeline.
 	 */
 	if (!XLogRecPtrIsInvalid(minRecoveryPoint) &&
-		XLByteLT(lsn, minRecoveryPoint) &&
+		lsn < minRecoveryPoint &&
 		newTLI > minRecoveryPointTLI)
 		ereport(PANIC,
 				(errmsg("unexpected timeline ID %u in checkpoint record, before reaching minimum recovery point %X/%X on timeline %u",
@@ -8130,7 +8130,7 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
 
 		memcpy(&startpoint, XLogRecGetData(record), sizeof(startpoint));
 
-		if (XLByteEQ(ControlFile->backupStartPoint, startpoint))
+		if (ControlFile->backupStartPoint == startpoint)
 		{
 			/*
 			 * We have reached the end of base backup, the point where
@@ -8143,7 +8143,7 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
 
 			LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
 
-			if (XLByteLT(ControlFile->minRecoveryPoint, lsn))
+			if (ControlFile->minRecoveryPoint < lsn)
 			{
 				ControlFile->minRecoveryPoint = lsn;
 				ControlFile->minRecoveryPointTLI = ThisTimeLineID;
@@ -8178,7 +8178,7 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
 		 */
 		minRecoveryPoint = ControlFile->minRecoveryPoint;
 		minRecoveryPointTLI = ControlFile->minRecoveryPointTLI;
-		if (minRecoveryPoint != 0 && XLByteLT(minRecoveryPoint, lsn))
+		if (minRecoveryPoint != 0 && minRecoveryPoint < lsn)
 		{
 			ControlFile->minRecoveryPoint = lsn;
 			ControlFile->minRecoveryPointTLI = ThisTimeLineID;
@@ -8206,7 +8206,7 @@ xlog_redo(XLogRecPtr lsn, XLogRecord *record)
 		if (!fpw)
 		{
 			SpinLockAcquire(&xlogctl->info_lck);
-			if (XLByteLT(xlogctl->lastFpwDisableRecPtr, ReadRecPtr))
+			if (xlogctl->lastFpwDisableRecPtr < ReadRecPtr)
 				xlogctl->lastFpwDisableRecPtr = ReadRecPtr;
 			SpinLockRelease(&xlogctl->info_lck);
 		}
@@ -8571,7 +8571,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, char **labelfile)
 				recptr = xlogctl->lastFpwDisableRecPtr;
 				SpinLockRelease(&xlogctl->info_lck);
 
-				if (!checkpointfpw || XLByteLE(startpoint, recptr))
+				if (!checkpointfpw || startpoint <= recptr)
 					ereport(ERROR,
 						  (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 						   errmsg("WAL generated with full_page_writes=off was replayed "
@@ -8603,7 +8603,7 @@ do_pg_start_backup(const char *backupidstr, bool fast, char **labelfile)
 			 * either because only few buffers have been dirtied yet.
 			 */
 			LWLockAcquire(WALInsertLock, LW_SHARED);
-			if (XLByteLT(XLogCtl->Insert.lastBackupStart, startpoint))
+			if (XLogCtl->Insert.lastBackupStart < startpoint)
 			{
 				XLogCtl->Insert.lastBackupStart = startpoint;
 				gotUniqueStartpoint = true;
@@ -8920,7 +8920,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive)
 		recptr = xlogctl->lastFpwDisableRecPtr;
 		SpinLockRelease(&xlogctl->info_lck);
 
-		if (XLByteLE(startpoint, recptr))
+		if (startpoint <= recptr)
 			ereport(ERROR,
 					(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
 			   errmsg("WAL generated with full_page_writes=off was replayed "
@@ -9123,7 +9123,7 @@ GetStandbyFlushRecPtr(void)
 	receivePtr = GetWalRcvWriteRecPtr(NULL, NULL);
 	replayPtr = GetXLogReplayRecPtr();
 
-	if (XLByteLT(receivePtr, replayPtr))
+	if (receivePtr < replayPtr)
 		return replayPtr;
 	else
 		return receivePtr;
@@ -9404,7 +9404,7 @@ XLogPageRead(XLogRecPtr *RecPtr, int emode, bool fetching_ckpt,
 retry:
 	/* See if we need to retrieve more data */
 	if (readFile < 0 ||
-		(readSource == XLOG_FROM_STREAM && !XLByteLT(*RecPtr, receivedUpto)))
+		(readSource == XLOG_FROM_STREAM && receivedUpto <= *RecPtr))
 	{
 		if (StandbyMode)
 		{
@@ -9774,17 +9774,17 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,
 				 * When we are behind, XLogReceiptTime will not advance, so the
 				 * grace time allotted to conflicting queries will decrease.
 				 */
-				if (XLByteLT(RecPtr, receivedUpto))
+				if (RecPtr < receivedUpto)
 					havedata = true;
 				else
 				{
 					XLogRecPtr	latestChunkStart;
 
 					receivedUpto = GetWalRcvWriteRecPtr(&latestChunkStart, &receiveTLI);
-					if (XLByteLT(RecPtr, receivedUpto) && receiveTLI == curFileTLI)
+					if (RecPtr < receivedUpto && receiveTLI == curFileTLI)
 					{
 						havedata = true;
-						if (!XLByteLT(RecPtr, latestChunkStart))
+						if (latestChunkStart <= RecPtr)
 						{
 							XLogReceiptTime = GetCurrentTimestamp();
 							SetCurrentChunkStartTime(XLogReceiptTime);
@@ -9886,7 +9886,7 @@ emode_for_corrupt_record(int emode, XLogRecPtr RecPtr)
 
 	if (readSource == XLOG_FROM_PG_XLOG && emode == LOG)
 	{
-		if (XLByteEQ(RecPtr, lastComplaint))
+		if (RecPtr == lastComplaint)
 			emode = DEBUG1;
 		else
 			lastComplaint = RecPtr;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 634ce3f..578ee9b 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -603,7 +603,7 @@ nextval_internal(Oid relid)
 	{
 		XLogRecPtr	redoptr = GetRedoRecPtr();
 
-		if (XLByteLE(PageGetLSN(page), redoptr))
+		if (PageGetLSN(page) <= redoptr)
 		{
 			/* last update of seq was before checkpoint */
 			fetch = log = fetch + SEQ_LOG_VALS;
diff --git a/src/backend/replication/syncrep.c b/src/backend/replication/syncrep.c
index a61725e..b2908a7 100644
--- a/src/backend/replication/syncrep.c
+++ b/src/backend/replication/syncrep.c
@@ -120,7 +120,7 @@ SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
 	 * be a low cost check.
 	 */
 	if (!WalSndCtl->sync_standbys_defined ||
-		XLByteLE(XactCommitLSN, WalSndCtl->lsn[mode]))
+		XactCommitLSN <= WalSndCtl->lsn[mode])
 	{
 		LWLockRelease(SyncRepLock);
 		return;
@@ -287,7 +287,7 @@ SyncRepQueueInsert(int mode)
		 * Stop at the queue element that we should insert after to ensure
		 * the queue is ordered by LSN.
 		 */
-		if (XLByteLT(proc->waitLSN, MyProc->waitLSN))
+		if (proc->waitLSN < MyProc->waitLSN)
 			break;
 
 		proc = (PGPROC *) SHMQueuePrev(&(WalSndCtl->SyncRepQueue[mode]),
@@ -428,12 +428,12 @@ SyncRepReleaseWaiters(void)
 	 * Set the lsn first so that when we wake backends they will release up to
 	 * this location.
 	 */
-	if (XLByteLT(walsndctl->lsn[SYNC_REP_WAIT_WRITE], MyWalSnd->write))
+	if (walsndctl->lsn[SYNC_REP_WAIT_WRITE] < MyWalSnd->write)
 	{
 		walsndctl->lsn[SYNC_REP_WAIT_WRITE] = MyWalSnd->write;
 		numwrite = SyncRepWakeQueue(false, SYNC_REP_WAIT_WRITE);
 	}
-	if (XLByteLT(walsndctl->lsn[SYNC_REP_WAIT_FLUSH], MyWalSnd->flush))
+	if (walsndctl->lsn[SYNC_REP_WAIT_FLUSH] < MyWalSnd->flush)
 	{
 		walsndctl->lsn[SYNC_REP_WAIT_FLUSH] = MyWalSnd->flush;
 		numflush = SyncRepWakeQueue(false, SYNC_REP_WAIT_FLUSH);
@@ -543,7 +543,7 @@ SyncRepWakeQueue(bool all, int mode)
 		/*
 		 * Assume the queue is ordered by LSN
 		 */
-		if (!all && XLByteLT(walsndctl->lsn[mode], proc->waitLSN))
+		if (!all && walsndctl->lsn[mode] < proc->waitLSN)
 			return numprocs;
 
 		/*
@@ -640,7 +640,7 @@ SyncRepQueueIsOrderedByLSN(int mode)
 		 * Check the queue is ordered by LSN and that multiple procs don't
 		 * have matching LSNs
 		 */
-		if (XLByteLE(proc->waitLSN, lastLSN))
+		if (proc->waitLSN <= lastLSN)
 			return false;
 
 		lastLSN = proc->waitLSN;
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 303edb7..30f675b 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -914,7 +914,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 		}
 
 		/* Update state for write */
-		XLByteAdvance(recptr, byteswritten);
+		recptr += byteswritten;
 
 		recvOff += byteswritten;
 		nbytes -= byteswritten;
@@ -933,7 +933,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)
 static void
 XLogWalRcvFlush(bool dying)
 {
-	if (XLByteLT(LogstreamResult.Flush, LogstreamResult.Write))
+	if (LogstreamResult.Flush < LogstreamResult.Write)
 	{
 		/* use volatile pointer to prevent code rearrangement */
 		volatile WalRcvData *walrcv = WalRcv;
@@ -944,7 +944,7 @@ XLogWalRcvFlush(bool dying)
 
 		/* Update shared-memory status */
 		SpinLockAcquire(&walrcv->mutex);
-		if (XLByteLT(walrcv->receivedUpto, LogstreamResult.Flush))
+		if (walrcv->receivedUpto < LogstreamResult.Flush)
 		{
 			walrcv->latestChunkStart = walrcv->receivedUpto;
 			walrcv->receivedUpto = LogstreamResult.Flush;
@@ -1016,8 +1016,8 @@ XLogWalRcvSendReply(bool force, bool requestReply)
 	 * probably OK.
 	 */
 	if (!force
-		&& XLByteEQ(writePtr, LogstreamResult.Write)
-		&& XLByteEQ(flushPtr, LogstreamResult.Flush)
+		&& writePtr == LogstreamResult.Write
+		&& flushPtr == LogstreamResult.Flush
 		&& !TimestampDifferenceExceeds(sendTime, now,
 									   wal_receiver_status_interval * 1000))
 		return;
@@ -1126,7 +1126,7 @@ ProcessWalSndrMessage(XLogRecPtr walEnd, TimestampTz sendTime)
 
 	/* Update shared-memory status */
 	SpinLockAcquire(&walrcv->mutex);
-	if (XLByteLT(walrcv->latestWalEnd, walEnd))
+	if (walrcv->latestWalEnd < walEnd)
 		walrcv->latestWalEndTime = sendTime;
 	walrcv->latestWalEnd = walEnd;
 	walrcv->lastMsgSendTime = sendTime;
diff --git a/src/backend/replication/walreceiverfuncs.c b/src/backend/replication/walreceiverfuncs.c
index a8ccfc6..e38280c 100644
--- a/src/backend/replication/walreceiverfuncs.c
+++ b/src/backend/replication/walreceiverfuncs.c
@@ -326,7 +326,7 @@ GetReplicationApplyDelay(void)
 
 	replayPtr = GetXLogReplayRecPtr();
 
-	if (XLByteEQ(receivePtr, replayPtr))
+	if (receivePtr == replayPtr)
 		return 0;
 
 	TimestampDifference(GetCurrentChunkReplayStartTime(),
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index b450b14..51de6b8 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -466,7 +466,7 @@ StartReplication(StartReplicationCmd *cmd)
 			 * WAL segment.
 			 */
 			if (!XLogRecPtrIsInvalid(switchpoint) &&
-				XLByteLT(switchpoint, cmd->startpoint))
+				switchpoint < cmd->startpoint)
 			{
 				ereport(ERROR,
 						(errmsg("requested starting point %X/%X on timeline %u is not in this server's history",
@@ -492,7 +492,7 @@ StartReplication(StartReplicationCmd *cmd)
 
 	/* If there is nothing to stream, don't even enter COPY mode */
 	if (!sendTimeLineIsHistoric ||
-		XLByteLT(cmd->startpoint, sendTimeLineValidUpto))
+		cmd->startpoint < sendTimeLineValidUpto)
 	{
 		XLogRecPtr FlushPtr;
 		/*
@@ -520,7 +520,7 @@ StartReplication(StartReplicationCmd *cmd)
 			FlushPtr = GetStandbyFlushRecPtr();
 		else
 			FlushPtr = GetFlushRecPtr();
-		if (XLByteLT(FlushPtr, cmd->startpoint))
+		if (FlushPtr < cmd->startpoint)
 		{
 			ereport(ERROR,
 					(errmsg("requested starting point %X/%X is ahead of the WAL flush position of this server %X/%X",
@@ -1249,7 +1249,7 @@ retry:
 		}
 
 		/* Update state for read */
-		XLByteAdvance(recptr, readbytes);
+		recptr += readbytes;
 
 		sendOff += readbytes;
 		nbytes -= readbytes;
@@ -1382,11 +1382,11 @@ XLogSend(bool *caughtup)
 
 			history = readTimeLineHistory(targetTLI);
 			sendTimeLineValidUpto = tliSwitchPoint(sendTimeLine, history);
-			Assert(XLByteLE(sentPtr, sendTimeLineValidUpto));
+			Assert(sentPtr <= sendTimeLineValidUpto);
 			list_free_deep(history);
 
-			/* the switchpoint should be >= current send pointer */
-			if (!XLByteLE(sentPtr, sendTimeLineValidUpto))
+			/* the current send pointer should be <= the switchpoint */
+			if (!(sentPtr <= sendTimeLineValidUpto))
 				elog(ERROR, "server switched off timeline %u at %X/%X, but walsender already streamed up to %X/%X",
 					 sendTimeLine,
 					 (uint32) (sendTimeLineValidUpto >> 32),
@@ -1402,7 +1402,7 @@ XLogSend(bool *caughtup)
 	 * If this is a historic timeline and we've reached the point where we
 	 * forked to the next timeline, stop streaming.
 	 */
-	if (sendTimeLineIsHistoric && XLByteLE(sendTimeLineValidUpto, sentPtr))
+	if (sendTimeLineIsHistoric && sendTimeLineValidUpto <= sentPtr)
 	{
 		/* close the current file. */
 		if (sendFile >= 0)
@@ -1421,13 +1421,13 @@ XLogSend(bool *caughtup)
 	 * Stream up to the point known to be flushed to disk, or to the end of
 	 * this timeline, whichever comes first.
 	 */
-	if (sendTimeLineIsHistoric && XLByteLT(sendTimeLineValidUpto, FlushPtr))
+	if (sendTimeLineIsHistoric && sendTimeLineValidUpto < FlushPtr)
 		SendRqstPtr = sendTimeLineValidUpto;
 	else
 		SendRqstPtr = FlushPtr;
 
-	Assert(XLByteLE(sentPtr, SendRqstPtr));
-	if (XLByteLE(SendRqstPtr, sentPtr))
+	Assert(sentPtr <= SendRqstPtr);
+	if (SendRqstPtr <= sentPtr)
 	{
 		*caughtup = true;
 		return;
@@ -1446,10 +1446,10 @@ XLogSend(bool *caughtup)
 	 */
 	startptr = sentPtr;
 	endptr = startptr;
-	XLByteAdvance(endptr, MAX_SEND_SIZE);
+	endptr += MAX_SEND_SIZE;
 
 	/* if we went beyond SendRqstPtr, back off */
-	if (XLByteLE(SendRqstPtr, endptr))
+	if (SendRqstPtr <= endptr)
 	{
 		endptr = SendRqstPtr;
 		if (sendTimeLineIsHistoric)
@@ -1923,7 +1923,7 @@ GetOldestWALSendPointer(void)
 		if (recptr.xlogid == 0 && recptr.xrecoff == 0)
 			continue;
 
-		if (!found || XLByteLT(recptr, oldest))
+		if (!found || recptr < oldest)
 			oldest = recptr;
 		found = true;
 	}
diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c
index 8502d56..5a1c598 100644
--- a/src/bin/pg_basebackup/receivelog.c
+++ b/src/bin/pg_basebackup/receivelog.c
@@ -636,7 +636,7 @@ ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos, uint32 timeline,
 			/* Write was successful, advance our position */
 			bytes_written += bytes_to_write;
 			bytes_left -= bytes_to_write;
-			XLByteAdvance(blockpos, bytes_to_write);
+			blockpos += bytes_to_write;
 			xlogoff += bytes_to_write;
 
 			/* Did we reach the end of a WAL segment? */
diff --git a/src/include/access/xlogdefs.h b/src/include/access/xlogdefs.h
index 153d0de..f3acb2f 100644
--- a/src/include/access/xlogdefs.h
+++ b/src/include/access/xlogdefs.h
@@ -29,20 +29,6 @@ typedef uint64 XLogRecPtr;
 #define XLogRecPtrIsInvalid(r)	((r) == InvalidXLogRecPtr)
 
 /*
- * Macros for comparing XLogRecPtrs
- */
-#define XLByteLT(a, b)		((a) < (b))
-#define XLByteLE(a, b)		((a) <= (b))
-#define XLByteEQ(a, b)		((a) == (b))
-
-
-/*
- * Macro for advancing a record pointer by the specified number of bytes.
- */
-#define XLByteAdvance(recptr, nbytes)						\
-		(recptr) += nbytes									\
-
-/*
  * XLogSegNo - physical log file sequence number.
  */
 typedef uint64 XLogSegNo;
-- 
1.7.12.289.g0ce9864.dirty

0003-Remove-unused-NextLogPage-macro-in-xlog_inernal.h.patch (text/x-patch; charset=us-ascii)
From ba88f15f5ddcb171eda4ba7ee6098e0337c6fa6b Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 17 Dec 2012 23:30:37 +0100
Subject: [PATCH 3/3] Remove unused NextLogPage macro in xlog_inernal.h

It was unused since 061e7efb1b4c5b8a5d02122b7780531b8d5bf23d removed the last
user.
---
 src/include/access/xlog_internal.h | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/src/include/access/xlog_internal.h b/src/include/access/xlog_internal.h
index 0acebd4..1443f96 100644
--- a/src/include/access/xlog_internal.h
+++ b/src/include/access/xlog_internal.h
@@ -120,17 +120,6 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;
 		(dest) = (segno) * XLOG_SEG_SIZE + (offset)
 
 /*
- * Macros for manipulating XLOG pointers
- */
-
-/* Align a record pointer to next page */
-#define NextLogPage(recptr) \
-	do {	\
-		if ((recptr) % XLOG_BLCKSZ != 0)	\
-			XLByteAdvance(recptr, (XLOG_BLCKSZ - (recptr) % XLOG_BLCKSZ)); \
-	} while (0)
-
-/*
  * Compute ID and segment from an XLogRecPtr.
  *
  * For XLByteToSeg, do the computation at face value.  For XLByteToPrevSeg,
-- 
1.7.12.289.g0ce9864.dirty
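
(For reference: the NextLogPage macro removed above simply rounded an LSN up to the start of the next WAL page. A minimal standalone sketch of the same arithmetic follows — plain C, not PostgreSQL source; XLOG_BLCKSZ is assumed to be the default 8192:)

#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;
#define XLOG_BLCKSZ 8192

/* Round an LSN up to the next page boundary, as NextLogPage did;
 * an already-aligned pointer is left unchanged. */
static XLogRecPtr
next_log_page(XLogRecPtr recptr)
{
	if (recptr % XLOG_BLCKSZ != 0)
		recptr += XLOG_BLCKSZ - recptr % XLOG_BLCKSZ;
	return recptr;
}

int
main(void)
{
	printf("%llu\n", (unsigned long long) next_log_page(8000));	/* prints 8192 */
	printf("%llu\n", (unsigned long long) next_log_page(8192));	/* prints 8192 */
	return 0;
}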

#13Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Andres Freund (#12)
Re: XLByte* usage

Andres Freund <andres@2ndquadrant.com> writes:

In 2) unfortunately one has to make a decision about which way to
simplify negated XLByte(LT|LE) expressions. I tried to make that choice
very carefully and went over every change several times after that, so I
hope there aren't any bad changes, but more eyeballs are needed.

+	if (lsn > PageGetLSN(page))
+	if (lsn >= PageGetLSN(page))
+	if (lsn <= PageGetLSN(page))
+			if (max_lsn < this_lsn)

My only comment here would be that I would like it much better if the
code always used the same operators, and, since we read from left to
right, that we pick < and <=.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#14Andres Freund
andres@2ndquadrant.com
In reply to: Dimitri Fontaine (#13)
Re: XLByte* usage

On 2012-12-18 13:14:10 +0100, Dimitri Fontaine wrote:

Andres Freund <andres@2ndquadrant.com> writes:

In 2) unfortunately one has to make a decision about which way to
simplify negated XLByte(LT|LE) expressions. I tried to make that choice
very carefully and went over every change several times after that, so I
hope there aren't any bad changes, but more eyeballs are needed.

+	if (lsn > PageGetLSN(page))
+	if (lsn >= PageGetLSN(page))
+	if (lsn <= PageGetLSN(page))
+			if (max_lsn < this_lsn)

My only comment here would be that I would like it much better if the
code always used the same operators, and, since we read from left to
right, that we pick < and <=.

I did that in most places (check the xlog.c changes), but in this case
it didn't seem appropriate, because that would have meant partially
reversing the order of the operands, which would have looked confusing
as well, given that this check is a common pattern across nearly every
xlog replay function.
IMO it's a good idea to try to choose < or <= as the operator, but it's
also important to keep the operand order consistent inside a single
function and across similar functions.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
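
(For reference: the two rewrite directions for a negated macro, as discussed in the last two messages. A minimal standalone sketch — plain C, not PostgreSQL source, with uint64_t standing in for XLogRecPtr and illustrative values only:)

#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Check both rewrite directions for one pair of pointers. */
static void
check(XLogRecPtr a, XLogRecPtr b)
{
	/* !XLByteLT(a, b): keep the operand order (>=) or normalize to <= */
	assert(!(a < b) == (a >= b));
	assert(!(a < b) == (b <= a));
	/* !XLByteLE(a, b): keep the operand order (>) or normalize to < */
	assert(!(a <= b) == (a > b));
	assert(!(a <= b) == (b < a));
}

int
main(void)
{
	check(1000, 2000);
	check(2000, 1000);
	check(1000, 1000);
	return 0;
}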

#15Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#12)
Re: XLByte* usage

Andres Freund wrote:

On 2012-12-17 13:16:47 -0500, Tom Lane wrote:

Andres Freund <andres@2ndquadrant.com> writes:

On 2012-12-17 12:47:41 -0500, Tom Lane wrote:

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLRecPtrs to "invalid".

I thought about that and had guessed you would be against it because it
would cause useless divergence of the branches? Otherwise I am all for
it.

That's the only argument I can see against doing it --- but Heikki's
patch was already pretty invasive in the same areas this would touch,
so I'm thinking this won't make back-patching much worse.

I thought about this for a while and decided it's worth trying to do
this before the next review round of xlogreader.

I have applied these three patches, after merging them with recent changes.
Thanks.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
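
(For reference: the memset-style initialization Tom mentions above reduces to a plain assignment now that XLogRecPtr is an integer. A minimal standalone sketch — not PostgreSQL source; InvalidXLogRecPtr is assumed to be 0, matching its definition in xlogdefs.h:)

#include <stdint.h>
#include <string.h>

typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr	((XLogRecPtr) 0)

int
main(void)
{
	XLogRecPtr	recptr;

	/* old style, left over from when XLogRecPtr was a struct: */
	memset(&recptr, 0, sizeof(recptr));

	/* equivalent, and clearer, with a plain integer type: */
	recptr = InvalidXLogRecPtr;

	return (int) recptr;		/* 0 */
}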

#16Andres Freund
andres@2ndquadrant.com
In reply to: Alvaro Herrera (#15)
Re: XLByte* usage

On 2012-12-28 14:59:50 -0300, Alvaro Herrera wrote:

Andres Freund wrote:

On 2012-12-17 13:16:47 -0500, Tom Lane wrote:

Andres Freund <andres@2ndquadrant.com> writes:

On 2012-12-17 12:47:41 -0500, Tom Lane wrote:

But, if the day ever comes when 64 bits doesn't seem like enough, I bet
we'd move to 128-bit integers, which will surely be available on all
platforms by then. So +1 for using plain comparisons --- in fact, I'd
vote for running around and ripping out the macros altogether. I had
already been thinking of fixing the places that are still using memset
to initialize XLRecPtrs to "invalid".

I thought about that and had guessed you would be against it because it
would cause useless divergence of the branches? Otherwise I am all for
it.

That's the only argument I can see against doing it --- but Heikki's
patch was already pretty invasive in the same areas this would touch,
so I'm thinking this won't make back-patching much worse.

I thought about this for a while and decided it's worth trying to do
this before the next review round of xlogreader.

I have applied these three patches, after merging them with recent changes.
Thanks.

Thanks!

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers