Shared buffer access rule violations?

Started by Asim R P · over 7 years ago · 10 messages
#1Asim R P
apraveen@pivotal.io

Hi,

In order to make changes to a shared buffer, one must hold a pin on it
and the content lock in exclusive mode. This rule seems to be
followed in most of the places but there are a few exceptions.

One can find several PageInit() calls with no content lock held. See,
for example:

fill_seq_with_data()
vm_readbuf()
fsm_readbuf()

Moreover, fsm_vacuum_page() performs
"PageGetContents(page))->fp_next_slot = 0;" without content lock.

There may be more but I want to know if these can be treated as
violations before moving ahead.

Asim

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Asim R P (#1)
Re: Shared buffer access rule violations?

Asim R P <apraveen@pivotal.io> writes:

In order to make changes to a shared buffer, one must hold a pin on it
and the content lock in exclusive mode. This rule seems to be
followed in most of the places but there are a few exceptions.

One can find several PageInit() calls with no content lock held. See,
for example:

fill_seq_with_data()

That would be for a relation that no one else can even see yet, no?

vm_readbuf()
fsm_readbuf()

In these cases I'd imagine that the I/O completion interlock is what
is preventing other backends from accessing the buffer.

Moreover, fsm_vacuum_page() performs
"PageGetContents(page))->fp_next_slot = 0;" without content lock.

That field is just a hint, IIRC, and the possibility of a torn read
is explicitly not worried about.

There may be more but I want to know if these can be treated as
violations before moving ahead.

These specific things don't sound like bugs, though possibly I'm
missing something. Perhaps more comments would be in order.

regards, tom lane

#3Asim R P
apraveen@pivotal.io
In reply to: Tom Lane (#2)
Re: Shared buffer access rule violations?

On Tue, Jul 10, 2018 at 8:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Asim R P <apraveen@pivotal.io> writes:

One can find several PageInit() calls with no content lock held. See,
for example:

fill_seq_with_data()

That would be for a relation that no one else can even see yet, no?

Yes, when the sequence is being created. No, when the sequence is
being reset, in ResetSequence().

vm_readbuf()
fsm_readbuf()

In these cases I'd imagine that the I/O completion interlock is what
is preventing other backends from accessing the buffer.

What is the I/O completion interlock? I see no difference between
initializing a visimap/fsm page and initializing a standard heap page.
For standard heap pages, the code currently acquires the buffer pin as
well as the content lock for initialization.

Moreover, fsm_vacuum_page() performs
"PageGetContents(page))->fp_next_slot = 0;" without content lock.

That field is just a hint, IIRC, and the possibility of a torn read
is explicitly not worried about.

Yes, that's a hint. And ignoring the torn-page possibility doesn't result
in checksum failures because fsm_readbuf() passes RBM_ZERO_ON_ERROR to the
buffer manager. The page will be zeroed out in the event of a checksum
failure.

Asim

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Asim R P (#3)
Re: Shared buffer access rule violations?

Asim R P <apraveen@pivotal.io> writes:

On Tue, Jul 10, 2018 at 8:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Asim R P <apraveen@pivotal.io> writes:

One can find several PageInit() calls with no content lock held. See,
for example:
fill_seq_with_data()

That would be for a relation that no one else can even see yet, no?

Yes, when the sequence is being created. No, when the sequence is
being reset, in ResetSequence().

ResetSequence creates a new relfilenode, which no one else will be able
to see until it commits, so the case is effectively the same as for
creation.

vm_readbuf()
fsm_readbuf()

In these cases I'd imagine that the I/O completion interlock is what
is preventing other backends from accessing the buffer.

What is I/O completion interlock?

Oh ... the RBM_ZERO_ON_ERROR action should be done under the I/O lock,
but the ReadBuffer caller isn't holding that lock anymore, so I see your
point here. Probably, nobody's noticed because it's a corner case that
shouldn't happen under normal use, but it's not safe. I think what we
want is more like

if (PageIsNew(BufferGetPage(buf)))
{
	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
	if (PageIsNew(BufferGetPage(buf)))
		PageInit(BufferGetPage(buf), BLCKSZ, 0);
	UnlockReleaseBuffer(buf);
}

to ensure that the page is initialized once and only once, even if
several backends do this concurrently.

regards, tom lane

#5Asim R P
apraveen@pivotal.io
In reply to: Tom Lane (#4)
2 attachment(s)
Re: Shared buffer access rule violations?

Please find attached a patch to mark a shared buffer as read-write or
read-only using mprotect(). The idea is to catch violations of shared
buffer access rules. This patch was useful to detect the access
violations reported in this thread. The mprotect() calls are enabled
by the -DMPROTECT_BUFFERS compile-time flag.

Asim

Attachments:

0001-Facility-to-detect-shared-buffer-access-violations.patch (text/x-patch)
From ed7dcf633600b3a527ed52ffacd1b779da8b0235 Mon Sep 17 00:00:00 2001
From: Asim R P <apraveen@pivotal.io>
Date: Wed, 18 Jul 2018 18:32:40 -0700
Subject: [PATCH 1/2] Facility to detect shared buffer access violations

Using mprotect() to allow/disallow read/write access to one or more
shared buffers, this patch enables detection of shared buffer access
violations.  A new compile time flag "MPROTECT_BUFFERS"
(CFLAGS=-DMPROTECT_BUFFERS) is introduced to enable the detection.

A couple of violations have already been caught and fixed using this
facility: 130beba36d6dd46b8c527646f9f2433347cbfb11
---
 src/backend/storage/buffer/buf_init.c |  31 +++++++
 src/backend/storage/buffer/bufmgr.c   | 122 ++++++++++++++++++++++----
 src/backend/storage/ipc/shmem.c       |  18 ++++
 3 files changed, 153 insertions(+), 18 deletions(-)

diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c
index 144a2cee6f..22e3d2821c 100644
--- a/src/backend/storage/buffer/buf_init.c
+++ b/src/backend/storage/buffer/buf_init.c
@@ -14,6 +14,11 @@
  */
 #include "postgres.h"
 
+#ifdef MPROTECT_BUFFERS
+#include <sys/mman.h>
+#include "miscadmin.h"
+#endif
+
 #include "storage/bufmgr.h"
 #include "storage/buf_internals.h"
 
@@ -24,6 +29,28 @@ LWLockMinimallyPadded *BufferIOLWLockArray = NULL;
 WritebackContext BackendWritebackContext;
 CkptSortItem *CkptBufferIds;
 
+#ifdef MPROTECT_BUFFERS
+/*
+ * Protect the entire shared buffers region such that neither read nor write
+ * is allowed.  Protection will change for specific buffers when accessed
+ * through buffer manager's interface.  The intent is to catch violation of
+ * buffer access rules.
+ */
+static void
+ProtectMemoryPoolBuffers()
+{
+	Size bufferBlocksTotalSize = mul_size((Size)NBuffers, (Size) BLCKSZ);
+	if (IsUnderPostmaster && IsNormalProcessingMode() &&
+		mprotect(BufferBlocks, bufferBlocksTotalSize, PROT_NONE))
+	{
+		ereport(ERROR,
+				(errmsg("unable to set memory level to %d, error %d, "
+						"allocation size %ud, ptr %ld", PROT_NONE,
+						errno, (unsigned int) bufferBlocksTotalSize,
+						(long int) BufferBlocks)));
+	}
+}
+#endif
 
 /*
  * Data Structures:
@@ -149,6 +176,10 @@ InitBufferPool(void)
 	/* Initialize per-backend file flush context */
 	WritebackContextInit(&BackendWritebackContext,
 						 &backend_flush_after);
+
+#ifdef MPROTECT_BUFFERS
+	ProtectMemoryPoolBuffers();
+#endif
 }
 
 /*
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 01eabe5706..2ef3c75b6a 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -31,6 +31,9 @@
 #include "postgres.h"
 
 #include <sys/file.h>
+#ifdef MPROTECT_BUFFERS
+#include <sys/mman.h>
+#endif
 #include <unistd.h>
 
 #include "access/xlog.h"
@@ -177,6 +180,75 @@ static PrivateRefCountEntry *GetPrivateRefCountEntry(Buffer buffer, bool do_move
 static inline int32 GetPrivateRefCount(Buffer buffer);
 static void ForgetPrivateRefCountEntry(PrivateRefCountEntry *ref);
 
+#ifdef MPROTECT_BUFFERS
+#define ShouldMemoryProtect(buf) (IsUnderPostmaster &&		  \
+								  IsNormalProcessingMode() && \
+								  !BufferIsLocal(buf->buf_id+1) && \
+								  !BufferIsInvalid(buf->buf_id+1))
+
+static inline void BufferMProtect(volatile BufferDesc *bufHdr, int protectionLevel)
+{
+	if (ShouldMemoryProtect(bufHdr))
+	{
+		if (mprotect(BufHdrGetBlock(bufHdr), BLCKSZ, protectionLevel))
+		{
+			ereport(ERROR,
+					(errmsg("unable to set memory level to %d, error %d, "
+							"block size %d, ptr %ld", protectionLevel,
+							errno, BLCKSZ, (long int) BufHdrGetBlock(bufHdr))));
+		}
+	}
+}
+#endif
+
+static inline void ReleaseContentLock(volatile BufferDesc *buf)
+{
+	LWLockRelease(BufferDescriptorGetContentLock(buf));
+
+#ifdef MPROTECT_BUFFERS
+	/* make the buffer read-only after releasing content lock */
+	if (!LWLockHeldByMe(BufferDescriptorGetContentLock(buf)))
+		BufferMProtect(buf, PROT_READ);
+#endif
+}
+
+
+static inline void AcquireContentLock(volatile BufferDesc *buf, LWLockMode mode)
+{
+#ifdef MPROTECT_BUFFERS
+	const bool newAcquisition =
+		!LWLockHeldByMe(BufferDescriptorGetContentLock(buf));
+
+	LWLockAcquire(BufferDescriptorGetContentLock(buf), mode);
+
+	/* new acquisition of content lock, allow read/write memory access */
+	if (newAcquisition)
+		BufferMProtect(buf, PROT_READ|PROT_WRITE);
+#else
+	LWLockAcquire(BufferDescriptorGetContentLock(buf), mode);
+#endif
+}
+
+static inline bool ConditionalAcquireContentLock(volatile BufferDesc *buf,
+												 LWLockMode mode)
+{
+#ifdef MPROTECT_BUFFERS
+	const bool newAcquisition =
+		!LWLockHeldByMe(BufferDescriptorGetContentLock(buf));
+
+	if (LWLockConditionalAcquire(BufferDescriptorGetContentLock(buf), mode))
+	{
+		/* new acquisition of lock, allow read/write memory access */
+		if (newAcquisition)
+			BufferMProtect( buf, PROT_READ | PROT_WRITE );
+		return true;
+	}
+	return false;
+#else
+	return LWLockConditionalAcquire(BufferDescriptorGetContentLock(buf), mode);
+#endif
+}
+
 /*
  * Ensure that the PrivateRefCountArray has sufficient space to store one more
  * entry. This has to be called before using NewPrivateRefCountEntry() to fill
@@ -779,8 +851,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 			if (!isLocalBuf)
 			{
 				if (mode == RBM_ZERO_AND_LOCK)
-					LWLockAcquire(BufferDescriptorGetContentLock(bufHdr),
-								  LW_EXCLUSIVE);
+					AcquireContentLock(bufHdr, LW_EXCLUSIVE);
 				else if (mode == RBM_ZERO_AND_CLEANUP_LOCK)
 					LockBufferForCleanup(BufferDescriptorGetBuffer(bufHdr));
 			}
@@ -932,7 +1003,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	if ((mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK) &&
 		!isLocalBuf)
 	{
-		LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_EXCLUSIVE);
+		AcquireContentLock((bufHdr), LW_EXCLUSIVE);
 	}
 
 	if (isLocalBuf)
@@ -1102,8 +1173,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 			 * happens to be trying to split the page the first one got from
 			 * StrategyGetBuffer.)
 			 */
-			if (LWLockConditionalAcquire(BufferDescriptorGetContentLock(buf),
-										 LW_SHARED))
+			if (ConditionalAcquireContentLock(buf, LW_SHARED))
 			{
 				/*
 				 * If using a nondefault strategy, and writing the buffer
@@ -1125,7 +1195,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 						StrategyRejectBuffer(strategy, buf))
 					{
 						/* Drop lock/pin and loop around for another buffer */
-						LWLockRelease(BufferDescriptorGetContentLock(buf));
+						ReleaseContentLock(buf);
 						UnpinBuffer(buf, true);
 						continue;
 					}
@@ -1138,7 +1208,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 														  smgr->smgr_rnode.node.relNode);
 
 				FlushBuffer(buf, NULL);
-				LWLockRelease(BufferDescriptorGetContentLock(buf));
+				ReleaseContentLock(buf);
 
 				ScheduleBufferTagForWriteback(&BackendWritebackContext,
 											  &buf->tag);
@@ -1618,6 +1688,9 @@ PinBuffer(BufferDesc *buf, BufferAccessStrategy strategy)
 				break;
 			}
 		}
+#ifdef MPROTECT_BUFFERS
+		BufferMProtect(buf, PROT_READ);
+#endif
 	}
 	else
 	{
@@ -1672,6 +1745,9 @@ PinBuffer_Locked(BufferDesc *buf)
 	buf_state = pg_atomic_read_u32(&buf->state);
 	Assert(buf_state & BM_LOCKED);
 	buf_state += BUF_REFCOUNT_ONE;
+#ifdef MPROTECT_BUFFERS
+	BufferMProtect(buf, PROT_READ);
+#endif
 	UnlockBufHdr(buf, buf_state);
 
 	b = BufferDescriptorGetBuffer(buf);
@@ -1747,6 +1823,9 @@ UnpinBuffer(BufferDesc *buf, bool fixOwner)
 			 */
 			buf_state = LockBufHdr(buf);
 
+#ifdef MPROTECT_BUFFERS
+			BufferMProtect(buf, PROT_NONE);
+#endif
 			if ((buf_state & BM_PIN_COUNT_WAITER) &&
 				BUF_STATE_GET_REFCOUNT(buf_state) == 1)
 			{
@@ -2389,11 +2468,11 @@ SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)
 	 * buffer is clean by the time we've locked it.)
 	 */
 	PinBuffer_Locked(bufHdr);
-	LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);
+	AcquireContentLock(bufHdr, LW_SHARED);
 
 	FlushBuffer(bufHdr, NULL);
 
-	LWLockRelease(BufferDescriptorGetContentLock(bufHdr));
+	ReleaseContentLock(bufHdr);
 
 	tag = bufHdr->tag;
 
@@ -3217,9 +3296,9 @@ FlushRelationBuffers(Relation rel)
 			(buf_state & (BM_VALID | BM_DIRTY)) == (BM_VALID | BM_DIRTY))
 		{
 			PinBuffer_Locked(bufHdr);
-			LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);
+			AcquireContentLock(bufHdr, LW_SHARED);
 			FlushBuffer(bufHdr, rel->rd_smgr);
-			LWLockRelease(BufferDescriptorGetContentLock(bufHdr));
+			ReleaseContentLock(bufHdr);
 			UnpinBuffer(bufHdr, true);
 		}
 		else
@@ -3271,9 +3350,9 @@ FlushDatabaseBuffers(Oid dbid)
 			(buf_state & (BM_VALID | BM_DIRTY)) == (BM_VALID | BM_DIRTY))
 		{
 			PinBuffer_Locked(bufHdr);
-			LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);
+			AcquireContentLock(bufHdr, LW_SHARED);
 			FlushBuffer(bufHdr, NULL);
-			LWLockRelease(BufferDescriptorGetContentLock(bufHdr));
+			ReleaseContentLock(bufHdr);
 			UnpinBuffer(bufHdr, true);
 		}
 		else
@@ -3554,11 +3633,11 @@ LockBuffer(Buffer buffer, int mode)
 	buf = GetBufferDescriptor(buffer - 1);
 
 	if (mode == BUFFER_LOCK_UNLOCK)
-		LWLockRelease(BufferDescriptorGetContentLock(buf));
+		ReleaseContentLock(buf);
 	else if (mode == BUFFER_LOCK_SHARE)
-		LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_SHARED);
+		AcquireContentLock(buf, LW_SHARED);
 	else if (mode == BUFFER_LOCK_EXCLUSIVE)
-		LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);
+		AcquireContentLock(buf, LW_EXCLUSIVE);
 	else
 		elog(ERROR, "unrecognized buffer lock mode: %d", mode);
 }
@@ -3579,8 +3658,7 @@ ConditionalLockBuffer(Buffer buffer)
 
 	buf = GetBufferDescriptor(buffer - 1);
 
-	return LWLockConditionalAcquire(BufferDescriptorGetContentLock(buf),
-									LW_EXCLUSIVE);
+	return ConditionalAcquireContentLock(buf, LW_EXCLUSIVE);
 }
 
 /*
@@ -3913,6 +3991,9 @@ StartBufferIO(BufferDesc *buf, bool forInput)
 	}
 
 	buf_state |= BM_IO_IN_PROGRESS;
+#ifdef MPROTECT_BUFFERS
+    BufferMProtect(buf, forInput ? PROT_WRITE|PROT_READ : PROT_READ);
+#endif
 	UnlockBufHdr(buf, buf_state);
 
 	InProgressBuf = buf;
@@ -3958,6 +4039,11 @@ TerminateBufferIO(BufferDesc *buf, bool clear_dirty, uint32 set_flag_bits)
 
 	InProgressBuf = NULL;
 
+#ifdef MPROTECT_BUFFERS
+	/* XXX: should this be PROT_NONE if called from AbortBufferIO? */
+	if (!LWLockHeldByMe(BufferDescriptorGetContentLock(buf)))
+		BufferMProtect(buf, PROT_READ);
+#endif
 	LWLockRelease(BufferDescriptorGetIOLock(buf));
 }
 
diff --git a/src/backend/storage/ipc/shmem.c b/src/backend/storage/ipc/shmem.c
index 7893c01983..ce429626bc 100644
--- a/src/backend/storage/ipc/shmem.c
+++ b/src/backend/storage/ipc/shmem.c
@@ -65,6 +65,10 @@
 
 #include "postgres.h"
 
+#ifdef MPROTECT_BUFFERS
+#include <unistd.h>
+#endif
+
 #include "access/transam.h"
 #include "miscadmin.h"
 #include "storage/lwlock.h"
@@ -198,6 +202,20 @@ ShmemAllocNoError(Size size)
 
 	newStart = ShmemSegHdr->freeoffset;
 
+#ifdef MPROTECT_BUFFERS
+	/*
+	 * Align shared buffers start address to system page size because mprotect
+	 * can only work with system page size aligned addresses.
+	 */
+	if (size >= BLCKSZ)
+	{
+		long systemPageSize = sysconf(_SC_PAGESIZE);
+		if (systemPageSize <=1 || (systemPageSize & (systemPageSize-1)))
+			elog(ERROR, "invalid page size %ld", systemPageSize);
+		newStart =  TYPEALIGN(systemPageSize, newStart);
+	}
+#endif
+
 	newFree = newStart + size;
 	if (newFree <= ShmemSegHdr->totalsize)
 	{
-- 
2.17.1

0002-Fix-known-but-not-problematic-buffer-access-violatio.patch (text/x-patch)
From 9d5e5c61d4c954a71acca07ec37066b8ca5b4822 Mon Sep 17 00:00:00 2001
From: Asim R P <apraveen@pivotal.io>
Date: Wed, 18 Jul 2018 18:30:41 -0700
Subject: [PATCH 2/2] Fix known but not problematic buffer access violations

This is needed to make regression tests pass with
CFLAGS=-DMPROTECT_BUFFERS.
---
 src/backend/commands/sequence.c           |  3 ++-
 src/backend/storage/freespace/freespace.c | 11 ++++++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 89122d4ad7..f44fb8c563 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -348,13 +348,14 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
 
 	page = BufferGetPage(buf);
 
+	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
+
 	PageInit(page, BufferGetPageSize(buf), sizeof(sequence_magic));
 	sm = (sequence_magic *) PageGetSpecialPointer(page);
 	sm->magic = SEQ_MAGIC;
 
 	/* Now insert sequence tuple */
 
-	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
 
 	/*
 	 * Since VACUUM does not process sequences, we have to force the tuple to
diff --git a/src/backend/storage/freespace/freespace.c b/src/backend/storage/freespace/freespace.c
index 8d0ee7fc93..df813d7416 100644
--- a/src/backend/storage/freespace/freespace.c
+++ b/src/backend/storage/freespace/freespace.c
@@ -901,8 +901,17 @@ fsm_vacuum_page(Relation rel, FSMAddress addr,
 	 * relation.  We don't bother with a lock here, nor with marking the page
 	 * dirty if it wasn't already, since this is just a hint.
 	 */
+#ifdef MPROTECT_BUFFERS
+	/*
+	 * When mprotect() is used to detect shared buffer access violations, lock
+	 * must be acquired so that write access is allowed on this buffer.
+	 */
+	LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
+#endif
 	((FSMPage) PageGetContents(page))->fp_next_slot = 0;
-
+#ifdef MPROTECT_BUFFERS
+	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+#endif
 	ReleaseBuffer(buf);
 
 	return max_avail;
-- 
2.17.1

#6Peter Geoghegan
pg@bowt.ie
In reply to: Asim R P (#5)
Re: Shared buffer access rule violations?

On Tue, Aug 7, 2018 at 6:43 PM, Asim R P <apraveen@pivotal.io> wrote:

Please find attached a patch to mark a shared buffer as read-write or
read-only using mprotect(). The idea is to catch violations of shared
buffer access rules. This patch was useful to detect the access
violations reported in this thread. The mprotect() calls are enabled
by the -DMPROTECT_BUFFERS compile-time flag.

I wonder if it would be a better idea to enable Valgrind's memcheck to
mark buffers as read-only or read-write. We've considered doing
something like that for years, but for whatever reason nobody followed
through.

Using Valgrind would have the advantage of making it possible to mark
memory as undefined or as noaccess. I can imagine doing even fancier
things by making that distinction within buffers. For example, marking
the hole in the middle of a page/buffer as undefined, while still
marking the entire buffer noaccess when there is no pin held.

--
Peter Geoghegan

#7Asim R P
apraveen@pivotal.io
In reply to: Peter Geoghegan (#6)
Re: Shared buffer access rule violations?

On Tue, Aug 7, 2018 at 7:00 PM, Peter Geoghegan <pg@bowt.ie> wrote:

I wonder if it would be a better idea to enable Valgrind's memcheck to
mark buffers as read-only or read-write. We've considered doing
something like that for years, but for whatever reason nobody followed
through.

Basic question: how do you mark buffers as read-only using memcheck
tool? Running installcheck with valgrind didn't uncover any errors:

valgrind --trace-children=yes pg_ctl -D datadir start
make installcheck-parallel

Asim & David

#8Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Asim R P (#7)
Re: Shared buffer access rule violations?

On Thu, Aug 9, 2018 at 12:59 PM Asim R P <apraveen@pivotal.io> wrote:

On Tue, Aug 7, 2018 at 7:00 PM, Peter Geoghegan <pg@bowt.ie> wrote:

I wonder if it would be a better idea to enable Valgrind's memcheck to
mark buffers as read-only or read-write. We've considered doing
something like that for years, but for whatever reason nobody followed
through.

Basic question: how do you mark buffers as read-only using memcheck
tool? Running installcheck with valgrind didn't uncover any errors:

valgrind --trace-children=yes pg_ctl -D datadir start
make installcheck-parallel

Presumably with VALGRIND_xxx macros, but is there a way to make that
work for shared memory?

I like the mprotect() patch. This could be enabled on a build farm
animal. I guess it would either fail explicitly or behave incorrectly
for VM page size > BLCKSZ depending on OS, but at least on Linux/amd64
you have to go out of your way to get pages > 4KB so that seems OK for
a debugging feature.

I would like to do something similar with DSA, to electrify
superblocks and whole segments that are freed. That would have caught
a recent bug in DSA itself and in client code.

--
Thomas Munro
http://www.enterprisedb.com

#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#8)
Re: Shared buffer access rule violations?

Thomas Munro <thomas.munro@enterprisedb.com> writes:

On Thu, Aug 9, 2018 at 12:59 PM Asim R P <apraveen@pivotal.io> wrote:

On Tue, Aug 7, 2018 at 7:00 PM, Peter Geoghegan <pg@bowt.ie> wrote:

I wonder if it would be a better idea to enable Valgrind's memcheck to
mark buffers as read-only or read-write.

Basic question: how do you mark buffers as read-only using memcheck
tool? Running installcheck with valgrind didn't uncover any errors:

Presumably with VALGRIND_xxx macros, but is there a way to make that
work for shared memory?

I like the mprotect() patch. This could be enabled on a build farm
animal.

I think this is a cute idea and potentially useful as an alternative
to valgrind, but I don't like the patch much. It'd be better to
set things up so that the patch adds support for catching bad accesses
with either valgrind or mprotect, according to compile options. Also,
this sort of thing is gratuitously ugly:

+#ifdef MPROTECT_BUFFERS
+			BufferMProtect(buf, PROT_NONE);
+#endif

The right way IMO is to just have macro calls like

ProtectBuffer(buf, NO_ACCESS);

which expand to nothing at all if the feature isn't enabled by #ifdefs,
and otherwise to whatever we need it to do. (The access-type symbols
then need to be something that can be defined correctly for either
implementation.)

I guess it would either fail explicitly or behave incorrectly
for VM page size > BLCKSZ depending on OS, but at least on Linux/amd64
you have to go out of your way to get pages > 4KB so that seems OK for
a debugging feature.

What worries me more is that I don't think we try hard to ensure that
buffers are aligned on system page boundaries.

regards, tom lane

#10Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#9)
Re: Shared buffer access rule violations?

Hi,

On 2019-01-13 19:52:32 -0500, Tom Lane wrote:

Thomas Munro <thomas.munro@enterprisedb.com> writes:

On Thu, Aug 9, 2018 at 12:59 PM Asim R P <apraveen@pivotal.io> wrote:

On Tue, Aug 7, 2018 at 7:00 PM, Peter Geoghegan <pg@bowt.ie> wrote:

I wonder if it would be a better idea to enable Valgrind's memcheck to
mark buffers as read-only or read-write.

Basic question: how do you mark buffers as read-only using memcheck
tool? Running installcheck with valgrind didn't uncover any errors:

Presumably with VALGRIND_xxx macros, but is there a way to make that
work for shared memory?

I like the mprotect() patch. This could be enabled on a build farm
animal.

I think this is a cute idea and potentially useful as an alternative
to valgrind, but I don't like the patch much. It'd be better to
set things up so that the patch adds support for catching bad accesses
with either valgrind or mprotect, according to compile options. Also,
this sort of thing is gratuitously ugly:

+#ifdef MPROTECT_BUFFERS
+			BufferMProtect(buf, PROT_NONE);
+#endif

The right way IMO is to just have macro calls like

ProtectBuffer(buf, NO_ACCESS);

which expand to nothing at all if the feature isn't enabled by #ifdefs,
and otherwise to whatever we need it to do. (The access-type symbols
then need to be something that can be defined correctly for either
implementation.)

As this has not been addressed since, and the CF has ended, I'm marking
this patch as returned with feedback. Please resubmit once that's
addressed.

I guess it would either fail explicitly or behave incorrectly
for VM page size > BLCKSZ depending on OS, but at least on Linux/amd64
you have to go out of your way to get pages > 4KB so that seems OK for
a debugging feature.

What worries me more is that I don't think we try hard to ensure that
buffers are aligned on system page boundaries.

It's probably worthwhile to just always force that, to reduce
unnecessary page misses.

Greetings,

Andres Freund