Re: New shared memory hooks proposal (was Re: pre_load_libraries)
The attached patch provides add-ins with the means to register for
shared memory and LWLocks. This greatly improves the ease with which
shared memory may be used from add-ins, while keeping the accounting and
management for that shared memory separate.
Specifically it adds named add-in shared memory contexts. From these,
memory can be allocated without affecting the memory available in other
contexts.
Usage is as follows:
from add-in functions called from preload_libraries, you may call
RegisterAddinContext(const char *name, size_t size)
to register a new (logical) shared memory segment.
and
RegisterAddinLWLock(LWLockId *lock_ptr);
to request that a LWLock be allocated and placed into *lock_ptr.
The actual creation of the shared memory segment and lwlocks is
performed later as part of shared memory initialisation.
To allocate shared memory from a named context you would use
ShmemAllocFromContext(size_t size, const char *name);
To reset a shared memory context back to its original unused state (from
which new allocations may be performed), you may use
ShmemResetContext(const char *name);
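For illustration, here is a minimal sketch of how an add-in might use these
calls (the "my_addin" context name, MyAddinState struct and sizes below are
made up for the example, not part of the patch):

/* sketch only: assumes the library is listed in preload_libraries */
#include "postgres.h"
#include "storage/lwlock.h"
#include "storage/pg_shmem.h"

typedef struct MyAddinState
{
    int         counter;
} MyAddinState;

static LWLockId my_addin_lock;

void
_PG_init(void)
{
    /* register a 1MB logical segment and request one LWLock */
    RegisterAddinContext("my_addin", 1024 * 1024);
    RegisterAddinLWLock(&my_addin_lock);
}

/* later, from backend code, allocate from the named context ... */
static MyAddinState *
my_addin_alloc(void)
{
    return (MyAddinState *)
        ShmemAllocFromContext(sizeof(MyAddinState), "my_addin");
}

/* ... and, when required, throw away all allocations in the context */
static void
my_addin_reset(void)
{
    ShmemResetContext("my_addin");
}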
This works for me (for Veil) and make check runs fine.
I have not included any documentation updates in the patch as I'm not
sure where such API changes should be documented.
All comments, questions and suggestions are welcomed.
__
Marc
Attachments:
patches (text/x-patch; charset=ANSI_X3.4-1968)
*** ./src/backend/storage/ipc/ipci.c Sat Jul 15 08:47:17 2006
--- ./new_src/backend/storage/ipc/ipci.c Tue Jul 18 15:26:22 2006
***************
*** 57,62 ****
--- 57,63 ----
{
PGShmemHeader *seghdr;
Size size;
+ Size size_b4addins;
int numSemas;
/*
***************
*** 93,98 ****
--- 94,107 ----
/* might as well round it off to a multiple of a typical page size */
size = add_size(size, 8192 - (size % 8192));
+ /* The shared memory for add-ins is treated as a separate
+ * segment but in reality is not
+ */
+ size_b4addins = size;
+ size = add_size(size, AddinShmemSize());
+ /* round it off again */
+ size = add_size(size, 8192 - (size % 8192));
+
elog(DEBUG3, "invoking IpcMemoryCreate(size=%lu)",
(unsigned long) size);
***************
*** 101,106 ****
--- 110,125 ----
*/
seghdr = PGSharedMemoryCreate(size, makePrivate, port);
+ /*
+ * Modify hdr to show segment size before add-ins
+ */
+ seghdr->totalsize = size_b4addins;
+
+ /*
+ * Set up segment header sections in each Addin context
+ */
+ InitAddinContexts((void *) ((char *) seghdr + size_b4addins));
+
InitShmemAccess(seghdr);
/*
*** ./src/backend/storage/ipc/shmem.c Fri Jul 14 07:52:22 2006
--- ./new_src/backend/storage/ipc/shmem.c Tue Jul 18 15:26:22 2006
***************
*** 61,66 ****
--- 61,75 ----
* cannot be redistributed to other tables. We could build a simple
* hash bucket garbage collector if need be. Right now, it seems
* unnecessary.
+ *
+ * (e) Add-ins can request their own logical shared memory segments
+ * by calling RegisterAddinContext() from the preload-libraries hook.
+ * Each call establishes a uniquely named add-in shared memory
+ * context which will be set up as part of postgres initialisation.
+ * Memory can be allocated from these contexts using
+ * ShmemAllocFromContext(), and can be reset to its initial condition
+ * using ShmemResetContext().
+ *
*/
#include "postgres.h"
***************
*** 86,91 ****
--- 95,112 ----
static HTAB *ShmemIndex = NULL; /* primary index hashtable for shmem */
+ /* Structures and globals for managing add-in shared memory contexts */
+ typedef struct context {
+ char *name;
+ Size size;
+ PGShmemHeader *seg_hdr;
+ struct context *next;
+ } ContextNode;
+
+ static ContextNode *addin_contexts = NULL;
+ static Size addin_contexts_size = 0;
+
+
/*
* InitShmemAccess() --- set up basic pointers to shared memory.
***************
*** 140,146 ****
}
/*
! * ShmemAlloc -- allocate max-aligned chunk from shared memory
*
* Assumes ShmemLock and ShmemSegHdr are initialized.
*
--- 161,255 ----
}
/*
! * RegisterAddinContext -- Register the requirement for a named shared
! * memory context.
! */
! void
! RegisterAddinContext(const char *name, Size size)
! {
! char *newstr = malloc(strlen(name) + 1);
! ContextNode *node = malloc(sizeof(ContextNode));
! strcpy(newstr, name);
! node->name = newstr;
! // Round up to typical page size
! node->size = add_size(size, 8192 - (size % 8192));
! node->next = addin_contexts;
! addin_contexts = node;
!
! addin_contexts_size = add_size(addin_contexts_size, node->size);
! }
!
!
! /*
! * ContextFromName -- Return the ContextNode for the given named
! * context, or NULL if not found.
! */
! static ContextNode *
! ContextFromName(const char *name)
! {
! ContextNode *context = addin_contexts;
!
! while (context) {
! if (strcmp(name, context->name) == 0) {
! return context;
! }
! context = context->next;
! }
! return NULL;
! }
!
! /*
! * InitAddinContexts -- Initialise the registered addin shared memory
! * contexts.
! */
! void
! InitAddinContexts(void *start)
! {
! PGShmemHeader *next_segment = (PGShmemHeader *) start;
! ContextNode *context = addin_contexts;
!
! while (context) {
! context->seg_hdr = next_segment;
!
! next_segment->totalsize = context->size;
! next_segment->freeoffset = MAXALIGN(sizeof(PGShmemHeader));
!
! next_segment = (PGShmemHeader *)
! ((char *) next_segment + context->size);
! context = context->next;
! }
! }
!
! /*
! * ShmemResetContext -- Re-initialise the named addin shared memory context.
! */
! void
! ShmemResetContext(const char *name)
! {
! PGShmemHeader *segment;
! ContextNode *context = ContextFromName(name);
! if (!context) {
! ereport(ERROR,
! (errcode(ERRCODE_INTERNAL_ERROR),
! errmsg("cannot reset unknown shared memory context %s",
! name)));
! }
! segment = context->seg_hdr;
! segment->freeoffset = MAXALIGN(sizeof(PGShmemHeader));
! }
!
! /*
! * AddinShmemSize -- Report how much shared memory has been registered
! * for add-ins.
! */
! Size
! AddinShmemSize(void)
! {
! return addin_contexts_size;
! }
!
! /*
! * ShmemAllocFromContext -- allocate max-aligned chunk from shared memory
*
* Assumes ShmemLock and ShmemSegHdr are initialized.
*
***************
*** 149,163 ****
* to be compatible with malloc().
*/
void *
! ShmemAlloc(Size size)
{
! Size newStart;
! Size newFree;
! void *newSpace;
/* use volatile pointer to prevent code rearrangement */
volatile PGShmemHeader *shmemseghdr = ShmemSegHdr;
/*
* ensure all space is adequately aligned.
*/
--- 258,287 ----
* to be compatible with malloc().
*/
void *
! ShmemAllocFromContext(Size size, const char *context_name)
{
! Size newStart;
! Size newFree;
! void *newSpace;
! ContextNode *context;
/* use volatile pointer to prevent code rearrangement */
volatile PGShmemHeader *shmemseghdr = ShmemSegHdr;
+ /*
+ * if context_name is provided, allocate from the named context
+ */
+ if (context_name) {
+ context = ContextFromName(context_name);
+ if (!context) {
+ ereport(ERROR,
+ (errcode(ERRCODE_INTERNAL_ERROR),
+ errmsg("cannot reset unknown shared memory context %s",
+ context_name)));
+ }
+ shmemseghdr = context->seg_hdr;
+ }
+
/*
* ensure all space is adequately aligned.
*/
***************
*** 176,182 ****
newFree = newStart + size;
if (newFree <= shmemseghdr->totalsize)
{
! newSpace = (void *) MAKE_PTR(newStart);
shmemseghdr->freeoffset = newFree;
}
else
--- 300,306 ----
newFree = newStart + size;
if (newFree <= shmemseghdr->totalsize)
{
! newSpace = (void *) MAKE_PTRFROM((SHMEM_OFFSET) shmemseghdr, newStart);
shmemseghdr->freeoffset = newFree;
}
else
***************
*** 193,198 ****
--- 317,338 ----
}
/*
+ * ShmemAlloc -- allocate max-aligned chunk from shared memory
+ *
+ * Assumes ShmemLock and ShmemSegHdr are initialized.
+ *
+ * Returns: real pointer to memory or NULL if we are out
+ * of space. Has to return a real pointer in order
+ * to be compatible with malloc().
+ */
+
+ void *
+ ShmemAlloc(Size size)
+ {
+ return ShmemAllocFromContext(size, NULL);
+ }
+
+ /*
* ShmemIsValid -- test if an offset refers to valid shared memory
*
* Returns TRUE if the pointer is valid.
*** ./src/backend/storage/lmgr/lwlock.c Fri Jul 14 07:52:23 2006
--- ./new_src/backend/storage/lmgr/lwlock.c Tue Jul 18 15:26:22 2006
***************
*** 29,34 ****
--- 29,38 ----
#include "storage/spin.h"
+ static int NumAddinLWLocks(void);
+ static void AssignAddinLWLocks(void);
+
+
/* We use the ShmemLock spinlock to protect LWLockAssign */
extern slock_t *ShmemLock;
***************
*** 90,95 ****
--- 94,150 ----
static int *block_counts;
#endif
+ /*
+ * Structures and globals to allow add-ins to register for their own
+ * lwlocks from the preload-libraries hook.
+ */
+ typedef struct lwlocknode {
+ LWLockId *lock;
+ struct lwlocknode *next;
+ } LWLockNode;
+
+ static LWLockNode *addin_locks = NULL;
+ static int num_addin_locks = 0;
+
+
+ /*
+ * RegisterAddinLWLock() --- Allow an add-in to request a LWLock
+ * from the preload-libraries hook.
+ */
+ void
+ RegisterAddinLWLock(LWLockId *lock)
+ {
+ LWLockNode *locknode = malloc(sizeof(LWLockNode));
+ locknode->next = addin_locks;
+ locknode->lock = lock;
+ addin_locks = locknode;
+ num_addin_locks++;
+ }
+
+ /*
+ * NumAddinLWLocks() --- Return the number of LWLocks requested by add-ins.
+ */
+ int
+ NumAddinLWLocks()
+ {
+ return num_addin_locks;
+ }
+
+ /*
+ * AssignAddinLWLocks() --- Assign LWLocks previously requested by add-ins.
+ */
+ void
+ AssignAddinLWLocks()
+ {
+ LWLockNode *node = addin_locks;
+ while (node) {
+ *(node->lock) = LWLockAssign();
+ node = node->next;
+ }
+ }
+
+
+
#ifdef LOCK_DEBUG
bool Trace_lwlocks = false;
***************
*** 177,182 ****
--- 232,240 ----
/* Leave a few extra for use by user-defined modules. */
numLocks += NUM_USER_DEFINED_LWLOCKS;
+ /* Add the number that have been explicitly requested by add-ins. */
+ numLocks += NumAddinLWLocks();
+
return numLocks;
}
***************
*** 245,250 ****
--- 303,314 ----
LWLockCounter = (int *) ((char *) LWLockArray - 2 * sizeof(int));
LWLockCounter[0] = (int) FirstLockMgrLock + NUM_LOCK_PARTITIONS;
LWLockCounter[1] = numLocks;
+
+ /*
+ * Allocate LWLocks for those add-ins that have explicitly requested
+ * them.
+ */
+ AssignAddinLWLocks();
}
*** ./src/include/storage/lwlock.h Sun May 7 17:00:17 2006
--- ./new_src/include/storage/lwlock.h Tue Jul 18 15:22:09 2006
***************
*** 76,79 ****
--- 76,81 ----
extern Size LWLockShmemSize(void);
extern void CreateLWLocks(void);
+ extern void RegisterAddinLWLock(LWLockId *lock);
+
#endif /* LWLOCK_H */
*** ./src/include/storage/pg_shmem.h Sun Mar 5 07:58:59 2006
--- ./new_src/include/storage/pg_shmem.h Tue Jul 18 15:14:05 2006
***************
*** 51,54 ****
--- 51,61 ----
extern bool PGSharedMemoryIsInUse(unsigned long id1, unsigned long id2);
extern void PGSharedMemoryDetach(void);
+
+ extern void RegisterAddinContext(const char *name, Size size);
+ extern Size AddinShmemSize(void);
+ extern void InitAddinContexts(void * start);
+ extern void *ShmemAllocFromContext(Size size, const char *name);
+ extern void ShmemResetContext(const char *name);
+
#endif /* PG_SHMEM_H */
*** ./src/include/storage/shmem.h Sun Mar 5 07:59:00 2006
--- ./new_src/include/storage/shmem.h Tue Jul 18 12:49:09 2006
***************
*** 38,43 ****
--- 38,48 ----
extern DLLIMPORT SHMEM_OFFSET ShmemBase;
+ /* coerce an offset into a pointer in a specified address space. This
+ * macro (only) is not confined to the primary shared memory region */
+ #define MAKE_PTRFROM(base,xx_offs)\
+ (base+((unsigned long)(xx_offs)))
+
/* coerce an offset into a pointer in this process's address space */
#define MAKE_PTR(xx_offs)\
(ShmemBase+((unsigned long)(xx_offs)))
On Tue, 2006-07-18 at 16:36 -0700, Marc Munro wrote:
The attached patch provides add-ins with the means to register for
shared memory and LWLocks. This greatly improves the ease with which
shared memory may be used from add-ins, while blah blah blah....
I have tried to be patient but this is my first patch and the suspense
is killing me.
I'd like some idea of what is likely to happen to this patch. If it's a
non-starter I'd like to know. Ditto if it needs work. I realise
everyone is busy right now, so I'm also prepared to be told to be
patient; I'd just like to know it hasn't been overlooked and forgotten
in the sudden rush of more interesting patches.
Just to re-iterate, this patch enables Veil
http://pgfoundry.org/projects/veil/ to be a good postgres citizen,
removing its motivation to steal shared memory.
__
Marc
Marc Munro <marc@bloodnok.com> writes:
The attached patch provides add-ins with the means to register for
shared memory and LWLocks. This greatly improves the ease with which
shared memory may be used from add-ins, while blah blah blah....
I have tried to be patient but this is my first patch and the suspense
is killing me.
Sorry about that, but we're at the end of the development cycle and
there's a long list of patches awaiting review :-(. Yours seems
unlikely to affect any other work, so it's not high on the priority
list to get pushed in.
I'd like some idea of what is likely to happen to this patch.
We'll look at it, and you will have an opportunity to fix anything
anyone doesn't like. Most likely this will happen in early August.
If you have schedule constraints (like you're on vacation in August)
you should let us know.
regards, tom lane
I updated the style of your patch, and added a little to your comment
block about how to use this capability. I don't think any additional
documentation is necessary.
Thanks.
---------------------------------------------------------------------------
Marc Munro wrote:
-- Start of PGP signed section.
The attached patch provides add-ins with the means to register for
shared memory and LWLocks. This greatly improves the ease with which
shared memory may be used from add-ins, while keeping the accounting and
management for that shared memory separate.
Specifically it adds named add-in shared memory contexts. From these,
memory can be allocated without affecting the memory available in other
contexts.
Usage is as follows:
from add-in functions called from preload_libraries, you may call
RegisterAddinContext(const char *name, size_t size)
to register a new (logical) shared memory segment.
and
RegisterAddinLWLock(LWLockId *lock_ptr);
to request that a LWLock be allocated and placed into *lock_ptr.
The actual creation of the shared memory segment and lwlocks is
performed later as part of shared memory initialisation.
To allocate shared memory from a named context you would use
ShmemAllocFromContext(size_t size, const char *name);
To reset a shared memory context back to its original unused state (from
which new allocations may be performed), you may use
ShmemResetContext(const char *name);
This works for me (for Veil) and make check runs fine.
I have not included any documentation updates in the patch as I'm not
sure where such API changes should be documented.
All comments, questions and suggestions are welcomed.
__
Marc
[ Attachment, skipping... ]
-- End of PGP section, PGP failed!
--
Bruce Momjian bruce@momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Attachments:
/rtmp/diff (text/x-diff)
Index: src/backend/storage/ipc/ipci.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/ipc/ipci.c,v
retrieving revision 1.86
diff -c -c -r1.86 ipci.c
*** src/backend/storage/ipc/ipci.c 15 Jul 2006 15:47:17 -0000 1.86
--- src/backend/storage/ipc/ipci.c 1 Aug 2006 19:01:09 -0000
***************
*** 57,62 ****
--- 57,63 ----
{
PGShmemHeader *seghdr;
Size size;
+ Size size_b4addins;
int numSemas;
/*
***************
*** 93,98 ****
--- 94,108 ----
/* might as well round it off to a multiple of a typical page size */
size = add_size(size, 8192 - (size % 8192));
+ /*
+ * The shared memory for add-ins is treated as a separate
+ * segment, but in reality it is not.
+ */
+ size_b4addins = size;
+ size = add_size(size, AddinShmemSize());
+ /* round it off again */
+ size = add_size(size, 8192 - (size % 8192));
+
elog(DEBUG3, "invoking IpcMemoryCreate(size=%lu)",
(unsigned long) size);
***************
*** 101,106 ****
--- 111,126 ----
*/
seghdr = PGSharedMemoryCreate(size, makePrivate, port);
+ /*
+ * Modify hdr to show segment size before add-ins
+ */
+ seghdr->totalsize = size_b4addins;
+
+ /*
+ * Set up segment header sections in each Addin context
+ */
+ InitAddinContexts((void *) ((char *) seghdr + size_b4addins));
+
InitShmemAccess(seghdr);
/*
Index: src/backend/storage/ipc/shmem.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/ipc/shmem.c,v
retrieving revision 1.94
diff -c -c -r1.94 shmem.c
*** src/backend/storage/ipc/shmem.c 22 Jul 2006 23:04:39 -0000 1.94
--- src/backend/storage/ipc/shmem.c 1 Aug 2006 19:01:09 -0000
***************
*** 61,66 ****
--- 61,75 ----
* cannot be redistributed to other tables. We could build a simple
* hash bucket garbage collector if need be. Right now, it seems
* unnecessary.
+ *
+ * (e) Add-ins can request their own logical shared memory segments
+ * by calling RegisterAddinContext() from the preload-libraries hook.
+ * Each call establishes a uniquely named add-in shared memory
+ * context which will be set up as part of postgres initialisation.
+ * Memory can be allocated from these contexts using
+ * ShmemAllocFromContext(), and can be reset to its initial condition
+ * using ShmemResetContext(). Also, RegisterAddinLWLock(LWLockid *lock_ptr)
+ * can be used to request that a LWLock be allocated, placed into *lock_ptr.
*/
#include "postgres.h"
***************
*** 86,91 ****
--- 95,113 ----
static HTAB *ShmemIndex = NULL; /* primary index hashtable for shmem */
+ /* Structures and globals for managing add-in shared memory contexts */
+ typedef struct context
+ {
+ char *name;
+ Size size;
+ PGShmemHeader *seg_hdr;
+ struct context *next;
+ } ContextNode;
+
+ static ContextNode *addin_contexts = NULL;
+ static Size addin_contexts_size = 0;
+
+
/*
* InitShmemAccess() --- set up basic pointers to shared memory.
***************
*** 135,146 ****
* (This doesn't really belong here, but not worth moving.)
*/
ShmemVariableCache = (VariableCache)
! ShmemAlloc(sizeof(*ShmemVariableCache));
memset(ShmemVariableCache, 0, sizeof(*ShmemVariableCache));
}
/*
! * ShmemAlloc -- allocate max-aligned chunk from shared memory
*
* Assumes ShmemLock and ShmemSegHdr are initialized.
*
--- 157,260 ----
* (This doesn't really belong here, but not worth moving.)
*/
ShmemVariableCache = (VariableCache)
! ShmemAlloc(sizeof(*ShmemVariableCache));
memset(ShmemVariableCache, 0, sizeof(*ShmemVariableCache));
}
/*
! * RegisterAddinContext -- Register the requirement for a named shared
! * memory context.
! */
! void
! RegisterAddinContext(const char *name, Size size)
! {
! char *newstr = malloc(strlen(name) + 1);
! ContextNode *node = malloc(sizeof(ContextNode));
!
! strcpy(newstr, name);
! node->name = newstr;
!
! /* Round up to typical page size */
! node->size = add_size(size, 8192 - (size % 8192));
! node->next = addin_contexts;
!
! addin_contexts = node;
! addin_contexts_size = add_size(addin_contexts_size, node->size);
! }
!
!
! /*
! * ContextFromName -- Return the ContextNode for the given named
! * context, or NULL if not found.
! */
! static ContextNode *
! ContextFromName(const char *name)
! {
! ContextNode *context = addin_contexts;
!
! while (context)
! {
! if (strcmp(name, context->name) == 0)
! return context;
! context = context->next;
! }
! return NULL;
! }
!
! /*
! * InitAddinContexts -- Initialise the registered addin shared memory
! * contexts.
! */
! void
! InitAddinContexts(void *start)
! {
! PGShmemHeader *next_segment = (PGShmemHeader *) start;
! ContextNode *context = addin_contexts;
!
! while (context)
! {
! context->seg_hdr = next_segment;
!
! next_segment->totalsize = context->size;
! next_segment->freeoffset = MAXALIGN(sizeof(PGShmemHeader));
!
! next_segment = (PGShmemHeader *)
! ((char *) next_segment + context->size);
! context = context->next;
! }
! }
!
! /*
! * ShmemResetContext -- Re-initialise the named addin shared memory context.
! */
! void
! ShmemResetContext(const char *name)
! {
! PGShmemHeader *segment;
! ContextNode *context = ContextFromName(name);
!
! if (!context)
! ereport(ERROR,
! (errcode(ERRCODE_INTERNAL_ERROR),
! errmsg("cannot reset unknown shared memory context %s",
! name)));
!
! segment = context->seg_hdr;
! segment->freeoffset = MAXALIGN(sizeof(PGShmemHeader));
! }
!
! /*
! * AddinShmemSize -- Report how much shared memory has been registered
! * for add-ins.
! */
! Size
! AddinShmemSize(void)
! {
! return addin_contexts_size;
! }
!
! /*
! * ShmemAllocFromContext -- allocate max-aligned chunk from shared memory
*
* Assumes ShmemLock and ShmemSegHdr are initialized.
*
***************
*** 149,163 ****
* to be compatible with malloc().
*/
void *
! ShmemAlloc(Size size)
{
! Size newStart;
! Size newFree;
! void *newSpace;
/* use volatile pointer to prevent code rearrangement */
volatile PGShmemHeader *shmemseghdr = ShmemSegHdr;
/*
* ensure all space is adequately aligned.
*/
--- 263,292 ----
* to be compatible with malloc().
*/
void *
! ShmemAllocFromContext(Size size, const char *context_name)
{
! Size newStart;
! Size newFree;
! void *newSpace;
! ContextNode *context;
/* use volatile pointer to prevent code rearrangement */
volatile PGShmemHeader *shmemseghdr = ShmemSegHdr;
+ /*
+ * if context_name is provided, allocate from the named context
+ */
+ if (context_name)
+ {
+ context = ContextFromName(context_name);
+ if (!context)
+ ereport(ERROR,
+ (errcode(ERRCODE_INTERNAL_ERROR),
+ errmsg("cannot reset unknown shared memory context %s",
+ context_name)));
+ shmemseghdr = context->seg_hdr;
+ }
+
/*
* ensure all space is adequately aligned.
*/
***************
*** 176,182 ****
newFree = newStart + size;
if (newFree <= shmemseghdr->totalsize)
{
! newSpace = (void *) MAKE_PTR(newStart);
shmemseghdr->freeoffset = newFree;
}
else
--- 305,311 ----
newFree = newStart + size;
if (newFree <= shmemseghdr->totalsize)
{
! newSpace = (void *) MAKE_PTRFROM((SHMEM_OFFSET) shmemseghdr, newStart);
shmemseghdr->freeoffset = newFree;
}
else
***************
*** 193,198 ****
--- 322,343 ----
}
/*
+ * ShmemAlloc -- allocate max-aligned chunk from shared memory
+ *
+ * Assumes ShmemLock and ShmemSegHdr are initialized.
+ *
+ * Returns: real pointer to memory or NULL if we are out
+ * of space. Has to return a real pointer in order
+ * to be compatible with malloc().
+ */
+
+ void *
+ ShmemAlloc(Size size)
+ {
+ return ShmemAllocFromContext(size, NULL);
+ }
+
+ /*
* ShmemIsValid -- test if an offset refers to valid shared memory
*
* Returns TRUE if the pointer is valid.
Index: src/backend/storage/lmgr/lwlock.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/storage/lmgr/lwlock.c,v
retrieving revision 1.42
diff -c -c -r1.42 lwlock.c
*** src/backend/storage/lmgr/lwlock.c 24 Jul 2006 16:32:45 -0000 1.42
--- src/backend/storage/lmgr/lwlock.c 1 Aug 2006 19:01:09 -0000
***************
*** 29,34 ****
--- 29,38 ----
#include "storage/spin.h"
+ static int NumAddinLWLocks(void);
+ static void AssignAddinLWLocks(void);
+
+
/* We use the ShmemLock spinlock to protect LWLockAssign */
extern slock_t *ShmemLock;
***************
*** 90,95 ****
--- 94,155 ----
static int *block_counts;
#endif
+ /*
+ * Structures and globals to allow add-ins to register for their own
+ * lwlocks from the preload-libraries hook.
+ */
+ typedef struct LWLockNode
+ {
+ LWLockId *lock;
+ struct LWLockNode *next;
+ } LWLockNode;
+
+ static LWLockNode *addin_locks = NULL;
+ static int num_addin_locks = 0;
+
+
+ /*
+ * RegisterAddinLWLock() --- Allow an add-in to request a LWLock
+ * from the preload-libraries hook.
+ */
+ void
+ RegisterAddinLWLock(LWLockId *lock)
+ {
+ LWLockNode *locknode = malloc(sizeof(LWLockNode));
+
+ locknode->next = addin_locks;
+ locknode->lock = lock;
+
+ addin_locks = locknode;
+ num_addin_locks++;
+ }
+
+ /*
+ * NumAddinLWLocks() --- Return the number of LWLocks requested by add-ins.
+ */
+ int
+ NumAddinLWLocks()
+ {
+ return num_addin_locks;
+ }
+
+ /*
+ * AssignAddinLWLocks() --- Assign LWLocks previously requested by add-ins.
+ */
+ void
+ AssignAddinLWLocks()
+ {
+ LWLockNode *node = addin_locks;
+
+ while (node)
+ {
+ *(node->lock) = LWLockAssign();
+ node = node->next;
+ }
+ }
+
+
+
#ifdef LOCK_DEBUG
bool Trace_lwlocks = false;
***************
*** 174,179 ****
--- 234,242 ----
/* Leave a few extra for use by user-defined modules. */
numLocks += NUM_USER_DEFINED_LWLOCKS;
+ /* Add the number that have been explicitly requested by add-ins. */
+ numLocks += NumAddinLWLocks();
+
return numLocks;
}
***************
*** 241,246 ****
--- 304,315 ----
LWLockCounter = (int *) ((char *) LWLockArray - 2 * sizeof(int));
LWLockCounter[0] = (int) NumFixedLWLocks;
LWLockCounter[1] = numLocks;
+
+ /*
+ * Allocate LWLocks for those add-ins that have explicitly requested
+ * them.
+ */
+ AssignAddinLWLocks();
}
Index: src/include/storage/lwlock.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/lwlock.h,v
retrieving revision 1.30
diff -c -c -r1.30 lwlock.h
*** src/include/storage/lwlock.h 23 Jul 2006 23:08:46 -0000 1.30
--- src/include/storage/lwlock.h 1 Aug 2006 19:01:10 -0000
***************
*** 92,95 ****
--- 92,97 ----
extern Size LWLockShmemSize(void);
extern void CreateLWLocks(void);
+ extern void RegisterAddinLWLock(LWLockId *lock);
+
#endif /* LWLOCK_H */
Index: src/include/storage/pg_shmem.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/pg_shmem.h,v
retrieving revision 1.18
diff -c -c -r1.18 pg_shmem.h
*** src/include/storage/pg_shmem.h 5 Mar 2006 15:58:59 -0000 1.18
--- src/include/storage/pg_shmem.h 1 Aug 2006 19:01:10 -0000
***************
*** 51,54 ****
--- 51,61 ----
extern bool PGSharedMemoryIsInUse(unsigned long id1, unsigned long id2);
extern void PGSharedMemoryDetach(void);
+
+ extern void RegisterAddinContext(const char *name, Size size);
+ extern Size AddinShmemSize(void);
+ extern void InitAddinContexts(void * start);
+ extern void *ShmemAllocFromContext(Size size, const char *name);
+ extern void ShmemResetContext(const char *name);
+
#endif /* PG_SHMEM_H */
Index: src/include/storage/shmem.h
===================================================================
RCS file: /cvsroot/pgsql/src/include/storage/shmem.h,v
retrieving revision 1.47
diff -c -c -r1.47 shmem.h
*** src/include/storage/shmem.h 5 Mar 2006 15:59:00 -0000 1.47
--- src/include/storage/shmem.h 1 Aug 2006 19:01:10 -0000
***************
*** 38,43 ****
--- 38,48 ----
extern DLLIMPORT SHMEM_OFFSET ShmemBase;
+ /* coerce an offset into a pointer in a specified address space. This
+ * macro (only) is not confined to the primary shared memory region */
+ #define MAKE_PTRFROM(base,xx_offs)\
+ (base+((unsigned long)(xx_offs)))
+
/* coerce an offset into a pointer in this process's address space */
#define MAKE_PTR(xx_offs)\
(ShmemBase+((unsigned long)(xx_offs)))
Marc Munro <marc@bloodnok.com> writes:
The attached patch provides add-ins with the means to register for
shared memory and LWLocks.
I finally got around to reviewing this patch, and realized that it's got
a pretty fundamental design flaw: it isn't useful under Windows (or any
other EXEC_BACKEND platform), because there isn't any provision for
backends to locate structs allocated by other backends by means of
searching in shared memory. AFAICS the code can only do something
useful in a platform where allocations made in the postmaster process
can be found by backends via fork inheritance of pointers.
The right way to handle shmem allocations is to use ShmemInitStruct
to either allocate a shared struct for the first time or attach to a
previously made instance of the struct. (This "struct" could be a
memory allocation arena itself, but that's not the core code's problem.)
Now we could extend the patch so that each "addin" has its own
ShmemIndex within its private workspace, but I think that's probably
overkill. My inclination is to rip out ShmemAllocFromContext and expect
addins to use ShmemInitStruct the same as everyone else. The hook
callable at shared_preload_libraries time should just serve to add
the necessary amount to the computed size of the shared memory segment.
RegisterAddinLWLock is broken in the same way: it could only be used
safely if the registered lock ID were remembered in shared memory,
but since shared memory doesn't exist at the time it's supposed to be
called, there's no way to do that. Again, it'd seem to work as long as
the lock ID value were inherited via fork, but that's gonna fail on
EXEC_BACKEND builds. I think we should probably take this out in favor
of something that just increments a counter that replaces
NUM_USER_DEFINED_LWLOCKS, and expect people to use LWLockAssign() at an
appropriate time while initializing their shared memory areas.
It strikes me that there's a race condition here, which we've not seen
in previous use because backends expect all standard shared memory
structs to have already been initialized by the postmaster. An add-on
will instead have to operate under the regime of "first backend wanting
to use the struct must init it". Although ShmemInitStruct returns a
"found" bool telling you you've got to init it, there's no interlock
ensuring that you can do so before someone else comes along and tries to
use the struct (which he'll assume is done because he sees found = true).
And, per above discussion, an add-on can't solve this for itself using
an add-on LWLock, because it really has to acquire its add-on locks
while initializing that same shmem struct, which is where it's going to
keep the locks' identity :-(
So I think we need to provide a standard LWLock reserved for the purpose
of synchronizing first-time use of a shmem struct. The coding rules for
an add-on would then look like:
* in the shared_preload_libraries hook:
RequestAddinShmemSpace(size);
RequestAddinLWLocks(n);
* in a backend, to access a shared memory struct:
static mystruct *ptr = NULL;
if (!ptr)
{
bool found;
LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
ptr = ShmemInitStruct("my struct name", size, &found);
if (!ptr)
elog(ERROR, "out of shared memory");
if (!found)
{
initialize contents of shmem area;
acquire any requested LWLocks using:
ptr->mylockid = LWLockAssign();
}
LWLockRelease(AddinShmemInitLock);
}
Thoughts?
regards, tom lane
On Sat, 2006-10-14 at 14:55 -0400, Tom Lane wrote:
Marc Munro <marc@bloodnok.com> writes:
The attached patch provides add-ins with the means to register for
shared memory and LWLocks.
I finally got around to reviewing this patch, and realized that it's got
a pretty fundamental design flaw: it isn't useful under Windows (or any
other EXEC_BACKEND platform), because there isn't any provision for
backends to locate structs allocated by other backends by means of
searching in shared memory. AFAICS the code can only do something
useful in a platform where allocations made in the postmaster process
can be found by backends via fork inheritance of pointers.
Rats! You are right. I had quite overlooked the windows case.
The right way to handle shmem allocations is to use ShmemInitStruct
to either allocate a shared struct for the first time or attach to a
previously made instance of the struct. (This "struct" could be a
memory allocation arena itself, but that's not the core code's problem.)
Now we could extend the patch so that each "addin" has its own
ShmemIndex within its private workspace, but I think that's probably
overkill. My inclination is to rip out ShmemAllocFromContext and expect
addins to use ShmemInitStruct the same as everyone else. The hook
callable at shared_preload_libraries time should just serve to add
the necessary amount to the computed size of the shared memory segment.
I think that works for me.
RegisterAddinLWLock is broken in the same way: it could only be used
safely if the registered lock ID were remembered in shared memory,
but since shared memory doesn't exist at the time it's supposed to be
called, there's no way to do that. Again, it'd seem to work as long as
the lock ID value were inherited via fork, but that's gonna fail on
EXEC_BACKEND builds. I think we should probably take this out in favor
of something that just increments a counter that replaces
NUM_USER_DEFINED_LWLOCKS, and expect people to use LWLockAssign() at an
appropriate time while initializing their shared memory areas.
Agreed.
It strikes me that there's a race condition here, which we've not seen
in previous use because backends expect all standard shared memory
structs to have already been initialized by the postmaster. An add-on
will instead have to operate under the regime of "first backend wanting
to use the struct must init it". Although ShmemInitStruct returns a
"found" bool telling you you've got to init it, there's no interlock
ensuring that you can do so before someone else comes along and tries to
use the struct (which he'll assume is done because he sees found = true).
And, per above discussion, an add-on can't solve this for itself using
an add-on LWLock, because it really has to acquire its add-on locks
while initializing that same shmem struct, which is where it's going to
keep the locks' identity :-(
So I think we need to provide a standard LWLock reserved for the purpose
of synchronizing first-time use of a shmem struct. The coding rules for
an add-on would then look like:
* in the shared_preload_libraries hook:
RequestAddinShmemSpace(size);
RequestAddinLWLocks(n);
* in a backend, to access a shared memory struct:
static mystruct *ptr = NULL;
if (!ptr)
{
bool found;
LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
ptr = ShmemInitStruct("my struct name", size, &found);
if (!ptr)
elog(ERROR, "out of shared memory");
if (!found)
{
initialize contents of shmem area;
acquire any requested LWLocks using:
ptr->mylockid = LWLockAssign();
}
LWLockRelease(AddinShmemInitLock);
}
Thoughts?
I am content that what you suggest is the right way to go. I will work
on a new patch immediately, unless you would prefer to do this yourself.
It seems to me that these coding rules should be documented in the main
documentation, I guess in the section that describes Dynamic Loading.
Would you like me to take a stab at that?
__
Marc
Marc Munro <marc@bloodnok.com> writes:
I am content that what you suggest is the right way to go. I will work
on a new patch immediately, unless you would prefer to do this yourself.
I've already got some of the relevant changes made in my local copy,
so I might as well just go ahead and finish the coding work. But if
you'd like to do the documentation part, be my guest.
It seems to me that these coding rules should be documented in the main
documentation, I guess in the section that describes Dynamic Loading.
I'm not entirely sure where to put it, either. Subsection 32.9.1 (Dynamic
Loading) is pretty basic stuff for first-time C-function authors ---
this seems off-topic for that section. Maybe a new subsection down near
the end of 32.9? Another possibility is somewhere in the Internals
chapters, although I don't offhand see an obvious place there either.
regards, tom lane
On Sun, 2006-10-15 at 16:36 -0400, Tom Lane wrote:
I'm not entirely sure where to put it, either. Subsection 32.9.1 (Dynamic
Loading) is pretty basic stuff for first-time C-function authors ---
this seems off-topic for that section. Maybe a new subsection down near
the end of 32.9? Another possibility is somewhere in the Internals
chapters, although I don't offhand see an obvious place there either.
This patch, against xfunc.sgml, adds a new subsection 33.9.12, Shared
Memory and LWLocks in C-Language Functions, describing how shared memory
and lwlocks may be requested by C add-in functions.
I have, btw, verified that this works in the forthcoming release of
Veil.
__
Marc
Attachments:
patch_xfunc.sgml (text/x-patch; charset=ANSI_X3.4-1968)
*** xfunc.sgml 2006-10-31 12:52:01.000000000 -0800
--- xfunc.sgml.new 2006-10-31 12:51:51.000000000 -0800
***************
*** 2909,2912 ****
--- 2909,2960 ----
</programlisting>
</para>
</sect2>
+ <sect2>
+ <title>Shared Memory and LWLocks in C-Language Functions</title>
+
+ <para>
+ Add-ins may reserve LWLocks and an allocation of shared memory on server
+ startup. The add-in's shared library must be preloaded, by specifying
+ it in
+ <xref linkend="guc-shared-preload-libraries"><indexterm><primary>shared-preload-libraries</></>,
+ and the shared memory must be reserved by calling:
+ <programlisting>
+ void RequestAddinShmemSpace(int size)
+ </programlisting>
+ from your <literal>_PG_init</> function.
+ </para>
+ <para>
+ LWLocks are reserved by calling:
+ <programlisting>
+ void RequestAddinLWLocks(int n)
+ </programlisting>
+ from <literal>_PG_init</>.
+ </para>
+ <para>
+ To avoid possible race-conditions, each backend should use the LWLock
+ <literal>AddinShmemInitLock</> when connecting to and initializing
+ its allocation of shared memory, as shown here:
+
+ <programlisting>
+ static mystruct *ptr = NULL;
+
+ if (!ptr)
+ {
+ bool found;
+
+ LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
+ ptr = ShmemInitStruct("my struct name", size, &found);
+ if (!ptr)
+ elog(ERROR, "out of shared memory");
+ if (!found)
+ {
+ initialize contents of shmem area;
+ acquire any requested LWLocks using:
+ ptr->mylockid = LWLockAssign();
+ }
+ LWLockRelease(AddinShmemInitLock);
+ }
+ </programlisting>
+ </para>
+ </sect2>
</sect1>
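For completeness, here is a minimal single-file sketch that puts the documented
pattern together (the MyState struct, "my addin state" key and function names
are purely illustrative, not part of the patch):

#include "postgres.h"
#include "fmgr.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

typedef struct MyState
{
    LWLockId    lock;
    int         counter;
} MyState;

static MyState *mystate = NULL;

/* runs while the library is loaded via shared_preload_libraries */
void
_PG_init(void)
{
    RequestAddinShmemSpace(sizeof(MyState));
    RequestAddinLWLocks(1);
}

/* call from backend code before first use of the shared struct */
static void
mystate_attach(void)
{
    bool        found;

    if (mystate)
        return;

    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
    mystate = ShmemInitStruct("my addin state", sizeof(MyState), &found);
    if (!mystate)
        elog(ERROR, "out of shared memory");
    if (!found)
    {
        /* first backend in: initialise contents and claim our LWLock */
        mystate->lock = LWLockAssign();
        mystate->counter = 0;
    }
    LWLockRelease(AddinShmemInitLock);
}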
Patch applied. Thanks. Your documentation changes can be viewed in
five minutes using links on the developer's page,
http://www.postgresql.org/developer/testing.
---------------------------------------------------------------------------
Marc Munro wrote:
-- Start of PGP signed section.
On Sun, 2006-10-15 at 16:36 -0400, Tom Lane wrote:
I'm not entirely sure where to put it, either. Subsection 32.9.1 (Dynamic
Loading) is pretty basic stuff for first-time C-function authors ---
this seems off-topic for that section. Maybe a new subsection down near
the end of 32.9? Another possibility is somewhere in the Internals
chapters, although I don't offhand see an obvious place there either.
This patch, against xfunc.sgml, adds a new subsection 33.9.12, Shared
Memory and LWLocks in C-Language Functions, describing how shared memory
and lwlocks may be requested by C add-in functions.
I have, btw, verified that this works in the forthcoming release of
Veil.
__
Marc
[ Attachment, skipping... ]
-- End of PGP section, PGP failed!
--
Bruce Momjian bruce@momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +