Safe memory allocation functions
Hi all,
Over the last couple of weeks it has been mentioned a few times that
it would be useful to have a set of palloc APIs able to return NULL on
OOM, allowing certain code paths to avoid an ERROR and take another
route when memory is under pressure. This has been mentioned, for
example, on the FPW compression thread, or here:
/messages/by-id/CAB7nPqRbewhSbJ_tkAogtpcMrxYJsvKKB9p030d0TpijB4t3YA@mail.gmail.com
Attached is a patch adding the following set of functions for frontend
and backends returning NULL instead of reporting ERROR when allocation
fails:
- palloc_safe
- palloc0_safe
- repalloc_safe
This simply needed some refactoring in aset.c to set up the new
functions by passing an additional control flag; I didn't think that
adding a new safe version of AllocSetContextCreate was worth it.
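To illustrate the calling convention these functions are meant to enable,
here is a minimal standalone sketch; it uses plain malloc() as a stand-in
for the proposed palloc_safe (the real thing lives in PostgreSQL's
memory-context machinery), and the names palloc_safe_standin and
alloc_with_fallback are hypothetical:

```c
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for the proposed palloc_safe(): NULL on failure, no ERROR. */
static void *
palloc_safe_standin(size_t size)
{
	return malloc(size);
}

/*
 * Try to grab a big working buffer; on failure, fall back to a smaller
 * one rather than aborting the operation.  Returns the size obtained,
 * or 0 if even the fallback failed.
 */
size_t
alloc_with_fallback(size_t want, size_t fallback, void **out)
{
	void	   *p = palloc_safe_standin(want);

	if (p == NULL)
	{
		want = fallback;
		p = palloc_safe_standin(want);
	}
	if (p == NULL)
	{
		*out = NULL;
		return 0;
	}
	*out = p;
	return want;
}
```

The point is only that the caller, not the allocator, decides what the
degraded route looks like when memory is under pressure.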
Those APIs are not called anywhere yet, but I could for example write
a small test extension that could be put in src/test/modules or
published on GitHub in my plugin repo. Also, I am not sure whether this
is material for 9.5, even though the patch is not complicated, but let
me know if you are interested in it and I'll add it to the next CF.
Regards,
--
Michael
Attachments:
20150113_palloc_safe.patch (text/x-diff; charset=US-ASCII) [+451 -286]
Michael Paquier <michael.paquier@gmail.com> writes:
Attached is a patch adding the following set of functions for frontend
and backends returning NULL instead of reporting ERROR when allocation
fails:
- palloc_safe
- palloc0_safe
- repalloc_safe
Unimpressed with this naming convention. "_unsafe" would be nearer
the mark ;-)
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Michael Paquier wrote
Attached is a patch adding the following set of functions for frontend
and backends returning NULL instead of reporting ERROR when allocation
fails:
- palloc_safe
- palloc0_safe
- repalloc_safe
The only thing I can contribute is paint...I'm not fond of the word "_safe"
and think "_try" would be more informative...in the spirit of try/catch as a
means of error handling/recovery.
David J.
--
View this message in context: http://postgresql.nabble.com/Safe-memory-allocation-functions-tp5833709p5833711.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.
I wrote:
Michael Paquier <michael.paquier@gmail.com> writes:
Attached is a patch adding the following set of functions for frontend
and backends returning NULL instead of reporting ERROR when allocation
fails:
- palloc_safe
- palloc0_safe
- repalloc_safe
Unimpressed with this naming convention. "_unsafe" would be nearer
the mark ;-)
Less snarkily: "_noerror" would probably fit better with existing
precedents in our code.
However, there is a larger practical problem with this whole concept,
which is that experience should teach us to be very wary of the assumption
that asking for memory the system can't give us will just lead to nice
neat malloc-returns-NULL behavior. Any small perusal of the mailing list
archives will remind you that very often the end result will be SIGSEGV,
OOM kills, unrecoverable trap-on-write when the kernel realizes it can't
honor a copy-on-write promise, yadda yadda. Agreed that it's arguable
that these only occur in misconfigured systems ... but misconfiguration
appears to be the default in a depressingly large fraction of systems.
(This is another reason for "_safe" not being the mot juste :-()
In that light, I'm not really convinced that there's a safe use-case
for a behavior like this. I certainly wouldn't risk asking for a couple
of gigabytes on the theory that I could just ask for less if it fails.
regards, tom lane
Tom Lane writes:
[blah]
(This is another reason for "_safe" not being the mot juste :-()
My wording was definitely incorrect, but I'm sure you got it: I should
have said "safe on error". "noerror" or "error_safe" would definitely
be more correct.
In that light, I'm not really convinced that there's a safe use-case
for a behavior like this. I certainly wouldn't risk asking for a couple
of gigabytes on the theory that I could just ask for less if it fails.
That's also a matter of documentation. We could add a couple of
lines, for example in xfunc.sgml, to describe the limitations of such
APIs.
--
Michael
On Tue, Jan 13, 2015 at 10:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
However, there is a larger practical problem with this whole concept,
which is that experience should teach us to be very wary of the assumption
that asking for memory the system can't give us will just lead to nice
neat malloc-returns-NULL behavior. Any small perusal of the mailing list
archives will remind you that very often the end result will be SIGSEGV,
OOM kills, unrecoverable trap-on-write when the kernel realizes it can't
honor a copy-on-write promise, yadda yadda. Agreed that it's arguable
that these only occur in misconfigured systems ... but misconfiguration
appears to be the default in a depressingly large fraction of systems.
(This is another reason for "_safe" not being the mot juste :-()
I don't really buy this. It's pretty incredible to think that after a
malloc() failure there is absolutely no hope of carrying on sanely.
If that were true, we wouldn't be able to ereport() out-of-memory
errors at any severity less than FATAL, but of course it doesn't work
that way. Moreover, AllocSetAlloc() contains malloc() and, if that
fails, calls malloc() again with a smaller value, without even
throwing an error.
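The retry behavior Robert mentions can be sketched roughly as follows.
This is a simplified, hypothetical rendition using malloc() directly
(the function name alloc_block_with_retry is illustrative, not the
actual aset.c code):

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Simplified sketch: if malloc() of a full-sized block fails, halve
 * the request and retry until it succeeds or a minimum size is
 * reached, without throwing an error in between.
 */
void *
alloc_block_with_retry(size_t blksize, size_t minsize, size_t *actual)
{
	void	   *block;

	for (;;)
	{
		block = malloc(blksize);
		if (block != NULL || blksize <= minsize)
			break;
		blksize >>= 1;			/* halve and retry, no error thrown */
		if (blksize < minsize)
			blksize = minsize;
	}
	if (actual)
		*actual = (block != NULL) ? blksize : 0;
	return block;
}
```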
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
On Tue, Jan 13, 2015 at 10:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
However, there is a larger practical problem with this whole concept,
which is that experience should teach us to be very wary of the assumption
that asking for memory the system can't give us will just lead to nice
neat malloc-returns-NULL behavior. Any small perusal of the mailing list
archives will remind you that very often the end result will be SIGSEGV,
OOM kills, unrecoverable trap-on-write when the kernel realizes it can't
honor a copy-on-write promise, yadda yadda. Agreed that it's arguable
that these only occur in misconfigured systems ... but misconfiguration
appears to be the default in a depressingly large fraction of systems.
(This is another reason for "_safe" not being the mot juste :-()
I don't really buy this. It's pretty incredible to think that after a
malloc() failure there is absolutely no hope of carrying on sanely.
If that were true, we wouldn't be able to ereport() out-of-memory
errors at any severity less than FATAL, but of course it doesn't work
that way. Moreover, AllocSetAlloc() contains malloc() and, if that
fails, calls malloc() again with a smaller value, without even
throwing an error.
I understood Tom's point differently: instead of malloc() failing,
malloc() will return a supposedly usable pointer, but later usage of it
will lead to a crash of some sort. We know this does happen in reality,
because people do report it; but we also know how to fix it. And for
systems that have been correctly set up, the new behavior (using some
plan B for when malloc actually fails instead of spuriously succeeding
only to cause a later crash) will be much more convenient.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jan 14, 2015 at 9:42 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
Robert Haas wrote:
On Tue, Jan 13, 2015 at 10:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
However, there is a larger practical problem with this whole concept,
which is that experience should teach us to be very wary of the assumption
that asking for memory the system can't give us will just lead to nice
neat malloc-returns-NULL behavior. Any small perusal of the mailing list
archives will remind you that very often the end result will be SIGSEGV,
OOM kills, unrecoverable trap-on-write when the kernel realizes it can't
honor a copy-on-write promise, yadda yadda. Agreed that it's arguable
that these only occur in misconfigured systems ... but misconfiguration
appears to be the default in a depressingly large fraction of systems.
(This is another reason for "_safe" not being the mot juste :-()
I don't really buy this. It's pretty incredible to think that after a
malloc() failure there is absolutely no hope of carrying on sanely.
If that were true, we wouldn't be able to ereport() out-of-memory
errors at any severity less than FATAL, but of course it doesn't work
that way. Moreover, AllocSetAlloc() contains malloc() and, if that
fails, calls malloc() again with a smaller value, without even
throwing an error.
I understood Tom's point differently: instead of malloc() failing,
malloc() will return a supposedly usable pointer, but later usage of it
will lead to a crash of some sort. We know this does happen in reality,
because people do report it; but we also know how to fix it. And for
systems that have been correctly set up, the new behavior (using some
plan B for when malloc actually fails instead of spuriously succeeding
only to cause a later crash) will be much more convenient.
Hmm, I understood Tom to be opposing the idea of a palloc variant that
returns NULL on failure, and I understand you to be supporting it.
But maybe I'm confused. Anyway, I support it. I agree that there are
systems (or circumstances?) where malloc is going to succeed and then
the world will blow up later on anyway, but I don't think that means
that an out-of-memory error is the only sensible response to a palloc
failure; returning NULL seems like a sometimes-useful alternative.
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2015-01-15 08:40:34 -0500, Robert Haas wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
palloc_or_null()?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Jan 15, 2015 at 8:42 AM, Andres Freund <andres@2ndquadrant.com> wrote:
On 2015-01-15 08:40:34 -0500, Robert Haas wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
palloc_or_null()?
That'd work for me, too.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
Hmm, I understood Tom to be opposing the idea of a palloc variant that
returns NULL on failure, and I understand you to be supporting it.
But maybe I'm confused.
Your understanding seems correct to me. I was just saying that your
description of Tom's argument to dislike the idea seemed at odds with
what he was actually saying.
Anyway, I support it. I agree that there are
systems (or circumstances?) where malloc is going to succeed and then
the world will blow up later on anyway, but I don't think that means
that an out-of-memory error is the only sensible response to a palloc
failure; returning NULL seems like a sometimes-useful alternative.
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Fri, Jan 16, 2015 at 12:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
Voting for palloc_noerror() as well.
--
Michael
On Fri, Jan 16, 2015 at 8:47 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:
Voting for palloc_noerror() as well.
And here is an updated patch using this naming, added to the next CF as well.
--
Michael
Attachments:
20150116_palloc_noerror.patch (text/x-diff; charset=US-ASCII) [+451 -286]
On 2015-01-16 08:47:10 +0900, Michael Paquier wrote:
On Fri, Jan 16, 2015 at 12:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
Voting for palloc_noerror() as well.
I don't like that name. It very well can error out. E.g. because of the
allocation size. And we definitely do not want to ignore that case. How
about palloc_try()?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2015-01-16 23:06:12 +0900, Michael Paquier wrote:
/*
+ * Wrappers for allocation functions.
+ */
+static void *set_alloc_internal(MemoryContext context,
+				Size size, bool noerror);
+static void *set_realloc_internal(MemoryContext context, void *pointer,
+				Size size, bool noerror);
+
+/*
 * These functions implement the MemoryContext API for AllocSet contexts.
 */
 static void *AllocSetAlloc(MemoryContext context, Size size);
+static void *AllocSetAllocNoError(MemoryContext context, Size size);
 static void AllocSetFree(MemoryContext context, void *pointer);
 static void *AllocSetRealloc(MemoryContext context, void *pointer, Size size);
+static void *AllocSetReallocNoError(MemoryContext context,
+				void *pointer, Size size);
 static void AllocSetInit(MemoryContext context);
 static void AllocSetReset(MemoryContext context);
 static void AllocSetDelete(MemoryContext context);
@@ -264,8 +275,10 @@ static void AllocSetCheck(MemoryContext context);
  */
 static MemoryContextMethods AllocSetMethods = {
 	AllocSetAlloc,
+	AllocSetAllocNoError,
 	AllocSetFree,
 	AllocSetRealloc,
+	AllocSetReallocNoError,
 	AllocSetInit,
 	AllocSetReset,
 	AllocSetDelete,
@@ -517,140 +530,16 @@ AllocSetContextCreate(MemoryContext parent,
 }
Wouldn't it make more sense to change the MemoryContext API to return
NULLs in case of allocation failure and do the error checking in the
mcxt.c callers?
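The division of labor suggested here could be sketched like this. It is
an illustrative mock, not the real PostgreSQL structs: the context-level
method always returns NULL on failure, and the single error-handling
decision lives in the mcxt.c-level wrapper. All names (MockContext,
mock_aset_alloc, mock_mcxt_alloc) are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative mock of a memory context with one allocation method. */
typedef struct MockContext
{
	/* the method always returns NULL on failure, never errors */
	void	   *(*alloc) (struct MockContext *context, size_t size);
} MockContext;

static void *
mock_aset_alloc(MockContext *context, size_t size)
{
	(void) context;
	return malloc(size);		/* aset.c level: just report NULL */
}

/*
 * mcxt.c-level wrapper: the one place that decides whether an
 * allocation failure becomes an error or is passed to the caller.
 */
void *
mock_mcxt_alloc(MockContext *context, size_t size, int noerror)
{
	void	   *ret = context->alloc(context, size);

	if (ret == NULL && !noerror)
	{
		fprintf(stderr, "out of memory\n");	/* stand-in for ereport(ERROR) */
		abort();
	}
	return ret;
}
```

With this shape, aset.c needs no duplicated NoError entry points at all.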
+/* wrapper routines for allocation */
+static void* palloc_internal(Size size, bool noerror);
+static void* repalloc_internal(void *pointer, Size size, bool noerror);
+
 /*
  * You should not do memory allocations within a critical section, because
  * an out-of-memory error will be escalated to a PANIC. To enforce that
@@ -684,8 +688,8 @@ MemoryContextAllocZeroAligned(MemoryContext context, Size size)
 	return ret;
 }
-void *
-palloc(Size size)
+static void*
+palloc_internal(Size size, bool noerror)
 {
 	/* duplicates MemoryContextAlloc to avoid increased overhead */
 	void	   *ret;
@@ -698,31 +702,85 @@ palloc(Size size)
 	CurrentMemoryContext->isReset = false;
-	ret = (*CurrentMemoryContext->methods->alloc) (CurrentMemoryContext, size);
+	if (noerror)
+		ret = (*CurrentMemoryContext->methods->alloc_noerror)
+			(CurrentMemoryContext, size);
+	else
+		ret = (*CurrentMemoryContext->methods->alloc)
+			(CurrentMemoryContext, size);
 	VALGRIND_MEMPOOL_ALLOC(CurrentMemoryContext, ret, size);

 	return ret;
 }
I'd be rather surprised if these branches won't show up in
profiles. This is really rather hot code. At the very least this helper
function should be inlined. Also, calling the valgrind function on an
allocation failure surely isn't correct.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Thu, Jan 15, 2015 at 10:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
Hmm, I understood Tom to be opposing the idea of a palloc variant that
returns NULL on failure, and I understand you to be supporting it.
But maybe I'm confused.
Your understanding seems correct to me. I was just saying that your
description of Tom's argument to dislike the idea seemed at odds with
what he was actually saying.
OK, that may be. I'm not sure.
Anyway, I support it. I agree that there are
systems (or circumstances?) where malloc is going to succeed and then
the world will blow up later on anyway, but I don't think that means
that an out-of-memory error is the only sensible response to a palloc
failure; returning NULL seems like a sometimes-useful alternative.
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
I don't care for noerror() because it probably still will error in
some circumstances; just not for OOM.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
On Thu, Jan 15, 2015 at 10:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
I don't care for noerror() because it probably still will error in
some circumstances; just not for OOM.
Yes, but that seems fine to me. We have other functions with "noerror"
flags, and they can still fail under some circumstances -- just not if
the error is the most commonly considered scenario in which they fail.
The first example I found is LookupAggNameTypeNames(); there are many
more. I don't think this causes any confusion in practice.
Another precedent we have is something like "missing_ok" as a flag name
in get_object_address() and other places; following that, we could have
this new function as "palloc_oom_ok" or something like that. But it
doesn't seem an improvement to me. (I'm pretty sure we all agree that
this must not be a flag to palloc but rather a new function.)
Of all the ones you proposed above, the one I like the most is
palloc_no_oom, but IMO palloc_noerror is still better.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2015-01-16 12:09:25 -0300, Alvaro Herrera wrote:
Robert Haas wrote:
On Thu, Jan 15, 2015 at 10:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
I don't care for noerror() because it probably still will error in
some circumstances; just not for OOM.
Yes, but that seems fine to me. We have other functions with "noerror"
flags, and they can still fail under some circumstances -- just not if
the error is the most commonly considered scenario in which they fail.
We rely on palloc erroring out on large allocations in a couple places
as a crosscheck. I don't think this argument holds much water.
The first example I found is LookupAggNameTypeNames(); there are many
more. I don't think this causes any confusion in practice.
Another precedent we have is something like "missing_ok" as a flag name
in get_object_address() and other places; following that, we could have
this new function as "palloc_oom_ok" or something like that. But it
doesn't seem an improvement to me. (I'm pretty sure we all agree that
this must not be a flag to palloc but rather a new function.)
Of all the ones you proposed above, the one I like the most is
palloc_no_oom, but IMO palloc_noerror is still better.
Neither seem very accurate. no_oom isn't true because they actually can
cause ooms. _noerror isn't true because they can error out - we
e.g. rely on palloc erroring out when reading toast tuples (to detect
invalid datum lengths) and during parsing of WAL as an additional
defense.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund wrote:
On 2015-01-16 12:09:25 -0300, Alvaro Herrera wrote:
Robert Haas wrote:
On Thu, Jan 15, 2015 at 10:57 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
I do think that "safe" is the wrong suffix. Maybe palloc_soft_fail()
or palloc_null() or palloc_no_oom() or palloc_unsafe().
I liked palloc_noerror() better myself FWIW.
I don't care for noerror() because it probably still will error in
some circumstances; just not for OOM.
Yes, but that seems fine to me. We have other functions with "noerror"
flags, and they can still fail under some circumstances -- just not if
the error is the most commonly considered scenario in which they fail.
We rely on palloc erroring out on large allocations in a couple places
as a crosscheck. I don't think this argument holds much water.
I don't understand what that has to do with it. Surely we're not going
to have palloc_noerror() not error out when presented with a huge
allocation. My point is just that the "noerror" bit in palloc_noerror()
means that it doesn't fail in OOM, and that there are other causes for
it to error.
One thought I just had is that we also have MemoryContextAllocHuge; are
we going to consider a mixture of both things in the future, i.e. allow
huge allocations to return NULL when OOM? It sounds a bit useless
currently, but it doesn't seem extremely far-fetched that we will need
further flags in the future. (Or, perhaps, we will want to have code
that retries a Huge allocation that returns NULL with a smaller size,
just in case it does work.) Maybe what we need is to turn these things
into flags to a new generic function. Furthermore, I question whether
we really need a "palloc" variant -- I mean, can we live with just the
MemoryContext API instead? As with the "Huge" variant (which does not
have a corresponding palloc equivalent), possible use cases seem very
limited so there's probably not much point in providing a shortcut.
So how about something like
#define ALLOCFLAG_HUGE 0x01
#define ALLOCFLAG_NO_ERROR_ON_OOM 0x02
void *
MemoryContextAllocFlags(MemoryContext context, Size size, int flags);
and perhaps even
#define MemoryContextAllocHuge(cxt, sz) \
MemoryContextAllocFlags(cxt, sz, ALLOCFLAG_HUGE)
for source-level compatibility.
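Using the flag names from the proposal above, such a function might
branch roughly as follows. This is a hedged, self-contained mock, not
the real mcxt.c code: mock_alloc_flags and MOCK_MAX_ALLOC are invented
stand-ins (the latter for PostgreSQL's MaxAllocSize check), and the
errored out-parameter stands in for ereport(ERROR):

```c
#include <stddef.h>
#include <stdlib.h>

#define ALLOCFLAG_HUGE            0x01
#define ALLOCFLAG_NO_ERROR_ON_OOM 0x02

/* Illustrative cap standing in for the MaxAllocSize check. */
#define MOCK_MAX_ALLOC ((size_t) 0x3fffffff)

/*
 * Hypothetical flags-based allocator: oversized requests require
 * ALLOCFLAG_HUGE, and an OOM only returns NULL (instead of erroring)
 * when ALLOCFLAG_NO_ERROR_ON_OOM is given.
 */
void *
mock_alloc_flags(size_t size, int flags, int *errored)
{
	void	   *ret;

	*errored = 0;
	if (size > MOCK_MAX_ALLOC && !(flags & ALLOCFLAG_HUGE))
	{
		*errored = 1;			/* stand-in for elog(ERROR, "invalid request") */
		return NULL;
	}
	ret = malloc(size);
	if (ret == NULL && !(flags & ALLOCFLAG_NO_ERROR_ON_OOM))
		*errored = 1;			/* stand-in for ereport(ERROR, ...) */
	return ret;
}
```

The appeal of this shape is that future variants become new flag bits
rather than new entry points.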
(Now we all agree that palloc() itself is a very hot spot and shouldn't
be touched at all. I don't think these new functions are used as commonly
as that, so the fact that they are slightly slower shouldn't be too
troublesome.)
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2015-01-16 12:56:18 -0300, Alvaro Herrera wrote:
Andres Freund wrote:
We rely on palloc erroring out on large allocations in a couple places
as a crosscheck. I don't think this argument holds much water.
I don't understand what that has to do with it. Surely we're not going
to have palloc_noerror() not error out when presented with a huge
allocation.
Precisely. That means it *does* error out in a somewhat expected path.
My point is just that the "noerror" bit in palloc_noerror() means that
it doesn't fail in OOM, and that there are other causes for it to
error.
That description pretty much describes why it's a misnomer, no?
One thought I just had is that we also have MemoryContextAllocHuge; are
we going to consider a mixture of both things in the future, i.e. allow
huge allocations to return NULL when OOM?
I definitely think we should. I'd even say that the usecase is larger
for huge allocations. It'd e.g. be rather nice to first try sorting with
the huge 16GB work mem and then try 8GB/4/1GB if that fails.
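That backoff could look something like the sketch below, with sizes
scaled down and malloc() standing in for a no-error huge allocation
(the function name alloc_workmem_with_backoff is hypothetical):

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Sketch of the backoff idea: try the largest buffer first, then
 * halve until an allocation succeeds or we reach a floor.
 */
void *
alloc_workmem_with_backoff(size_t want, size_t floor, size_t *got)
{
	while (want >= floor)
	{
		void	   *buf = malloc(want);	/* stand-in for a no-error huge palloc */

		if (buf != NULL)
		{
			*got = want;
			return buf;
		}
		want /= 2;				/* e.g. 16GB -> 8GB -> 4GB -> ... */
	}
	*got = 0;
	return NULL;
}
```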
It sounds a bit useless
currently, but it doesn't seem extremely far-fetched that we will need
further flags in the future. (Or, perhaps, we will want to have code
that retries a Huge allocation that returns NULL with a smaller size,
just in case it does work.) Maybe what we need is to turn these things
into flags to a new generic function. Furthermore, I question whether
we really need a "palloc" variant -- I mean, can we live with just the
MemoryContext API instead? As with the "Huge" variant (which does not
have a corresponding palloc equivalent), possible use cases seem very
limited so there's probably not much point in providing a shortcut.
I'm fine with not providing a palloc() equivalent, but I also am fine
with having it.
So how about something like
#define ALLOCFLAG_HUGE 0x01
#define ALLOCFLAG_NO_ERROR_ON_OOM 0x02
void *
MemoryContextAllocFlags(MemoryContext context, Size size, int flags);
and perhaps even
#define MemoryContextAllocHuge(cxt, sz) \
MemoryContextAllocFlags(cxt, sz, ALLOCFLAG_HUGE)
for source-level compatibility.
I don't know, this seems a bit awkward to use. Your earlier example with
the *Huge variant that returns a smaller allocation doesn't really
convince me - that'd need a separate API anyway.
I definitely do not want to push the nofail stuff via the
MemoryContextData API into aset.c. IMO aset.c should always return
NULL, and mcxt.c should then throw the error in the normal palloc()
function.
(Now we all agree that palloc() itself is a very hot spot and shouldn't
be touched at all. I don't think these new functions are used as commonly
as that, so the fact that they are slightly slower shouldn't be too
troublesome.)
Yea, the speed of the new functions really shouldn't matter.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services