PATCH: decreasing memory needlessly consumed by array_agg
Hi,
this is a patch for an issue reported in October 2013 on pgsql-bugs:
/messages/by-id/3839201.Nfa2RvcheX@techfox.foxi
Frank van Vugt reported that a simple query with array_agg() and a large
number of groups (1e7) fails with OOM even on a machine with 32GB of
RAM.
So for example doing this:
CREATE TABLE test (a INT, b INT);
INSERT INTO test SELECT i, i FROM generate_series(1,10000000) s(i);
SELECT a, array_agg(b) FROM test GROUP BY a;
allocates huge amounts of RAM, easily forces the machine into
swapping, and eventually gets killed by the OOM killer (on my
workstation with 8GB of RAM that happens almost immediately).
Upon investigation, this seems to be caused by a combination of issues:
(1) per-group memory contexts - each group state uses a dedicated
memory context, which is defined like this (in accumArrayResult):
arr_context = AllocSetContextCreate(rcontext,
"accumArrayResult",
ALLOCSET_DEFAULT_MINSIZE,
ALLOCSET_DEFAULT_INITSIZE,
ALLOCSET_DEFAULT_MAXSIZE);
which actually means this
arr_context = AllocSetContextCreate(rcontext,
"accumArrayResult",
0,
(8*1024),
(8*1024*1024));
so each group will allocate at least 8kB of memory (on the first
palloc call). With 1e7 groups, that's ~80GB of RAM, even if each
group contains just 1 item.
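To make the scale of that overhead concrete, here is a quick back-of-the-envelope sketch (not from the patch; the 8kB figure is ALLOCSET_DEFAULT_INITSIZE, the first block each per-group context allocates):

```python
# Rough arithmetic behind the claim above: 1e7 per-group contexts,
# each allocating at least one 8kB block, vs. the raw data size.
groups = 10_000_000
init_block = 8 * 1024      # ALLOCSET_DEFAULT_INITSIZE (8kB)
raw_item = 4               # one 32-bit integer per group

overhead_gb = groups * init_block / 1024 ** 3
raw_mb = groups * raw_item / 1024 ** 2
print(f"context blocks: ~{overhead_gb:.0f} GB, raw data: ~{raw_mb:.0f} MB")
```

That is roughly three orders of magnitude more memory than the data itself needs.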
(2) minimum block size in aset.c - The first idea I got was to decrease
the block size in the allocator. So I decreased it to 256B but I
was still getting OOM. Then I found that aset.c contains this:
if (initBlockSize < 1024)
initBlockSize = 1024;
so effectively the lowest allowed block size is 1kB. Which means
~10GB of memory for the state data (i.e. not considering overhead
of the hash table etc., which is not negligible).
Considering we're talking about 1e7 32-bit integers, i.e. 40MB
of raw data, that's still pretty excessive (250x more).
While I question whether the 1kB minimum block size makes sense, the
bigger problem here is the per-group memory contexts themselves. What
is the point of a per-group memory context?
The memory will be allocated when the first row of the group is
received, and won't be released until the whole result set is
processed. At least that's how it works for Hash Aggregate.
However that's exactly how it would work with a single memory context,
which has the significant benefit that all the groups share the same
memory (so the minimum block size is not an issue).
That is exactly what the patch aims to do - it removes the per-group
memory contexts and reuses the main memory context of the aggregate
itself.
The patch also does one more thing - it changes how the arrays (in the
aggregate state) grow. Originally it worked like this
/* initial size */
astate->alen = 64;
/* when full, grow exponentially */
if (astate->nelems >= astate->alen)
astate->alen *= 2;
so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
(note we're talking about elements, not bytes, so with 32-bit
integers it's actually 256B -> 512B -> 1024B -> ...).
While I do understand the point of this (minimizing palloc overhead), I
find this pretty dangerous, especially in case of array_agg(). I've
modified the growth like this:
/* initial size */
astate->alen = 4;
/* when full, grow linearly */
if (astate->nelems >= astate->alen)
astate->alen += 4;
I admit that might be a bit too aggressive, and maybe there's a better
way to do this - with better balance between safety and speed. I was
thinking about something like this:
/* initial size */
astate->alen = 4;
/* when full, grow exponentially, then linearly */
if (astate->nelems >= astate->alen)
if (astate->alen < 128)
astate->alen *= 2;
else
astate->alen += 128;
i.e. initial size with exponential growth, but capped at 128 elements.
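The trade-off between the three policies can be sketched with a toy simulation (the helper below is illustrative, not from the patch): how many repallocs each policy needs, and how much it over-allocates, when appending n elements one at a time.

```python
# Toy model of the three growth policies discussed above. For each,
# return (final allocated length, number of repallocs) after appending
# n elements one by one.

def simulate(n, initial, grow):
    alen, reallocs = initial, 0
    for nelems in range(n):
        if nelems >= alen:          # array full, grow it
            alen = grow(alen)
            reallocs += 1
    return alen, reallocs

doubling = lambda alen: alen * 2                      # master: 64 -> 128 -> 256 ...
linear = lambda alen: alen + 4                        # patch: 4 -> 8 -> 12 ...
hybrid = lambda alen: alen * 2 if alen < 128 else alen + 128

for n in (1, 1000, 100000):
    print(n, simulate(n, 64, doubling), simulate(n, 4, linear),
          simulate(n, 4, hybrid))
```

The simulation shows why pure linear growth is risky for speed: for large groups it needs O(n) repallocs, while doubling and the hybrid policy keep the repalloc count low at the cost of some slack.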
regards
Tomas
Attachment: array-agg.patch (text/x-diff, +4/-14)
On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
The patch also does one more thing - it changes how the arrays (in the
aggregate state) grow. Originally it worked like this

/* initial size */
astate->alen = 64;

/* when full, grow exponentially */
if (astate->nelems >= astate->alen)
    astate->alen *= 2;

so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
(note we're talking about elements, not bytes, so with 32-bit
integers it's actually 256B -> 512B -> 1024B -> ...).

While I do understand the point of this (minimizing palloc overhead), I
find this pretty dangerous, especially in case of array_agg(). I've
modified the growth like this:

/* initial size */
astate->alen = 4;

/* when full, grow linearly */
if (astate->nelems >= astate->alen)
    astate->alen += 4;

I admit that might be a bit too aggressive, and maybe there's a better
way to do this - with better balance between safety and speed. I was
thinking about something like this:

/* initial size */
astate->alen = 4;

/* when full, grow exponentially, then linearly */
if (astate->nelems >= astate->alen)
    if (astate->alen < 128)
        astate->alen *= 2;
    else
        astate->alen += 128;

i.e. initial size with exponential growth, but capped at 128 elements.
So I think this kind of thing is very sensible, but the last time I
suggested something similar, I got told "no":
/messages/by-id/CAEYLb_WLGHT7yJLaRE9PPeRt5RKd5ZJbb15tE+kpgejgQKORyA@mail.gmail.com
But I think you're right and the objections previously raised are
wrong. I suspect that the point at which we should stop doubling is
higher than 128 elements, because that's only 8kB, which really isn't
that big - and the idea that the resizing overhead takes only
amortized constant time is surely appealing. But I still think that
doubling *forever* is a bad idea, here and there. The fact that we've
written the code that way in lots of places doesn't make it the right
algorithm.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 31.3.2014 21:04, Robert Haas wrote:
On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra <tv@fuzzy.cz> wrote:

The patch also does one more thing - it changes how the arrays (in the
aggregate state) grow. Originally it worked like this

/* initial size */
astate->alen = 64;

/* when full, grow exponentially */
if (astate->nelems >= astate->alen)
    astate->alen *= 2;

so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
(note we're talking about elements, not bytes, so with 32-bit
integers it's actually 256B -> 512B -> 1024B -> ...).

While I do understand the point of this (minimizing palloc overhead), I
find this pretty dangerous, especially in case of array_agg(). I've
modified the growth like this:

/* initial size */
astate->alen = 4;

/* when full, grow linearly */
if (astate->nelems >= astate->alen)
    astate->alen += 4;

I admit that might be a bit too aggressive, and maybe there's a better
way to do this - with better balance between safety and speed. I was
thinking about something like this:

/* initial size */
astate->alen = 4;

/* when full, grow exponentially, then linearly */
if (astate->nelems >= astate->alen)
    if (astate->alen < 128)
        astate->alen *= 2;
    else
        astate->alen += 128;

i.e. initial size with exponential growth, but capped at 128 elements.

So I think this kind of thing is very sensible, but the last time I
suggested something similar, I got told "no":

/messages/by-id/CAEYLb_WLGHT7yJLaRE9PPeRt5RKd5ZJbb15tE+kpgejgQKORyA@mail.gmail.com

But I think you're right and the objections previously raised are
wrong. I suspect that the point at which we should stop doubling is
higher than 128 elements, because that's only 8kB, which really
isn't that big - and the idea that the resizing overhead takes only
amortized constant time is surely appealing. But I still think that
doubling *forever* is a bad idea, here and there. The fact that
we've written the code that way in lots of places doesn't make it the
right algorithm.
I've been thinking about it a bit more and maybe the doubling is not
that bad an idea, after all. What I'd like to see is a solution that
"wastes" less than some known fraction of the allocated memory, and
apparently that's what doubling does ...
Let's assume we have many buffers (arrays in array_agg), allocated in
this manner. Let's assume the buffers are independent, i.e. the doubling
is not somehow "synchronized" for the buffers.
Now, at arbitrary time the buffers should be ~75% full on average. There
will be buffers that were just doubled (50% full), buffers that will be
doubled soon (100% full) and buffers somewhere in between. But on
average the buffers should be 75%. That means we're "wasting" 25% memory
on average, which seems quite acceptable to me. We could probably use a
different growth rate (say 1.5x, resulting in ~17% memory being
"wasted"), but I don't see this as the main problem (and I won't fight
for this part of the array_agg patch).
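The "average fullness" argument above can be sanity-checked with a tiny calculation (a sketch under the same uniform-distribution assumption):

```python
# Expected buffer fullness under growth factor k: a buffer holds
# between 1/k and 1 of its capacity (it was just grown, or is about
# to grow), so on average it is (1/k + 1) / 2 full.

def avg_fill(k):
    return (1 / k + 1) / 2

for k in (2.0, 1.5):
    fill = avg_fill(k)
    print(f"{k}x growth: ~{fill:.0%} full, ~{1 - fill:.0%} wasted")
```

For doubling this gives the 75% full / 25% wasted figure above; for 1.5x growth it gives roughly 83% full / 17% wasted.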
The "current" array_agg however violates some of the assumptions
mentioned above, because it
(1) pre-allocates quite large number of items (64) at the beginning,
resulting in ~98% of memory being "wasted" initially
(2) allocates one memory context per group, with 8kB initial size, so
you're actually wasting ~99.95% of the memory
(3) thanks to the dedicated memory contexts, the doubling is pretty
much pointless up until you cross the 8kB boundary
IMNSHO these are the issues we really should fix - by lowering the
initial element count (64->4) and using a single memory context.
regards
Tomas
How much of this problem can be attributed to the fact that repalloc has
to copy the data from the old array into the new one? If it's large,
perhaps we could solve it by replicating the trick we use for
InvalidationChunk. It'd be a bit messy, but the mess would be pretty
well contained, I think.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Tomas Vondra <tv@fuzzy.cz> writes:
I've been thinking about it a bit more and maybe the doubling is not
that bad idea, after all.
It is not. There's a reason why that's our standard behavior.
The "current" array_agg however violates some of the assumptions
mentioned above, because it
(1) pre-allocates quite large number of items (64) at the beginning,
resulting in ~98% of memory being "wasted" initially
(2) allocates one memory context per group, with 8kB initial size, so
you're actually wasting ~99.999% of the memory
(3) thanks to the dedicated memory contexts, the doubling is pretty
much pointless up until you cross the 8kB boundary
IMNSHO these are the issues we really should fix - by lowering the
initial element count (64->4) and using a single memory context.
The real issue here is that all those decisions are perfectly reasonable
if you expect that a large number of values will get aggregated --- and
even if you don't expect that, they're cheap insurance in simple cases.
It only gets to be a problem if you have a lot of concurrent executions
of array_agg, such as in a grouped-aggregate query. You're essentially
arguing that in the grouped-aggregate case, it's better to optimize on
the assumption that only a very small number of values will get aggregated
(per hash table entry) --- which is possibly reasonable, but the argument
that it's okay to pessimize the behavior for other cases seems pretty
flimsy from here.
Actually, though, the patch as given outright breaks things for both the
grouped and ungrouped cases, because the aggregate no longer releases
memory when it's done. That's going to result in memory bloat not
savings, in any situation where the aggregate is executed repeatedly.
I think a patch that stood a chance of getting committed would need to
detect whether the aggregate was being called in simple or grouped
contexts, and apply different behaviors in the two cases. And you
can't just remove the sub-context without providing some substitute
cleanup mechanism. Possibly you could keep the context but give it
some much-more-miserly allocation parameters in the grouped case.
regards, tom lane
On 1.4.2014 19:08, Tom Lane wrote:
Tomas Vondra <tv@fuzzy.cz> writes:
I've been thinking about it a bit more and maybe the doubling is not
that bad an idea, after all.

It is not. There's a reason why that's our standard behavior.
The "current" array_agg however violates some of the assumptions
mentioned above, because it
(1) pre-allocates quite large number of items (64) at the beginning,
resulting in ~98% of memory being "wasted" initially
(2) allocates one memory context per group, with 8kB initial size, so
you're actually wasting ~99.999% of the memory
(3) thanks to the dedicated memory contexts, the doubling is pretty
much pointless up until you cross the 8kB boundary

IMNSHO these are the issues we really should fix - by lowering the
initial element count (64->4) and using a single memory context.

The real issue here is that all those decisions are perfectly
reasonable if you expect that a large number of values will get
aggregated --- and even if you don't expect that, they're cheap
insurance in simple cases.
Yes, if you expect a large number of values it's perfectly valid. But
what if those assumptions are faulty? Is it OK to fail because of OOM
even for trivial queries breaking those assumptions?
I'd like to improve that and make this work without impacting the
queries that match the assumptions.
It only gets to be a problem if you have a lot of concurrent
executions of array_agg, such as in a grouped-aggregate query. You're
essentially arguing that in the grouped-aggregate case, it's better
to optimize on the assumption that only a very small number of values
will get aggregated (per hash table entry) --- which is possibly
reasonable, but the argument that it's okay to pessimize the behavior
for other cases seems pretty flimsy from here.
I'm not saying it's okay to pessimize the behavior of other cases. I
admit decreasing the initial size from 64 to only 4 items may be too
aggressive - let's measure the difference and tweak the number
accordingly. Heck, even 64 items is way lower than the 8kB utilized by
each per-group memory context right now.
Actually, though, the patch as given outright breaks things for both
the grouped and ungrouped cases, because the aggregate no longer
releases memory when it's done. That's going to result in memory
bloat not savings, in any situation where the aggregate is executed
repeatedly.
Really? Can you provide a query for which the current and patched code
behave differently?
Looking at array_agg_finalfn (which is the final function for
array_agg), I see it does this:
/*
* Make the result. We cannot release the ArrayBuildState because
* sometimes aggregate final functions are re-executed. Rather, it
* is nodeAgg.c's responsibility to reset the aggcontext when it's
* safe to do so.
*/
result = makeMdArrayResult(state, 1, dims, lbs,
CurrentMemoryContext,
false);
i.e. it sets release=false. So I fail to see how the current code
behaves differently from the patched one. If it wasn't releasing the
memory before, it's not releasing it now either.
In both cases the memory gets released when the aggcontext gets released
in nodeAgg.c (as explained by the comment in the code).
However, after looking at the code now, I think it's actually wrong to
remove the MemoryContextDelete from makeMdArrayResult(). It does not
make any difference to array_agg (which sets release=false anyway),
but it makes a difference to functions calling makeArrayResult(), as
that uses release=true. That however is not called by aggregate
functions, but from regexp_split_to_array, xpath and subplans.
I think a patch that stood a chance of getting committed would need to
detect whether the aggregate was being called in simple or grouped
contexts, and apply different behaviors in the two cases. And you
can't just remove the sub-context without providing some substitute
cleanup mechanism. Possibly you could keep the context but give it
some much-more-miserly allocation parameters in the grouped case.
I don't think the patch removes any cleanup mechanism (see above), but
maybe I'm wrong.
Yes, tweaking the parameters depending on the aggregate - whether it's
simple or grouped, or maybe an estimated number of elements per group -
seems like a good idea.
regards
Tomas
Tomas Vondra <tv@fuzzy.cz> writes:
On 1.4.2014 19:08, Tom Lane wrote:
Actually, though, the patch as given outright breaks things for both
the grouped and ungrouped cases, because the aggregate no longer
releases memory when it's done. That's going to result in memory
bloat not savings, in any situation where the aggregate is executed
repeatedly.
Looking at array_agg_finalfn (which is the final function for
array_agg), I see it does this:
/*
* Make the result. We cannot release the ArrayBuildState because
* sometimes aggregate final functions are re-executed. Rather, it
* is nodeAgg.c's responsibility to reset the aggcontext when it's
* safe to do so.
*/
result = makeMdArrayResult(state, 1, dims, lbs,
CurrentMemoryContext,
false);
i.e. it sets release=false. So I fail to see how the current code
behaves differently from the patch?
You're conveniently ignoring the callers that set release=true.
Reverse engineering a query that exhibits memory bloat is left
as an exercise for the reader (but in a quick look, I'll bet
ARRAY_SUBLINK subplans are one locus for problems).
It's possible that it'd work to use a subcontext only if release=true;
I've not dug through the code enough to convince myself of that.
regards, tom lane
On 1.4.2014 20:56, Tom Lane wrote:
Tomas Vondra <tv@fuzzy.cz> writes:
On 1.4.2014 19:08, Tom Lane wrote:
You're conveniently ignoring the callers that set release=true.
Reverse engineering a query that exhibits memory bloat is left
as an exercise for the reader (but in a quick look, I'll bet
ARRAY_SUBLINK subplans are one locus for problems).
No, I'm not. I explicitly mentioned those cases (although you're right I
concentrated mostly on cases with release=false, because of array_agg).
It's possible that it'd work to use a subcontext only if
release=true; I've not dug through the code enough to convince myself
of that.
Maybe, though 'release' is not available in makeArrayResult() which is
where the memory context needs to be decided. So all the callers would
need to be modified to supply this parameter. But there are only ~15
places where makeArrayResult is called.
regards
Tomas
Hi,
Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).
The v2 of the patch does this:
* adds 'subcontext' flag to initArrayResult* methods
If it's 'true' then a separate context is created for the
ArrayBuildState instance, otherwise it's built within the parent
context.
Currently, only the array_agg_* functions pass 'subcontext=false', so
that the array_agg() aggregate does not create a separate context for
each group. All other callers use 'true' and thus keep using the
original implementation (this includes ARRAY_SUBLINK subplans, and
various other places building arrays incrementally).
* adds 'release' flag to makeArrayResult
This is mostly to make it consistent with makeArrayResultArr and
makeArrayResultAny. All current callers use 'release=true'.
Also, it seems that using 'subcontext=false' and then 'release=true'
would be a bug. Maybe it would be appropriate to store the
'subcontext' value into the ArrayBuildState and then throw an error
if makeArrayResult* is called with (release=true && subcontext=false).
* modifies all the places calling those functions
* decreases the number of preallocated elements to 8
The original value was 64 elements (512B), the new value is 8 elements
(64B), not counting the 'nulls' array. More about this later ...
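The ownership rules sketched in the points above can be modeled in a few lines (this is a hypothetical Python sketch, not the PostgreSQL API; Arena stands in for a memory context):

```python
# Minimal model of the v2 design: the build state either owns a private
# "context" (subcontext=True) or borrows the caller's, and releasing the
# context from makeArrayResult* is only legal when the state owns it.

class Arena:
    """Stand-in for a PostgreSQL memory context."""
    def __init__(self):
        self.blocks = []

class ArrayBuildState:
    def __init__(self, parent, subcontext=True):
        self.subcontext = subcontext
        self.ctx = Arena() if subcontext else parent

    def make_result(self, release=True):
        # The safety check suggested above: releasing a borrowed
        # context would free the caller's memory as well.
        assert not (release and not self.subcontext), \
            "release=True requires subcontext=True"
        if release:
            self.ctx.blocks.clear()
        return "array"

parent = Arena()
grouped = ArrayBuildState(parent, subcontext=False)   # array_agg-style
print(grouped.make_result(release=False))             # parent freed later by nodeAgg.c
```

In this model the (subcontext=False, release=True) combination trips the assertion, which mirrors the check proposed for makeArrayResult*.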
Now, some performance measurements - attached is a simple SQL script
that executes a few GROUP BY queries with various numbers of groups and
group elements. I ran the tests with two dataset sizes:
small
=====
a) 1M groups, 1 item per group
b) 100k groups, 16 items per group
c) 100k groups, 64 items per group
d) 10k groups, 1024 items per group
large
=====
a) 10M groups, 1 item per group
b) 1M groups, 16 items per group
c) 1M groups, 64 items per group
d) 100k groups, 1024 items per group
So essentially the 'large' dataset uses 10x the number of groups. The
results (average of 5 runs, in ms) look like this:
small
=====
test | master | patched | diff
-----|--------|---------|-------
a | 1419 | 834 | -41%
b | 595 | 498 | -16%
c | 2061 | 1832 | -11%
d | 2197 | 1957 | -11%
large
=====
test | master | patched | diff
-----|--------|---------|-------
a | OOM | 9144 | n/a
b | 7366 | 6257 | -15%
c | 29899 | 22940 | -23%
d | 35456 | 31347 | -12%
So it seems to give a solid speedup across the whole test suite - I've
yet to find a case where it's actually slower than what we have now. The
test cases (b) and (c) were actually created with this goal, because
both should be OK with the original array size (64 elements), but with
the new size it requires a few repalloc() calls. But even those are much
faster.
This is most likely thanks to removing the AllocSetContextCreate call
and sharing freelists across groups (although the test cases don't seem
extremely suitable for that, as all the groups grow in parallel).
I even tried to bump the initial array size back to 64 elements, but the
performance actually decreased a bit for some reason. I have no idea why
this happens ...
The test script is attached - tweak the 'size' variable for different
dataset sizes. The (insane) work_mem sizes are used to force a hash
aggregate - clearly I don't have 1TB of RAM.
regards
Tomas
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra <tv@fuzzy.cz> wrote:
Hi,
Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).
Hi Tomas,
When configured --with-libxml I get compilation errors:
xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'
And when configured --with-perl, I get:
plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'
Cheers,
Jeff
On 15.12.2014 22:35, Jeff Janes wrote:
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra <tv@fuzzy.cz> wrote:

Hi,

Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).

Hi Tomas,
When configured --with-libxml I get compilation errors:
xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:
plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,
Thanks, attached is a version that fixes this.
regards
Tomas
Attachment: array-agg-v3.patch (text/x-diff, +56/-53)
2014-12-16 6:27 GMT+07:00 Tomas Vondra <tv@fuzzy.cz>:
On 15.12.2014 22:35, Jeff Janes wrote:
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra <tv@fuzzy.cz> wrote:

Hi,
Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).
Hi Tomas,
When configured --with-libxml I get compilation errors:
xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:
plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,
Thanks, attached is a version that fixes this.
Just had a quick look at the patch.

The patch does not implement the check for not creating a new context in
initArrayResultArr. I think we should implement it there too, for
consistency (and to prevent future problems).
Regards,
--
Ali Akbar
2014-12-16 10:47 GMT+07:00 Ali Akbar <the.apaan@gmail.com>:
2014-12-16 6:27 GMT+07:00 Tomas Vondra <tv@fuzzy.cz>:
On 15.12.2014 22:35, Jeff Janes wrote:
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra <tv@fuzzy.cz> wrote:

Hi,
Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).
Hi Tomas,
When configured --with-libxml I get compilation errors:
xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:
plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,
Thanks, attached is a version that fixes this.
Just had a quick look at the patch.

The patch does not implement the check for not creating a new context in
initArrayResultArr. I think we should implement it there too, for
consistency (and to prevent future problems).
Looking at the modifications to the accumArrayResult* functions, I'm not
really comfortable with two things:

1. Code that calls accumArrayResult* after explicitly calling
initArrayResult* must always pass 'subcontext', but it has no effect.
2. All existing code that calls accumArrayResult must be changed.
Just an idea: why don't we minimize the API change like this:

1. Add the bool 'subcontext' parameter only to the initArrayResult*
functions, not to accumArrayResult*.
2. Code that does not want a subcontext must call initArrayResult*
explicitly.

Other code that calls accumArrayResult directly then only needs changes
in the call to makeArrayResult* (with the release=true parameter). In
places where we don't want to create a subcontext (as in
array_agg_transfn), modify the code to call initArrayResult* before
accumArrayResult*.
What do you think?
Regards,
--
Ali Akbar
2014-12-16 11:01 GMT+07:00 Ali Akbar <the.apaan@gmail.com>:
2014-12-16 10:47 GMT+07:00 Ali Akbar <the.apaan@gmail.com>:
2014-12-16 6:27 GMT+07:00 Tomas Vondra <tv@fuzzy.cz>:
Just had a quick look at the patch.

The patch does not implement the check for not creating a new context
in initArrayResultArr. I think we should implement it there too, for
consistency (and to prevent future problems).
Testing the performance with your query looks promising: the speedup is
between 12% and 15%.
Because i'm using 32-bit systems, setting work_mem to 1024GB failed:
ERROR: 1073741824 is outside the valid range for parameter "work_mem" (64
.. 2097151)
STATEMENT: SET work_mem = '1024GB';
psql:/media/truecrypt1/oss/postgresql/postgresql/../patches/array-agg.sql:20:
ERROR: 1073741824 is outside the valid range for parameter "work_mem" (64
.. 2097151)
Maybe because of that, in the large-groups test the speedup is awesome:
master: 16,819 ms
with patch: 1,720 ms
Looks like with master, postgres resorts to disk, but with the patch it
fits in memory.

Note: I haven't tested the large dataset.
As expected, testing array_agg(anyarray), the performance is still the
same, because the subcontext handling isn't implemented there (test
script modified from Tomas', attached).
I implemented the subcontext checking in initArrayResultArr by changing
the v3 patch like this:

+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4797,10 +4797,11 @@
 initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext,
                    bool subcontext)
 {
     ArrayBuildStateArr *astate;
-    MemoryContext arr_context;
+    MemoryContext arr_context = rcontext;  /* by default use the parent ctx */

     /* Make a temporary context to hold all the junk */
-    arr_context = AllocSetContextCreate(rcontext,
+    if (subcontext)
+        arr_context = AllocSetContextCreate(rcontext,
                                         "accumArrayResultArr",
                                         ALLOCSET_DEFAULT_MINSIZE,
                                         ALLOCSET_DEFAULT_INITSIZE,
Testing the performance, it got the 12%~15% speedup. Good. (patch attached)
Looking at the modifications to the accumArrayResult* functions, I'm not
really comfortable with two things:

1. Code that calls accumArrayResult* after explicitly calling
initArrayResult* must always pass 'subcontext', but it has no effect.
2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the API change like this:

1. Add the bool 'subcontext' parameter only to the initArrayResult*
functions, not to accumArrayResult*.
2. Code that does not want a subcontext must call initArrayResult*
explicitly.

Other code that calls accumArrayResult directly then only needs changes
in the call to makeArrayResult* (with the release=true parameter). In
places where we don't want to create a subcontext (as in
array_agg_transfn), modify the code to call initArrayResult* before
accumArrayResult*.

What do you think?
As for your concern about calling initArrayResult* with subcontext=false
while makeArrayResult* is called with release=true:
Also, it seems that using 'subcontext=false' and then 'release=true'
would be a bug. Maybe it would be appropriate to store the
'subcontext' value into the ArrayBuildState and then throw an error
if makeArrayResult* is called with (release=true && subcontext=false).
Yes, I think we should do that to minimize unexpected coding errors. In
makeArrayResult*, I think it's better not to throw an error, but to use
an assertion:
Assert(release == false || astate->subcontext == true);
Regards,
--
Ali Akbar
Hi!
First of all, thanks for the review - the insights and comments are
spot-on. More comments below.
On 20.12.2014 09:26, Ali Akbar wrote:
2014-12-16 11:01 GMT+07:00 Ali Akbar <the.apaan@gmail.com>:
2014-12-16 10:47 GMT+07:00 Ali Akbar <the.apaan@gmail.com>:
2014-12-16 6:27 GMT+07:00 Tomas Vondra <tv@fuzzy.cz>:
Just fast-viewing the patch.

The patch does not implement the check for not creating a new context in initArrayResultArr. I think we should implement it there too, for consistency (and to prevent future problems).
You're right that initArrayResultArr was missing the code deciding
whether to create a subcontext or reuse the parent one, and the fix you
proposed (i.e. reusing code from initArrayResult) is IMHO the right one.
Testing the performance with your query looks promising: the speedup is between 12% and 15%.

Because I'm using a 32-bit system, setting work_mem to 1024GB failed:
ERROR: 1073741824 is outside the valid range for parameter
"work_mem" (64 .. 2097151)
STATEMENT: SET work_mem = '1024GB';
psql:/media/truecrypt1/oss/postgresql/postgresql/../patches/array-agg.sql:20:
ERROR: 1073741824 is outside the valid range for parameter
"work_mem" (64 .. 2097151)
Yes, that's pretty clearly because of the 2GB limit on 32-bit systems.
Maybe because of that, in the large-groups test the speedup is awesome:
master: 16,819 ms
with patch: 1,720 ms
Probably. It's difficult to say without explain plans or something, but
it's probably using a different plan (e.g. group aggregate).
Looks like with master, postgres resorts to disk, but with the patch it fits in memory.
I'd bet that's not postgres, but the system using swap (because postgres allocates a lot of memory).
Note: I haven't tested the large dataset.
As expected, testing array_agg(anyarray), the performance is still the same, because the subcontext handling hasn't been implemented there (test script modified from Tomas', attached).

I implemented the subcontext checking in initArrayResultArr by changing the v3 patch like this:

+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4797,10 +4797,11 @@
 initArrayResultArr(Oid array_type, Oid element_type,
                    MemoryContext rcontext, bool subcontext)
 {
 	ArrayBuildStateArr *astate;
-	MemoryContext arr_context;
+	MemoryContext arr_context = rcontext;	/* by default use the parent ctx */

 	/* Make a temporary context to hold all the junk */
-	arr_context = AllocSetContextCreate(rcontext,
+	if (subcontext)
+		arr_context = AllocSetContextCreate(rcontext,
 										"accumArrayResultArr",
 										ALLOCSET_DEFAULT_MINSIZE,
 										ALLOCSET_DEFAULT_INITSIZE,

Testing the performance, it got a 12%~15% speedup. Good. (patch attached)
Nice, and it's consistent with my measurements on scalar values.
Looking at the modification in the accumArrayResult* functions, I'm not really comfortable with:

1. Code that calls accumArrayResult* after explicitly calling initArrayResult* must always pass subcontext, but it has no effect.
2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the change in the API like this:

1. Add a bool subcontext parameter only in the initArrayResult* functions, not in accumArrayResult*.
2. Code that doesn't want to create a subcontext must call initArrayResult* explicitly.

Other code that calls accumArrayResult directly only needs to be changed in the call to makeArrayResult* (with the release=true parameter). In places where we don't want to create a subcontext (as in array_agg_transfn), modify it to use initArrayResult* before calling accumArrayResult*.

What do you think?
I think it's an interesting idea.
I've been considering this before, when thinking about the best way to
keep the calls to the various methods consistent (e.g. enforcing the use
of release=true only with subcontexts).
What I ended up doing (see the v4 patch attached) is that I
(1) added 'private_cxt' flag to the ArrayBuildState[Arr] struct,
tracking whether there's a private memory context
(2) rolled back all the API changes, except for the initArray*
methods (as you proposed)
This has the positive benefit that it allows checking consistency of the
calls - you can still do
initArrayResult(..., subcontext=false)
...
makeArrayResult(..., release=true)
but it won't reset the memory context, and with assert-enabled build it
will actually fail.
Another positive benefit is that this won't break the code unless it
uses the new API. This is a problem especially with external code (e.g.
extensions), but the new API (initArray*) is not part of 9.4 so there's
no such code. So that's nice.
The one annoying thing is that this makes the API slightly unbalanced.
With the new API you can use a shared memory context, which with the old
one (not using the initArray* methods) you can't.
But I'm OK with that, and it makes the patch smaller (15kB -> 11kB).
As per your concern about calling initArrayResult* with subcontext=false, while makeArrayResult* with release=true:

Also, it seems that using 'subcontext=false' and then 'release=true' would be a bug. Maybe it would be appropriate to store the 'subcontext' value into the ArrayBuildState and then throw an error if makeArrayResult* is called with (release=true && subcontext=false).

Yes, I think we should do that to minimize unexpected coding errors. In makeArrayResult*, I think it's better not to throw an error, but to use assertions:

Assert(release == false || astate->subcontext == true);
Yes. I called the flag 'private_cxt' but that's a minor difference. The
assert I used is this:
/* we can only release the context if it's a private one. */
Assert(! (release && !astate->private_cxt));
regards
Tomas
Attachments:
array-agg-v4.patch (text/x-diff, +63/-30)
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).
kind regards
Tomas Vondra
Attachments:
array-agg-v5.patch (text/x-diff, +63/-30)
Tomas Vondra wrote:
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).
The functions that gain a new argument should get their comment updated,
to explain what the new argument is for.
Also, what is it with this hunk?
@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
MemoryContextSwitchTo(oldcontext);
+	/* we can only release the context if it's a private one. */
+	// Assert(! (release && !astate->private_cxt));
+
 	/* Clean up all the junk */
 	if (release)
 		MemoryContextDelete(astate->mcontext);
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 21.12.2014 02:54, Alvaro Herrera wrote:
Tomas Vondra wrote:
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).

The functions that gain a new argument should get their comment updated,
to explain what the new argument is for.
Right. I've added a short description of the 'subcontext' parameter to
all three variations of the initArray* function, and a more thorough
explanation to initArrayResult().
Also, what is it with this hunk?
@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
MemoryContextSwitchTo(oldcontext);
+	/* we can only release the context if it's a private one. */
+	// Assert(! (release && !astate->private_cxt));
+
 	/* Clean up all the junk */
 	if (release)
 		MemoryContextDelete(astate->mcontext);
That's a mistake, of course - the assert should not be commented out.
Attached is v6 of the patch, with the comments and assert fixed.
Thinking about the 'release' flag a bit more - maybe we could do this
instead:
if (release && astate->private_cxt)
MemoryContextDelete(astate->mcontext);
else if (release)
{
pfree(astate->dvalues);
pfree(astate->dnulls);
pfree(astate);
}
i.e. either destroy the whole context if possible, and just free the
memory when using a shared memory context. But I'm afraid this would
penalize the shared memory context, because that's intended for cases
where all the build states coexist in parallel and then at some point
are all converted into a result and thrown away. Adding pfree() calls is
no improvement here, and just wastes cycles.
regards
Tomas
Attachments:
array-agg-v6.patch (text/x-diff, +76/-30)
Tomas Vondra <tv@fuzzy.cz> writes:
i.e. either destroy the whole context if possible, and just free the
memory when using a shared memory context. But I'm afraid this would
penalize the shared memory context, because that's intended for cases
where all the build states coexist in parallel and then at some point
are all converted into a result and thrown away. Adding pfree() calls is
no improvement here, and just wastes cycles.
FWIW, I quite dislike the terminology "shared memory context", because
it sounds too much like it means "a context in shared memory". I see
that the patch itself doesn't use that phrase, which is good, but can
we come up with some other phrase for talking about it?
regards, tom lane
Another positive benefit is that this won't break the code unless it
uses the new API. This is a problem especially with external code (e.g.
extensions), but the new API (initArray*) is not part of 9.4 so there's
no such code. So that's nice.

The one annoying thing is that this makes the API slightly unbalanced. With the new API you can use a shared memory context, which with the old one (not using the initArray* methods) you can't.

But I'm OK with that, and it makes the patch smaller (15kB -> 11kB).
Yes, with this API, we can backpatch this patch to 9.4 (or below) if we
need it there.
I think this API is a good compromise between the old API and the new API. Ideally, if we can migrate all code to the new API (all code must call initArrayResult* before accumArrayResult*), we can remove the MemoryContext rcontext parameter from accumArrayResult. Currently, the code isn't using rcontext for anything except old-API calls (in the first call to accumArrayResult).
2014-12-21 20:38 GMT+07:00 Tomas Vondra <tv@fuzzy.cz>:
On 21.12.2014 02:54, Alvaro Herrera wrote:
Tomas Vondra wrote:
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).

The functions that gain a new argument should get their comment updated, to explain what the new argument is for.

Right. I've added a short description of the 'subcontext' parameter to
all three variations of the initArray* function, and a more thorough
explanation to initArrayResult().
With this API, I think we should make it clear that if we call initArrayResult with subcontext=false, we can't call makeArrayResult; we must use makeMdArrayResult directly.
Or better, we can modify makeArrayResult to release according to
astate->private_cxt:
@@ -4742,7 +4742,7 @@ makeArrayResult(ArrayBuildState *astate,
 	dims[0] = astate->nelems;
 	lbs[0] = 1;

-	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, true);
+	return makeMdArrayResult(astate, ndims, dims, lbs, rcontext, astate->private_cxt);
Or else we implement what you suggest below (more comments below):
Thinking about the 'release' flag a bit more - maybe we could do this
instead:
if (release && astate->private_cxt)
MemoryContextDelete(astate->mcontext);
else if (release)
{
pfree(astate->dvalues);
pfree(astate->dnulls);
pfree(astate);
}

i.e. either destroy the whole context if possible, and just free the
memory when using a shared memory context. But I'm afraid this would
penalize the shared memory context, because that's intended for cases
where all the build states coexist in parallel and then at some point
are all converted into a result and thrown away. Adding pfree() calls is
no improvement here, and just wastes cycles.
As per Tom's comment, I'm using "parent memory context" instead of "shared memory context" below.
In the future, if some code writer decides to use subcontext=false to save memory in cases with many array accumulations, and the parent memory context is long-lived, the current code can cause a memory leak. So I think we should implement your suggestion (pfreeing astate) and warn about the implication in the API comment. The API user must choose between release=true, wasting cycles but preventing memory leaks, or managing memory via the parent memory context.
In one possible use case, for efficiency the caller might create a special parent memory context for all array accumulation, then use makeArrayResult* with release=false, and at the end release the parent memory context once for all.
Regards,
--
Ali Akbar