Avg performance for int8/numeric
Avg performance for these two datatypes can be improved by *not*
calculating the sum of squares in the shared accumulator
(do_numeric_accum). However, there is a little subtlety, as this function
is also shared by variance and stddev!
This patch:
- Modifies do_numeric_accum to take an extra bool parameter, and skips
calculating sumX2 when it is false.
- Amends all the accumulators that call it to include the bool (set to
true).
- Adds new functions [int8|numeric]_avg_accum that call do_numeric_accum
with the bool set to false.
- Amends the bootstrap entries for pg_aggregate to use the new
accumulators for avg(int8|numeric).
- Adds the new accumulators into the bootstrap entries for pg_proc.
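To illustrate the shape of the change (a sketch only - the real do_numeric_accum is C code operating on a 3-element array of numerics; plain Python floats stand in for numerics here):

```python
# Illustrative sketch: transition state is (N, sum(X), sum(X*X)),
# mirroring the 3-element numeric transition array.

def do_accum(state, newval, use_sum_x2):
    """Shared accumulator; sumX2 is maintained only when the caller
    (variance/stddev) actually needs it."""
    n, sum_x, sum_x2 = state
    n += 1
    sum_x += newval
    if use_sum_x2:                      # variance/stddev path
        sum_x2 += newval * newval
    return (n, sum_x, sum_x2)           # avg() never touches sumX2

def avg_accum(state, newval):
    """Shape of the new [int8|numeric]_avg_accum functions."""
    return do_accum(state, newval, False)

def var_accum(state, newval):
    """Existing variance/stddev accumulators pass True."""
    return do_accum(state, newval, True)
```

Running both over the values 1, 2, 3 gives (3, 6.0, 0) for the avg path and (3, 6.0, 14.0) for the variance path: same N and sum, but the avg path skips all the sum-of-squares arithmetic.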
Performance gain is approx 33% (it is still slower than doing sum/count
- possibly due to the construct/deconstruct overhead of the numeric
transition array).
Cheers
Mark
Attachment: avg1.patch (text/x-patch, +52, -47)
On Fri, 2006-11-24 at 11:08 +1300, Mark Kirkwood wrote:
- Modifies do_numeric_accum to have an extra bool parameter and does not
calc sumX2 when it is false.
I think it would be clearer to reorganize this function slightly, and
have only a single branch on "useSumX2". On first glance it isn't
obvious that transdatums[2] is defined (but unchanged) when useSumX2
is false.
Performance gain is approx 33%
Nice.
(it is still slower than doing sum/count - possibly due to the
construct/deconstruct overhead of the numeric transition array).
This would indeed be worth profiling. If it turns out that array
overhead is significant, I wonder if we could use a composite type for
the transition variable instead. That might also make it easier to
represent the "N" value as an int8 rather than a numeric.
-Neil
Neil Conway <neilc@samurai.com> writes:
On Fri, 2006-11-24 at 11:08 +1300, Mark Kirkwood wrote:
- Modifies do_numeric_accum to have an extra bool parameter and does not
calc sumX2 when it is false.
I think it would be clearer to reorganize this function slightly, and
have only a single branch on "useSumX2".
It would be clearer to just have a separate function.
regards, tom lane
This has been saved for the 8.3 release:
http://momjian.postgresql.org/cgi-bin/pgpatches_hold
--
Bruce Momjian bruce@momjian.us
EnterpriseDB http://www.enterprisedb.com
Neil Conway wrote:
On Fri, 2006-11-24 at 11:08 +1300, Mark Kirkwood wrote:
- Modifies do_numeric_accum to have an extra bool parameter and does not
calc sumX2 when it is false.
I think it would be clearer to reorganize this function slightly, and
have only a single branch on "useSumX2". On first glance it isn't
obvious that transdatums[2] is defined (but unchanged) when useSumX2
is false.
Right - new patch attached that adds a new function do_numeric_avg_accum
that only uses N and sum(X). This means I could amend the avg aggregates
for numeric and int8 to have an initvalue of '{0,0}'.
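For illustration, the shape of the reworked version (again a sketch with Python floats standing in for numerics, not the actual C): a separate transition function carrying only a 2-element state, which is what makes the '{0,0}' initvalue possible.

```python
# Illustrative sketch of the separate avg-only transition function:
# the state is just {N, sum(X)}, so sumX2 never exists at all.

def do_avg_accum(state, newval):
    """Two-element transition state (N, sum(X))."""
    n, sum_x = state
    return (n + 1, sum_x + newval)

def avg_final(state):
    """Final function: sum / count."""
    n, sum_x = state
    return sum_x / n

state = (0, 0)          # corresponds to initvalue '{0,0}'
for v in [10.0, 20.0, 30.0]:
    state = do_avg_accum(state, v)
# avg_final(state) yields the average of the inputs
```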
Cheers
Mark
Attachment: avg2.patch (text/x-patch, +74, -9)
Neil Conway wrote:
(it is still slower than doing sum/count - possibly due to the
construct/deconstruct overhead of the numeric transition array).
This would indeed be worth profiling. If it turns out that array
overhead is significant, I wonder if we could use a composite type for
the transition variable instead. That might also make it easier to
represent the "N" value as an int8 rather than a numeric.
I've profiled the 2nd patch using the setup indicated below. The first
64 lines of the flat graph are attached. The complete profile is here:
http://homepages.paradise.net.nz/markir/download/postgres/postgres-avg.gprof.gz
Setup:
avg=# \d avgtest
Table "public.avgtest"
Column | Type | Modifiers
--------+---------------+-----------
id | integer |
val0 | bigint |
val1 | numeric(12,2) |
val2 | numeric(10,0) |
avg=# analyze verbose avgtest;
INFO: analyzing "public.avgtest"
INFO: "avgtest": scanned 3000 of 87689 pages, containing 342138 live
rows and 0 dead rows; 3000 rows in sample, 10000580 estimated total rows
ANALYZE
Time: 252.033 ms
avg=# select avg(val2) from avgtest;
avg
---------------------
714285.214285800000
(1 row)
Time: 35196.028 ms
avg=# \q
regards
Mark
Attachment: postgres-avg.gprof.head (text/plain)
(Blast - meant to send this to -hackers not -patches....)
So, if I understand this correctly, we're calling Alloc and ContextAlloc 10
times for every row being summed?
There are approx 10M rows and the profile snippet below shows 100M calls to
each of those.
- Luke
On 11/24/06 4:46 PM, "Mark Kirkwood" <markir@paradise.net.nz> wrote:
  %   cumulative   self               self    total
 time   seconds   seconds     calls  s/call  s/call  name
14.42 2.16 2.16 100002977 0.00 0.00 AllocSetAlloc
9.08 3.52 1.36 20000000 0.00 0.00 add_abs
5.54 4.35 0.83 10000000 0.00 0.00 slot_deform_tuple
5.41 5.16 0.81 60001673 0.00 0.00 AllocSetFree
4.34 5.81 0.65 10000000 0.00 0.00 construct_md_array
4.21 6.44 0.63 20000003 0.00 0.00 make_result
3.54 6.97 0.53 10000000 0.00 0.00 numeric_add
3.27 7.46 0.49 30000003 0.00 0.00 set_var_from_num
3.00 7.91 0.45 100002652 0.00 0.00 MemoryContextAlloc
Luke Lonergan wrote:
So, if I understand this correctly, we're calling Alloc and ContextAlloc 10
times for every row being summed?
There are approx 10M rows and the profile snippet below shows 100M calls to
each of those.
Unless I've accidentally run gprof on the profile output for a 100M row
case I had lying around :-( ... I'll check
Mark Kirkwood wrote:
Luke Lonergan wrote:
So, if I understand this correctly, we're calling Alloc and ContextAlloc 10
times for every row being summed?
There are approx 10M rows and the profile snippet below shows 100M calls to
each of those.
Unless I've accidentally run gprof on the profile output for a 100M row
case I had lying around :-( ... I'll check
I haven't (so profile as attached is ok)...
Mark Kirkwood wrote:
I've profiled the 2nd patch using the setup indicated below. The first
64 lines of the flat graph are attached.
By way of comparison, here is the first 63 lines for:
select sum(val2) from avgtest
Attachment: postgres-sum.gprof.head (text/plain)
Mark,
On 11/24/06 6:03 PM, "Mark Kirkwood" <markir@paradise.net.nz> wrote:
Luke Lonergan wrote:
So, if I understand this correctly, we're calling Alloc and ContextAlloc 10
times for every row being summed?
I haven't (so profile as attached is ok)...
OK - so, without having looked at the source recently, the last time I
profiled it within gdb it looked like the following is what happens in the
executor agg path:
temp heaptuple is allocated, page is pinned, tuple is copied into temp
heaptuple, page is unpinned, temp heaptuple is processed in agg path
This seems to fit the pattern we're seeing in your profile given that we've
got 4 attributes in this relation except that we're seeing it happen twice.
It seems like for each attribute a DATUM is alloc'ed, plus one more, and
this is done twice -> 10 alloc calls for each row being summed.
If this is the case, we can surely avoid the alloc/free for every row and
datum by reusing the space...
- Luke
Mark,
On 11/24/06 6:16 PM, "Mark Kirkwood" <markir@paradise.net.nz> wrote:
By way of comparison, here is the first 63 lines for:
select sum(val2) from avgtest
So, sum() is only alloc'ing 5 times for every row being processed, half of
what avg() is doing.
Seems like what we need to do is find a way to reuse the temp heaptuple
between calls.
- Luke
Luke Lonergan wrote:
Mark,
On 11/24/06 6:16 PM, "Mark Kirkwood" <markir@paradise.net.nz> wrote:
By way of comparison, here is the first 63 lines for:
select sum(val2) from avgtest
So, sum() is only alloc'ing 5 times for every row being processed, half of
what avg() is doing.
Seems like what we need to do is find a way to reuse the temp heaptuple
between calls.
Yeah - and I'm *guessing* that it's due to avg needing to
deconstruct/construct a 2 element numeric array every call (whereas sum
needs just a single numeric). So some delving into whether it is
construct_md_array that can re-use a heaptuple or whether we need to
look elsewhere...
Also Neil suggested investigating using a single composite type {int8,
numeric} for the {N,sum(X)} transition values. This could well be a
faster way to do this (not sure how to make it work yet... but it sounds
promising...).
Cheers
Mark
On Sat, 2006-11-25 at 18:57 +1300, Mark Kirkwood wrote:
Also Neil suggested investigating using a single composite type {int8,
numeric} for the {N,sum(X)} transition values. This could well be a
faster way to do this (not sure how to make it work yet... but it sounds
promising...).
If that is true it implies that any fixed length array is more expensive
than using a composite type. Is there something to be gained by changing
the basic representation of arrays, rather than rewriting all uses of
them?
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
"Simon Riggs" <simon@2ndquadrant.com> writes:
On Sat, 2006-11-25 at 18:57 +1300, Mark Kirkwood wrote:
Also Neil suggested investigating using a single composite type {int8,
numeric} for the {N,sum(X)} transition values. This could well be a
faster way to do this (not sure how to make it work yet... but it sounds
promising...).
If that is true it implies that any fixed length array is more expensive
than using a composite type.
Not sure how you derived that conclusion from this statement, but it
doesn't appear to me to follow at all. The reason for Neil's suggestion
was to avoid using numeric arithmetic to run a simple counter, and the
reason that this array stuff is expensive is that the array *components*
are variable-length, which is something that no amount of array
redesigning will eliminate.
regards, tom lane
Tom Lane wrote:
"Simon Riggs" <simon@2ndquadrant.com> writes:
On Sat, 2006-11-25 at 18:57 +1300, Mark Kirkwood wrote:
Also Neil suggested investigating using a single composite type {int8,
numeric} for the {N,sum(X)} transition values. This could well be a
faster way to do this (not sure how to make it work yet... but it sounds
promising...).
If that is true it implies that any fixed length array is more expensive
than using a composite type.
Not sure how you derived that conclusion from this statement, but it
doesn't appear to me to follow at all. The reason for Neil's suggestion
was to avoid using numeric arithmetic to run a simple counter, and the
reason that this array stuff is expensive is that the array *components*
are variable-length, which is something that no amount of array
redesigning will eliminate.
Here is what I think the major contributors to the time spent in avg are:
1/ maintaining sum of squares in the transition array ~ 33%
2/ calling numeric_inc on a numeric counter ~ 10%
3/ deconstruct/construct array of 2 numerics ~ 16%
I derived these by constructing a (deliberately inefficient) version of
sum that used an array of numerics and calculated extra stuff in its
transition array, and then started removing code a bit at a time to see
what happened (I'm sure there are smarter ways... but this worked ok...).
The current patch does 1/, and doing a composite type of {int8, numeric}
would let us use an int8 counter instead of numeric, which would
pretty much sort out 2/.
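A hypothetical sketch of what the composite {int8, numeric} state would buy - the counter becomes plain machine-integer arithmetic (Python's int and Decimal stand in for int8 and numeric; this is not the actual composite-type machinery):

```python
from decimal import Decimal

class AvgState:
    """Stand-in for a composite {int8, numeric} transition value."""
    __slots__ = ("n", "sum_x")

    def __init__(self):
        self.n = 0                   # int8 counter: no numeric_inc call
        self.sum_x = Decimal(0)      # running sum stays exact numeric

    def accum(self, newval):
        self.n += 1                  # cheap integer increment
        self.sum_x += Decimal(newval)

st = AvgState()
for v in ["1.5", "2.5", "5.0"]:
    st.accum(v)
# st.n counts rows with int arithmetic; st.sum_x keeps exact precision
```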
The array cost is more tricky - as Tom mentioned the issue is related to
the variable length nature of the array components, so just changing to
a composite type may not in itself save any of the (so-called) 'array
cost'. Having said that - the profiles suggest that we are perhaps doing
a whole lot more alloc'ing (i.e copying? detoasting?) of memory for
numerics than perhaps we need... I'm not sure how deeply buried the
decision about alloc'ing is being made, so doing anything about this may
be hard.
It looks to me like trying out a composite type is the next obvious step
to do, and then (once I've figured out how to do that) we can check its
performance again!
Cheers
Mark
Mark Kirkwood <markir@paradise.net.nz> writes:
... Having said that - the profiles suggest that we are perhaps doing
a whole lot more alloc'ing (i.e copying? detoasting?) of memory for
numerics than perhaps we need...
Yeah. We've looked at this in the past and not figured out any
particularly desirable answer for variable-size accumulator state.
There is a hack in there for fixed-size pass-by-reference state (see
count(*)). It's possible that you could get some traction by only doing
a palloc when the state size has to increase --- with a custom state
type it'd be possible to keep track of the allocated size as well as the
actual current length of the numeric sum. Another idea to consider in
this context is avoiding repeated numeric pack/unpack overhead by
storing the running sum in the "calculation variable" format, but I'm
not sure how deep you'd need to burrow into numeric.c to allow that.
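The grow-only palloc idea can be sketched abstractly (a hypothetical buffer model, not PostgreSQL code: capacity is tracked separately from current length, so a new allocation happens only when the accumulated value outgrows its buffer):

```python
class GrowOnlyState:
    """Tracks allocated capacity separately from current length, so the
    accumulator 'pallocs' only when the stored value actually grows."""

    def __init__(self, initial_capacity=8):
        self.buf = bytearray(initial_capacity)  # allocated once up front
        self.capacity = initial_capacity
        self.length = 0
        self.allocs = 1                          # count of allocations

    def store(self, payload: bytes):
        if len(payload) > self.capacity:
            # only now do we "palloc" a bigger buffer (doubling to
            # amortize future growth)
            self.capacity = max(len(payload), self.capacity * 2)
            self.buf = bytearray(self.capacity)
            self.allocs += 1
        self.buf[:len(payload)] = payload
        self.length = len(payload)

st = GrowOnlyState()
for _ in range(10000):
    st.store(b"x" * 6)   # steady-state sum: zero new allocations
```

With a per-row alloc/free, the loop above would cost 10000 allocations; here the running sum reuses one buffer as long as its length is stable.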
Most of this depends on being able to have a transition state value
that isn't any standard SQL datatype. We've discussed that recently
in I-forget-what-context, and didn't find a good answer yet. I think
you can find the thread by searching for a discussion of using type
"internal" as the state datatype --- which is an idea that doesn't work
by itself because of the security holes it'd open in the type system.
regards, tom lane
Tom Lane wrote:
Mark Kirkwood <markir@paradise.net.nz> writes:
... Having said that - the profiles suggest that we are perhaps doing
a whole lot more alloc'ing (i.e copying? detoasting?) of memory for
numerics than perhaps we need...
Yeah. We've looked at this in the past and not figured out any
particularly desirable answer for variable-size accumulator state.
There is a hack in there for fixed-size pass-by-reference state (see
count(*)). It's possible that you could get some traction by only doing
a palloc when the state size has to increase --- with a custom state
type it'd be possible to keep track of the allocated size as well as the
actual current length of the numeric sum. Another idea to consider in
this context is avoiding repeated numeric pack/unpack overhead by
storing the running sum in the "calculation variable" format, but I'm
not sure how deep you'd need to burrow into numeric.c to allow that.
Right - I figured it would probably be hard :-(. It looks worth
investigating, but might be a more longer term project (for me anyway,
lots of stuff to read in there!).
Most of this depends on being able to have a transition state value
that isn't any standard SQL datatype. We've discussed that recently
in I-forget-what-context, and didn't find a good answer yet. I think
you can find the thread by searching for a discussion of using type
"internal" as the state datatype --- which is an idea that doesn't work
by itself because of the security holes it'd open in the type system.
Interesting, I didn't think of doing that - was considering creating a
suitable SQL composite type - a bit crass I guess, but I might just try
that out anyway and see what sort of improvement it gives (we can then
discuss how to do it properly in the event that it's good....).
Cheers
Mark