Statistical aggregate functions are not working with partitionwise aggregate
Hi,
On PG-head, some statistical aggregate functions are not giving correct
output when partitionwise aggregate is enabled, while the same queries work on v11.
Below are some examples.
CREATE TABLE tbl(a int2,b float4) partition by range(a);
create table tbl_p1 partition of tbl for values from (minvalue) to (0);
create table tbl_p2 partition of tbl for values from (0) to (maxvalue);
insert into tbl values (-1,-1),(0,0),(1,1),(2,2);
--when partitionwise aggregate is off
postgres=# SELECT regr_count(b, a) FROM tbl;
regr_count
------------
4
(1 row)
postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;
regr_avgx | regr_avgy
-----------+-----------
0.5 | 0.5
(1 row)
postgres=# SELECT corr(b, a) FROM tbl;
corr
------
1
(1 row)
--when partitionwise aggregate is on
postgres=# SET enable_partitionwise_aggregate = true;
SET
postgres=# SELECT regr_count(b, a) FROM tbl;
regr_count
------------
0
(1 row)
postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;
regr_avgx | regr_avgy
-----------+-----------
|
(1 row)
postgres=# SELECT corr(b, a) FROM tbl;
corr
------

(1 row)
Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
On Fri, May 3, 2019 at 2:56 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:
Hi,
On PG-head, some statistical aggregate functions are not giving correct
output when partitionwise aggregate is enabled, while the same queries work on v11.
I had a quick look over this and observed that something is broken with
PARTIAL aggregation.
I can reproduce the same issue with a larger dataset which results in a
parallel scan.
CREATE TABLE tbl1(a int2,b float4) partition by range(a);
create table tbl1_p1 partition of tbl1 for values from (minvalue) to (0);
create table tbl1_p2 partition of tbl1 for values from (0) to (maxvalue);
insert into tbl1 select i%2, i from generate_series(1, 1000000) i;
# SELECT regr_count(b, a) FROM tbl1;
regr_count
------------
0
(1 row)
postgres:5432 [120536]=# explain SELECT regr_count(b, a) FROM tbl1;
                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=15418.08..15418.09 rows=1 width=8)
   ->  Gather  (cost=15417.87..15418.08 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=14417.87..14417.88 rows=1 width=8)
               ->  Parallel Append  (cost=0.00..11091.62 rows=443500 width=6)
                     ->  Parallel Seq Scan on tbl1_p2  (cost=0.00..8850.00 rows=442500 width=6)
                     ->  Parallel Seq Scan on tbl1_p1  (cost=0.00..24.12 rows=1412 width=6)
(7 rows)
postgres:5432 [120536]=# set max_parallel_workers_per_gather to 0;
SET
postgres:5432 [120536]=# SELECT regr_count(b, a) FROM tbl1;
regr_count
------------
1000000
(1 row)
After looking further, it seems that this was broken by the following commit:
commit a9c35cf85ca1ff72f16f0f10d7ddee6e582b62b8
Author: Andres Freund <andres@anarazel.de>
Date:   Sat Jan 26 14:17:52 2019 -0800
    Change function call information to be variable length.
This commit is too big for me to digest, so I could not get to the exact
cause.
Thanks
--
Jeevan Chalke
Technical Architect, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Hi,
As this issue is also reproducible without partition-wise aggregate,
I am changing the email subject from "Statistical aggregate functions are not
working with partitionwise aggregate" to "Statistical aggregate functions
are not working with PARTIAL aggregation".
The originally reported test case and discussion can be found at the link below.
/messages/by-id/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation
On Fri, May 3, 2019 at 5:26 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:
Hello.
At Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in <CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>
The immediate reason for the behavior seems to be that
EEOP_AGG_STRICT_INPUT_CHECK_ARGS regards the non-existent second
argument as null.
In the invalid-deserialfn_oid case in ExecBuildAggTrans, args[1] is
initialized using the second argument of the function (int8pl() in this
case), so the correct numTransInputs here is 1, not 2.
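To make the failure mode concrete, here is a minimal C sketch (a hypothetical simplified model, not PostgreSQL source; the struct and function names are reduced to their essentials): the strict-input check inspects numTransInputs argument slots before invoking the transition function, so a combine call that only sets up one argument but advertises numTransInputs = 2 can be skipped based on a slot that was never initialized.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for PostgreSQL's NullableDatum. */
typedef struct NullableDatum
{
    int64_t value;
    bool    isnull;
} NullableDatum;

/*
 * Model of the EEOP_AGG_STRICT_INPUT_CHECK_ARGS step: returns true when the
 * strict transition function must be skipped because some checked argument
 * slot is null.
 */
static bool
strict_args_check(const NullableDatum *args, int numTransInputs)
{
    for (int i = 0; i < numTransInputs; i++)
        if (args[i].isnull)
            return true;
    return false;
}

/*
 * Model of an int8pl-style combine step for a count aggregate: add the
 * partial count (args[0]) into the state, unless the strict check skips it.
 * With numTransInputs = 2, the check also looks at args[1], which the
 * combine path never initialized -- here simulated as a null slot.
 */
static int64_t
combine_counts(int64_t state, int numTransInputs, const NullableDatum *args)
{
    if (strict_args_check(args, numTransInputs))
        return state;           /* partial result discarded */
    return state + args[0].value;
}
```

Under this model, numTransInputs = 2 drops every partial count (giving the reported regr_count = 0), while the correct value of 1 accumulates normally.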
The attached patch makes at least the test case work correctly,
and this seems to be the only instance of this issue.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
agg_fix_poc.patch (text/x-patch; charset=us-ascii)
At Tue, 07 May 2019 20:47:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190507.204728.233299873.horiguchi.kyotaro@lab.ntt.co.jp>
This behavior was introduced by 69c3936a14 (in v11). At that time
FunctionCallInfoData was palloc0'ed and had fixed-length members
arg[6] and argnull[7], so argnull[1] was always false even when nargs
= 1, and the issue was not revealed.
After a9c35cf85c (in v12) the same check is done on a
FunctionCallInfoData that has a NullableDatum args[] array with exactly
the required number of elements. In that case args[1] is outside the
palloc'ed memory, so the issue has been revealed.
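The layout difference described above can be sketched like this (assumed simplified structs, not the real FunctionCallInfoData; FIXED_MAX_ARGS stands in for the old fixed array bound): the old fixed-size, zero-initialized layout makes a read of argnull[1] past nargs = 1 harmlessly return false, while the new exactly-sized flexible-array layout has no zeroed slack, so the same out-of-range index points at memory that was never allocated for arguments.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define FIXED_MAX_ARGS 6

/* Old-style: fixed arrays, zeroed by the allocator (like palloc0). */
typedef struct OldFcInfo
{
    int     nargs;
    int64_t arg[FIXED_MAX_ARGS];
    bool    argnull[FIXED_MAX_ARGS];
} OldFcInfo;

/* New-style: trailing flexible array sized to exactly nargs elements. */
typedef struct NullableDatum
{
    int64_t value;
    bool    isnull;
} NullableDatum;

typedef struct NewFcInfo
{
    int           nargs;
    NullableDatum args[];       /* flexible array member */
} NewFcInfo;

static OldFcInfo *
make_old_fcinfo(int nargs)
{
    /* calloc zeroes the whole struct, so argnull[nargs..] reads as false */
    OldFcInfo *fc = calloc(1, sizeof(OldFcInfo));
    fc->nargs = nargs;
    return fc;
}

static NewFcInfo *
make_new_fcinfo(int nargs)
{
    /* exactly-sized: no zeroed slack beyond args[nargs - 1] */
    NewFcInfo *fc = malloc(offsetof(NewFcInfo, args) +
                           nargs * sizeof(NullableDatum));
    fc->nargs = nargs;
    return fc;
}
```

With nargs = 1, old->argnull[1] is in-bounds and zeroed (false), whereas new->args[1] would be an out-of-bounds read, which is exactly what the variable-length fcinfo exposed.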
On a second look, it seems to me that the right thing to do here is
setting numInputs instead of numArguments as numTransInputs in the
combining step.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
0001-Give-correct-number-of-arguments-to-combine-function.patch (text/x-patch; charset=us-ascii)
At Wed, 08 May 2019 13:06:36 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190508.130636.184826233.horiguchi.kyotaro@lab.ntt.co.jp>
By the way, as mentioned above, this issue has existed since 11 but
does harm only in 12. Is this an open item, or an older bug?
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
On Wed, May 8, 2019 at 12:09 AM Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
By the way, as mentioned above, this issue has existed since 11 but
does harm only in 12. Is this an open item, or an older bug?
Sounds more like an open item to me.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, May 08, 2019 at 01:09:23PM +0900, Kyotaro HORIGUCHI wrote:
It is an open item - there's a section for older bugs, but considering
it's harmless in 11 (at least that's my understanding from the above
discussion) I've added it as a regular open item.
I've linked it to a9c35cf85c, which seems to be the culprit commit.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Don't we have a build farm animal that runs under valgrind that would
have caught this?
On 5/8/19 12:41 PM, Greg Stark wrote:
Don't we have a build farm animal that runs under valgrind that would
have caught this?
There are two animals running under valgrind: lousyjack and skink.
cheers
andrew
--
Andrew Dunstan https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hello. There is an unfortunate story behind this issue.
At Wed, 8 May 2019 14:56:25 -0400, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote in <7969b496-096a-bf9b-2a03-4706baa4c48e@2ndQuadrant.com>
On 5/8/19 12:41 PM, Greg Stark wrote:
Don't we have a build farm animal that runs under valgrind that would
have caught this?
There are two animals running under valgrind: lousyjack and skink.
Valgrind doesn't detect the overrunning read since the block
doesn't have a MEMNOACCESS region, because the requested size is
just 64 bytes.
Thus the attached patch lets valgrind detect the overrun.
==00:00:00:22.959 20254== VALGRINDERROR-BEGIN
==00:00:00:22.959 20254== Conditional jump or move depends on uninitialised value(s)
==00:00:00:22.959 20254== at 0x88A838: ExecInterpExpr (execExprInterp.c:1553)
==00:00:00:22.959 20254== by 0x88AFD5: ExecInterpExprStillValid (execExprInterp.c:1769)
==00:00:00:22.959 20254== by 0x8C3503: ExecEvalExprSwitchContext (executor.h:307)
==00:00:00:22.959 20254== by 0x8C4653: advance_aggregates (nodeAgg.c:679)
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
let_valgrind_find_the_overrun.patch (text/x-patch; charset=us-ascii)
At Thu, 09 May 2019 11:17:46 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190509.111746.217492977.horiguchi.kyotaro@lab.ntt.co.jp>
So the attached patch makes palloc always attach the MEMNOACCESS
region and sentinel byte. The issue under discussion is detected
with this patch as well. (But in return, memory usage gets larger.)
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachments:
0001-Always-put-sentinel-in-allocated-memory-blocks.patch (text/x-patch; charset=us-ascii)
Hi,
On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
This behavior was introduced by 69c3936a14 (in v11). At that time
FunctionCallInfoData was palloc0'ed and had fixed-length members
arg[6] and argnull[7], so argnull[1] was always false even when nargs
= 1, and the issue was not revealed.
After a9c35cf85c (in v12) the same check is done on a
FunctionCallInfoData that has a NullableDatum args[] array with exactly
the required number of elements. In that case args[1] is outside the
palloc'ed memory, so the issue has been revealed.
On a second look, it seems to me that the right thing to do here is
setting numInputs instead of numArguments as numTransInputs in the
combining step.
Yea, to me this just seems a consequence of the wrong
numTransInputs. Arguably this is a bug going back to 9.6, where
combining aggregates were introduced. It's just that numTransInputs
isn't used anywhere for combining aggregates, before 11.
Its documentation says:
/*
* Number of aggregated input columns to pass to the transfn. This
* includes the ORDER BY columns for ordered-set aggs, but not for plain
* aggs. (This doesn't count the transition state value!)
*/
int numTransInputs;
which IMO is violated by having it set to the plain aggregate's value,
rather than the combine func.
While I agree that fixing numTransInputs is the right way, I'm not
convinced the way you did it is the right approach. I'm somewhat
inclined to think that it's wrong that ExecInitAgg() calls
build_pertrans_for_aggref() with a numArguments that's not actually
right? Alternatively I think we should just move the numTransInputs
computation into the existing branch around DO_AGGSPLIT_COMBINE.
It seems pretty clear that this needs to be fixed for v11; it seems too
fragile to rely on trans_fcinfo->argnull[2] being zero initialized.
I'm less sure about fixing it for 9.6/10. There's no use of
numTransInputs for combining back then.
David, I assume you didn't adjust numTransInputs plainly because it
wasn't needed / you didn't notice? Do you have a preference for a fix?
Independent of these changes, some of the code around partial, ordered
set and polymorphic aggregates really make it hard to understand things:
    /* Detect how many arguments to pass to the finalfn */
    if (aggform->aggfinalextra)
        peragg->numFinalArgs = numArguments + 1;
    else
        peragg->numFinalArgs = numDirectArgs + 1;
What on earth is that supposed to mean? Sure, the +1 is obvious, but why
the different sources for arguments are needed isn't - especially
because numArguments was just calculated with the actual aggregate
inputs. Nor is aggfinalextra's documentation particularly elucidating:
/* true to pass extra dummy arguments to aggfinalfn */
bool aggfinalextra BKI_DEFAULT(f);
especially not why aggfinalextra means we have to ignore direct
args. Presumably because aggfinalextra just emulates what direct args
does for ordered set args, but we allow both to be set.
Similarly
    /* Detect how many arguments to pass to the transfn */
    if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
        pertrans->numTransInputs = numInputs;
    else
        pertrans->numTransInputs = numArguments;
is hard to understand, without additional comments. One can, looking
around, infer that it's because ordered set aggs need sort columns
included. But that should just have been mentioned.
And to make sense of build_aggregate_transfn_expr()'s treatment of
direct args, one has to know that direct args are only possible for
ordered set aggregates. Which IMO is not obvious in nodeAgg.c.
...
I feel this code has become quite creaky in the last few years.
Greetings,
Andres Freund
Hi,
David, anyone, any comments?
On 2019-05-16 20:04:04 -0700, Andres Freund wrote:
Unless somebody comments, I'm going to move the numTransInputs
computation into the DO_AGGSPLIT_COMBINE branch in
build_pertrans_for_aggref() later today, add a small test (using
enable_partitionwise_aggregate), and backpatch to 11.
Greetings,
Andres Freund
On Sun, 19 May 2019 at 07:37, Andres Freund <andres@anarazel.de> wrote:
David, anyone, any comments?
Looking at this now.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:
On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
On a second look, it seems to me that the right thing to do here is
setting numInputs instead of numArguments as numTransInputs in the
combining step.
Yea, to me this just seems a consequence of the wrong
numTransInputs. Arguably this is a bug going back to 9.6, where
combining aggregates were introduced. It's just that numTransInputs
isn't used anywhere for combining aggregates, before 11.
Isn't it more due to the lack of any aggregates with > 1 arg having a
combine function?
While I agree that fixing numTransInputs is the right way, I'm not
convinced the way you did it is the right approach. I'm somewhat
inclined to think that it's wrong that ExecInitAgg() calls
build_pertrans_for_aggref() with a numArguments that's not actually
right? Alternatively I think we should just move the numTransInputs
computation into the existing branch around DO_AGGSPLIT_COMBINE.
Yeah, probably we should be passing in the correct arg count for the
combinefn to build_pertrans_for_aggref(). However, I see that we also
pass in the inputTypes from the transfn, just we don't use them when
working with the combinefn.
You'll notice that I've just hardcoded the numTransArgs to set it to 1
when we're working with a combinefn. The combinefn always requires 2
args of trans type, so this seems pretty valid to me. I think
Kyotaro's patch setting of numInputs is wrong. It just happens to
accidentally match. I also added a regression test to exercise
regr_count. I tagged it onto an existing query so as to minimise the
overhead. It seems worth doing since most other aggs have a single
argument and this one wasn't working because it had two args.
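The argument-count asymmetry described above can be sketched as follows (an illustrative model in plain C, not PostgreSQL code; the function names are made up): however many input columns the aggregate itself takes, the combine function receives exactly one input beyond the transition state, which is why the combine case wants an input count of 1 regardless of the aggregate's own arity.

```c
#include <stdint.h>

/*
 * transfn: regr_count-style, two input columns beyond the state
 * (strictness is assumed to have filtered nulls already).
 */
static int64_t
trans_count(int64_t state, double y, double x)
{
    (void) y;
    (void) x;
    return state + 1;
}

/* combinefn: int8pl-style, exactly one partial-state input beyond the state */
static int64_t
combine_count(int64_t state, int64_t partial)
{
    return state + partial;
}

/*
 * Run the aggregate split in two: per-"partition" transition passes over
 * [0, n1) and [n1, n1 + n2), then combine the partial states.
 */
static int64_t
aggregate_split(const double *ys, const double *xs, int n1, int n2)
{
    int64_t p1 = 0, p2 = 0;

    for (int i = 0; i < n1; i++)
        p1 = trans_count(p1, ys[i], xs[i]);
    for (int i = n1; i < n1 + n2; i++)
        p2 = trans_count(p2, ys[i], xs[i]);

    return combine_count(combine_count(0, p1), p2);
}
```

With the thread's four-row example split across two partitions, either split point counts all four rows, since the combine step just sums the partials.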
I also noticed that the code seemed to work in af025eed536d, so I
guess the new expression evaluation code is highlighting the existing
issue.
I feel this code has become quite creaky in the last few years.
You're not kidding!
Patch attached of how I think we should fix it.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachments:
fix_incorrect_arg_count_for_combine_funcs.patch (application/octet-stream)
Hi,
On 2019-05-19 20:18:38 +1200, David Rowley wrote:
On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:
On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
On a second look, it seems to me that the right thing to do here is
setting numInputs instead of numArguments as numTransInputs in the
combining step.
Yea, to me this just seems a consequence of the wrong
numTransInputs. Arguably this is a bug going back to 9.6, where
combining aggregates were introduced. It's just that numTransInputs
isn't used anywhere for combining aggregates, before 11.
Isn't it more due to the lack of any aggregates with > 1 arg having a
combine function?
I'm not sure I follow? regr_count() already was in 9.6? Including a
combine function?
postgres[1490][1]=# SELECT version();
┌──────────────────────────────────────────────────────────────────────────────────────────┐
│ version │
├──────────────────────────────────────────────────────────────────────────────────────────┤
│ PostgreSQL 9.6.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-7) 8.3.0, 64-bit │
└──────────────────────────────────────────────────────────────────────────────────────────┘
(1 row)
postgres[1490][1]=# SELECT aggfnoid::regprocedure FROM pg_aggregate pa JOIN pg_proc pptrans ON (pa.aggtransfn = pptrans.oid) AND pptrans.pronargs > 2 AND aggcombinefn <> 0;
┌───────────────────────────────────────────────────┐
│ aggfnoid │
├───────────────────────────────────────────────────┤
│ regr_count(double precision,double precision) │
│ regr_sxx(double precision,double precision) │
│ regr_syy(double precision,double precision) │
│ regr_sxy(double precision,double precision) │
│ regr_avgx(double precision,double precision) │
│ regr_avgy(double precision,double precision) │
│ regr_r2(double precision,double precision) │
│ regr_slope(double precision,double precision) │
│ regr_intercept(double precision,double precision) │
│ covar_pop(double precision,double precision) │
│ covar_samp(double precision,double precision) │
│ corr(double precision,double precision) │
└───────────────────────────────────────────────────┘
But it's not an active problem in 9.6, because numTransInputs wasn't
used at all for combine functions: Before c253b722f6 there simply was no
NULL check for strict trans functions, and after that the check was
simply hardcoded for the right offset in fcinfo, as it's done by code
specific to aggsplit combine.
In bf6c614a2f2 that was generalized, so the strictness check was done by
common code doing the strictness checks, based on numTransInputs. But
due to the fact that the relevant fcinfo->isnull[2..] was always
zero-initialized (more or less by accident, by being part of the
AggStatePerTrans struct, which is palloc0'ed), there was no observable
damage, we just checked too many array elements. And then finally in
a9c35cf85ca1f, that broke, because the fcinfo is a) dynamically
allocated without being zeroed b) exactly the right length.
While I agree that fixing numTransInputs is the right way, I'm not
convinced the way you did it is the right approach. I'm somewhat
inclined to think that it's wrong that ExecInitAgg() calls
build_pertrans_for_aggref() with a numArguments that's not actually
right? Alternatively I think we should just move the numTransInputs
computation into the existing branch around DO_AGGSPLIT_COMBINE.
Yeah, probably we should be passing in the correct arg count for the
combinefn to build_pertrans_for_aggref(). However, I see that we also
pass in the inputTypes from the transfn, just we don't use them when
working with the combinefn.
Not sure what you mean by that "however"?
You'll notice that I've just hardcoded the numTransArgs to set it to 1
when we're working with a combinefn. The combinefn always requires 2
args of trans type, so this seems pretty valid to me.
I think Kyotaro's patch setting of numInputs is wrong.
Yea, my proposal was to simply hardcode it to 2 in the
DO_AGGSPLIT_COMBINE path.
Patch attached of how I think we should fix it.
Thanks.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index d01fc4f52e..b061162961 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		int			existing_aggno;
 		int			existing_transno;
 		List	   *same_input_transnos;
-		Oid			inputTypes[FUNC_MAX_ARGS];
+		Oid			transFnInputTypes[FUNC_MAX_ARGS];
 		int			numArguments;
+		int			numTransFnArgs;
 		int			numDirectArgs;
 		HeapTuple	aggTuple;
 		Form_pg_aggregate aggform;
@@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 		 * could be different from the agg's declared input types, when the
 		 * agg accepts ANY or a polymorphic type.
 		 */
-		numArguments = get_aggregate_argtypes(aggref, inputTypes);
+		numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
Not sure I understand the distinction you're trying to make with the
variable renaming. The combine function is also a transition function,
no?
/* Count the "direct" arguments, if any */
numDirectArgs = list_length(aggref->aggdirectargs);

+	/*
+	 * Combine functions always have a 2 trans state type input params, so
+	 * this is always set to 1 (we don't count the first trans state).
+	 */
Perhaps the parenthetical should instead be something like "to 1 (the
trans type is not counted as an arg, just like with non-combine trans
function)" or similar?
@@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 								  aggref, transfn_oid, aggtranstype,
 								  serialfn_oid, deserialfn_oid,
 								  initValue, initValueIsNull,
-								  inputTypes, numArguments);
+								  transFnInputTypes, numArguments);
That means we pass in the wrong input types? Seems like it'd be better
to either pass an empty list, or just create the argument list here.
I'm inclined to push a minimal fix now, and then a slightly more evolved
version of this after beta1.
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index d4fd657188..bd8b9e8b4f 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;
 -- variance(int4) covers numeric_poly_combine
 -- sum(int8) covers int8_avg_combine
+-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg
typo...
 EXPLAIN (COSTS OFF)
-  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

-SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

 ROLLBACK;
Does this actually cover the bug at issue here? The non-combine case
wasn't broken?
Greetings,
Andres Freund
On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:
Isn't it more due to the lack of any aggregates with > 1 arg having a
combine function?

I'm not sure I follow? regr_count() already was in 9.6? Including a
combine function?
Oops, that line I meant to delete before sending.
Yeah, probably we should be passing in the correct arg count for the
combinefn to build_pertrans_for_aggref(). However, I see that we also
pass in the inputTypes from the transfn, just we don't use them when
working with the combinefn.

Not sure what you mean by that "however"?
Well, previously those two arguments were always for the function in
pg_aggregate.aggtransfn. I only changed one of them to mean the trans
func that's being used, which detracted slightly from my ambition to
change just what numArguments means.
You'll notice that I've just hardcoded the numTransArgs to set it to 1
when we're working with a combinefn. The combinefn always requires 2
args of trans type, so this seems pretty valid to me.

I think Kyotaro's patch setting of numInputs is wrong.
Yea, my proposal was to simply hardcode it to 2 in the
DO_AGGSPLIT_COMBINE path.
ok.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index d01fc4f52e..b061162961 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			existing_aggno;
 	int			existing_transno;
 	List	   *same_input_transnos;
-	Oid			inputTypes[FUNC_MAX_ARGS];
+	Oid			transFnInputTypes[FUNC_MAX_ARGS];
 	int			numArguments;
+	int			numTransFnArgs;
 	int			numDirectArgs;
 	HeapTuple	aggTuple;
 	Form_pg_aggregate aggform;
@@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * could be different from the agg's declared input types, when the
 	 * agg accepts ANY or a polymorphic type.
 	 */
-	numArguments = get_aggregate_argtypes(aggref, inputTypes);
+	numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

Not sure I understand the distinction you're trying to make with the
variable renaming. The combine function is also a transition function,
no?
I was trying to make it more clear what each variable is for. It's
true that the combine function is used as a transition function in
this case, but I'd hoped it would be more easy to understand that the
input arguments listed in a variable named transFnInputTypes would be
for the function mentioned in pg_aggregate.aggtransfn rather than the
transfn we're using. If that's not any more clear then maybe another
fix is better, or we can leave it... I had to make sense of all this
code last night and I was just having a go at making it easier to
follow for the next person who has to.
/* Count the "direct" arguments, if any */
numDirectArgs = list_length(aggref->aggdirectargs);

+	/*
+	 * Combine functions always have a 2 trans state type input params, so
+	 * this is always set to 1 (we don't count the first trans state).
+	 */

Perhaps the parenthetical should instead be something like "to 1 (the
trans type is not counted as an arg, just like with non-combine trans
function)" or similar?
Yeah, that's better.
@@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 								  aggref, transfn_oid, aggtranstype,
 								  serialfn_oid, deserialfn_oid,
 								  initValue, initValueIsNull,
-								  inputTypes, numArguments);
+								  transFnInputTypes, numArguments);

That means we pass in the wrong input types? Seems like it'd be better
to either pass an empty list, or just create the argument list here.
What do you mean "here"? Did you mean to quote this fragment?
@@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
Oid aggtransfn, Oid aggtranstype,
Oid aggserialfn, Oid aggdeserialfn,
Datum initValue, bool initValueIsNull,
- Oid *inputTypes, int numArguments)
+ Oid *transFnInputTypes, int numArguments)
I had hoped the rename would make it more clear that these are the
args for the function in pg_aggregate.aggtransfn. We could pass NULL
instead when it's the combine func, but I didn't really see the
advantage of it.
I'm inclined to push a minimal fix now, and then a slightly more evolved
version of this after beta1.
Ok
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index d4fd657188..bd8b9e8b4f 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;
 -- variance(int4) covers numeric_poly_combine
 -- sum(int8) covers int8_avg_combine
+-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg

typo...
oops. I spelt coconut wrong. :)
 EXPLAIN (COSTS OFF)
-  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

-SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

 ROLLBACK;
Does this actually cover the bug at issue here? The non-combine case
wasn't broken?
The EXPLAIN shows the plan is:
QUERY PLAN
----------------------------------------------
Finalize Aggregate
-> Gather
Workers Planned: 4
-> Partial Aggregate
-> Parallel Seq Scan on tenk1
(5 rows)
so it is exercising the combine functions.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi,
Thanks to all for reporting, helping to identify and finally patch the
problem!
On 2019-05-20 10:36:43 +1200, David Rowley wrote:
On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index d01fc4f52e..b061162961 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	int			existing_aggno;
 	int			existing_transno;
 	List	   *same_input_transnos;
-	Oid			inputTypes[FUNC_MAX_ARGS];
+	Oid			transFnInputTypes[FUNC_MAX_ARGS];
 	int			numArguments;
+	int			numTransFnArgs;
 	int			numDirectArgs;
 	HeapTuple	aggTuple;
 	Form_pg_aggregate aggform;
@@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 	 * could be different from the agg's declared input types, when the
 	 * agg accepts ANY or a polymorphic type.
 	 */
-	numArguments = get_aggregate_argtypes(aggref, inputTypes);
+	numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

Not sure I understand the distinction you're trying to make with the
variable renaming. The combine function is also a transition function,
no?

I was trying to make it more clear what each variable is for. It's
true that the combine function is used as a transition function in
this case, but I'd hoped it would be more easy to understand that the
input arguments listed in a variable named transFnInputTypes would be
for the function mentioned in pg_aggregate.aggtransfn rather than the
transfn we're using. If that's not any more clear then maybe another
fix is better, or we can leave it... I had to make sense of all this
code last night and I was just having a go at making it easier to
follow for the next person who has to.
That's what I guessed, but I'm not sure it really achieves that. How
about we have something roughly like:
int numTransFnArgs = -1;
int numCombineFnArgs = -1;
Oid transFnInputTypes[FUNC_MAX_ARGS];
Oid combineFnInputTypes[2];
if (DO_AGGSPLIT_COMBINE(...)
numCombineFnArgs = 1;
combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
else
numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
...
if (DO_AGGSPLIT_COMBINE(...))
build_pertrans_for_aggref(pertrans, aggstate, estate,
aggref, combinefn_oid, aggtranstype,
serialfn_oid, deserialfn_oid,
initValue, initValueIsNull,
combineFnInputTypes, numCombineFnArgs);
else
build_pertrans_for_aggref(pertrans, aggstate, estate,
aggref, transfn_oid, aggtranstype,
serialfn_oid, deserialfn_oid,
initValue, initValueIsNull,
transFnInputTypes, numTransFnArgs);
seems like that'd make the code clearer? I wonder if we shouldn't
strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
build_pertrans_for_aggref (except perhaps for an error check or two).
Istm we shouldn't even need a separate build_aggregate_combinefn_expr()
from build_aggregate_transfn_expr().
@@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 								  aggref, transfn_oid, aggtranstype,
 								  serialfn_oid, deserialfn_oid,
 								  initValue, initValueIsNull,
-								  inputTypes, numArguments);
+								  transFnInputTypes, numArguments);

That means we pass in the wrong input types? Seems like it'd be better
to either pass an empty list, or just create the argument list here.

What do you mean "here"? Did you mean to quote this fragment?
@@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
 						  Oid aggtransfn, Oid aggtranstype,
 						  Oid aggserialfn, Oid aggdeserialfn,
 						  Datum initValue, bool initValueIsNull,
-						  Oid *inputTypes, int numArguments)
+						  Oid *transFnInputTypes, int numArguments)

I had hoped the rename would make it more clear that these are the
args for the function in pg_aggregate.aggtransfn. We could pass NULL
instead when it's the combine func, but I didn't really see the
advantage of it.
The advantage is that if somebody starts to use the wrong list in
the wrong context, we'd be more likely to get an error than something
that works in the common cases, but not in the more complicated
situations.
I'm inclined to push a minimal fix now, and then a slightly more evolved
version of this after beta1.

Ok
Done that now.
 EXPLAIN (COSTS OFF)
-  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

-SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
+SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;

 ROLLBACK;
Does this actually cover the bug at issue here? The non-combine case
wasn't broken?

The EXPLAIN shows the plan is:
Err, comes from only looking at the diff :(. Missed the previous SETs,
and the explain wasn't included in the context either...
Ugh, I just noticed - as you did before - that numInputs is declared at
the top-level in build_pertrans_for_aggref, and then *again* in the
!DO_AGGSPLIT_COMBINE branch. Why, oh, why (yes, I'm aware that that's in
one of my commits :(). I've renamed your numTransInputs variable to
numTransArgs, as it seems confusing to have different values in
pertrans->numTransInputs and a local numTransInputs variable.
Btw, the zero input case appears to also be affected by this bug: We
quite reasonably don't emit a strict input check expression step for the
combine function when numTransInputs = 0. But the only zero length agg
is count(*), and while it has strict trans & combine functions, it does
have an initval of 0. So I don't think it's reachable with builtin
aggregates, and I can't imagine another zero argument aggregate.
Greetings,
Andres Freund
On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:
How
about we have something roughly like:

int numTransFnArgs = -1;
int numCombineFnArgs = -1;
Oid transFnInputTypes[FUNC_MAX_ARGS];
Oid combineFnInputTypes[2];

if (DO_AGGSPLIT_COMBINE(...)
numCombineFnArgs = 1;
combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
else
numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

...
if (DO_AGGSPLIT_COMBINE(...))
build_pertrans_for_aggref(pertrans, aggstate, estate,
aggref, combinefn_oid, aggtranstype,
serialfn_oid, deserialfn_oid,
initValue, initValueIsNull,
combineFnInputTypes, numCombineFnArgs);
else
build_pertrans_for_aggref(pertrans, aggstate, estate,
aggref, transfn_oid, aggtranstype,
serialfn_oid, deserialfn_oid,
initValue, initValueIsNull,
transFnInputTypes, numTransFnArgs);

seems like that'd make the code clearer?
I think that might be a good idea... I mean apart from trying to
assign a List to an array :) We still must call
get_aggregate_argtypes() in order to determine the final function, so
the code can't look exactly like you've written.
I wonder if we shouldn't
strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
build_pertrans_for_aggref (except perhaps for an error check or two).
Just so we have a hard copy to review and discuss, I think this would
look something like the attached.
We do miss out on a few very small optimisations, but I don't think
they'll be anything we could measure. Namely
build_aggregate_combinefn_expr() called make_agg_arg() once and used
it twice instead of calling it once for each arg. I don't think
that's anything we could measure, especially in a situation where
two-stage aggregation is being used.
I ended up also renaming aggtransfn to transfn_oid in
build_pertrans_for_aggref(). Having it called aggtransfn seems a bit
too close to the pg_aggregate.aggtransfn column, which is confusing
given that we might pass it the value of the aggcombinefn column.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services