BUG #15946: "duplicate key" error on ANALYZE of table partitions in transaction

Started by PG Bug reporting form · over 6 years ago · 4 messages · bugs
#1 PG Bug reporting form
noreply@postgresql.org

The following bug has been logged on the website:

Bug reference: 15946
Logged by: Yaroslav Schekin
Email address: ladayaroslav@yandex.ru
PostgreSQL version: 11.5
Operating system: Any
Description:

Running this:
-----
BEGIN TRANSACTION;
CREATE TABLE sg (
id bigint NOT NULL,
sc_fk bigint,
geo_id bigint,
sl smallint NOT NULL,
a date NOT NULL,
o boolean NOT NULL
)
PARTITION BY RANGE (o, sl, a);

CREATE TABLE sg_19_01_d PARTITION OF sg FOR VALUES FROM (false, '5',
'2019-01-01') TO (false, '5', '2019-02-01');
CREATE TABLE sg_19_02_d PARTITION OF sg FOR VALUES FROM (false, '5',
'2019-02-01') TO (false, '5', '2019-03-01');

CREATE TABLE sc (
id bigint,
a date NOT NULL,
sl smallint NOT NULL,
o boolean NOT NULL
)
PARTITION BY RANGE (o, sl, a);

CREATE TABLE sc_19_01_d PARTITION OF sc FOR VALUES FROM (false, '5',
'2019-01-01') TO (false, '5', '2019-02-01');
CREATE TABLE sc_19_02_d PARTITION OF sc FOR VALUES FROM (false, '5',
'2019-02-01') TO (false, '5', '2019-03-01');

INSERT INTO sg_19_01_d(id, sc_fk, geo_id, sl, a, o)
SELECT n, n, 0, 5, '2019-01-01', false
FROM generate_series(1, 1000) AS g(n);

INSERT INTO sg_19_02_d(id, sc_fk, geo_id, sl, a, o)
SELECT n, n, 0, 5, '2019-02-01', false
FROM generate_series(1, 1000) AS g(n);

INSERT INTO sc_19_01_d(id, a, sl, o)
SELECT n, '2019-01-01', 5, false
FROM generate_series(1, 1000) AS g(n);

INSERT INTO sc_19_02_d(id, a, sl, o)
SELECT n, '2019-02-01', 5, false
FROM generate_series(1, 1000) AS g(n);

ANALYZE sg_19_01_d, sc_19_02_d, sc_19_01_d, sc_19_02_d;
-----
Throws this error:
ERROR: duplicate key value violates unique constraint
"pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(61056, 1, f) already exists.

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: PG Bug reporting form (#1)
Re: BUG #15946: "duplicate key" error on ANALYZE of table partitions in transaction

PG Bug reporting form <noreply@postgresql.org> writes:

Running this:
...
Throws this error:
ERROR: duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(61056, 1, f) already exists.

Hm, you don't need all the fancy partitioning stuff:

regression=# create table t as select generate_series(1,10) x;
SELECT 10
regression=# begin;
BEGIN
regression=# analyze t, t;
ERROR: duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(35836, 1, f) already exists.

It appears to work fine without the BEGIN:

regression=# analyze t, t;
ANALYZE

but then

regression=# begin;
BEGIN
regression=# analyze t, t;
ERROR: tuple already updated by self

I think the conclusion is that if we aren't using per-table
transactions we'd better do a CommandCounterIncrement between
tables in vacuum()'s loop.

regards, tom lane
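
[Until the fix described above lands, Tom's transcript suggests two workarounds: name each relation only once per ANALYZE when inside a transaction block, or run the command outside an explicit transaction, where per-table transactions are used and even a repeated list completes. A minimal sketch against Tom's test table `t` from above:]

```sql
BEGIN;
-- Inside a transaction block, list each table exactly once, so no
-- duplicate pg_statistic row is inserted by the same command:
ANALYZE t;
COMMIT;

-- Outside a transaction block, ANALYZE runs with per-table
-- transactions, so even a repeated list succeeds:
ANALYZE t, t;
```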

#3 naveen mahadevuni
nmahadevuni@gmail.com
In reply to: Tom Lane (#2)
Re: BUG #15946: "duplicate key" error on ANALYZE of table partitions in transaction

Hi Tom,

I would like to start contributing to Postgres, and would like to fix this one
if it hasn't already been done.

Thanks,
Naveen

On Sat, Aug 10, 2019 at 7:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:


#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: naveen mahadevuni (#3)
Re: BUG #15946: "duplicate key" error on ANALYZE of table partitions in transaction

naveen mahadevuni <nmahadevuni@gmail.com> writes:

I would like to start contributing to postgres, would like to fix this one
if it hasn't already been done.

Already done -

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=cabe0f298ea7efade11d8171c617e668934d0d09

regards, tom lane