Bug?

Started by Rasmus Resen Amossen · over 24 years ago · 8 messages · general

I am using postgresql-7.0.3 and am now having a strange problem!
I have a table with a "serial" attribute. For example:
CREATE TABLE test (
id SERIAL NOT NULL,
val INT
);

Then the following insertions create an error:
INSERT INTO test(id,val) VALUES (1,1);
INSERT INTO test(val) VALUES (1);

Error:
Cannot insert a duplicate key into unique index test_id_key

After that I can just repeat the second line and insert without errors.
Is this bug fixed in newer versions?

--
Rasmus Resen Amossen | stud.mat.dat at the University of Copenhagen
Jagtvej 120, -244 | http://www.math.ku.dk/muh
2200 Kbh. N | http://www.ruf.adr.dk

#2Doug McNaught
doug@wireboard.com
In reply to: Rasmus Resen Amossen (#1)
Re: Bug?

Rasmus Resen Amossen <NOSPAM@hey.com> writes:

CREATE TABLE test (
id SERIAL NOT NULL,
val INT
);

Then the following insertions create an error:
INSERT INTO test(id,val) VALUES (1,1);
INSERT INTO test(val) VALUES (1);

Error:
Cannot insert a duplicate key into unique index test_id_key

After that I can just repeat the second line and insert without errors.
Is this bug fixed in newer versions?

Not a bug. The second insert of the two above increments the SERIAL
sequence for 'id', and the incremented value is the same as the one
you explicitly inserted earlier. Since SERIAL columns have a unique
index on them, the insert fails.

When you repeat the second insert, the sequence is incremented again,
and this time it doesn't collide.
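
A minimal psql sketch of that sequence of events (assuming a freshly created test table, so the sequence starts at 1; the result lines are omitted):

test=# INSERT INTO test(id,val) VALUES (1,1); -- sequence untouched, still at 1
test=# INSERT INTO test(val) VALUES (1);      -- implicit nextval() yields 1: collision
ERROR:  Cannot insert a duplicate key into unique index test_id_key
test=# INSERT INTO test(val) VALUES (1);      -- nextval() yields 2: succeeds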

-Doug
--
Let us cross over the river, and rest under the shade of the trees.
--T. J. Jackson, 1863

#3Stephan Szabo
sszabo@megazone23.bigpanda.com
In reply to: Rasmus Resen Amossen (#1)
Re: Bug?

On Thu, 15 Nov 2001, Rasmus Resen Amossen wrote:

I am using postgresql-7.0.3 and am now having a strange problem!
I have a table with a "serial" attribute. For example:
CREATE TABLE test (
id SERIAL NOT NULL,
val INT
);

Then the following insertions create an error:
INSERT INTO test(id,val) VALUES (1,1);
INSERT INTO test(val) VALUES (1);

Error:
Cannot insert a duplicate key into unique index test_id_key

After that I can just repeat the second line and insert without errors.
Is this bug fixed in newer versions?

You shouldn't insert values into the serial column :). Or at least if you
are going to do manual inserts you need to update the sequence as well.

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Rasmus Resen Amossen (#1)
Re: Bug?

Rasmus Resen Amossen <NOSPAM@hey.com> writes:

I got a table with a "serial" attribute. For example
CREATE TABLE test (
id SERIAL NOT NULL,
val INT
);

Then the following insertions create an error:
INSERT INTO test(id,val) VALUES (1,1);
INSERT INTO test(val) VALUES (1);

Error:
Cannot insert a duplicate key into unique index test_id_key

If you're going to assign IDs manually, you might want to set
the sequence generator past those values. See setval().
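
For example (a sketch; test_id_seq is the sequence name that SERIAL generates by default for test.id):

test=# SELECT setval('test_id_seq', 1);  -- mark 1 as already used
test=# INSERT INTO test(val) VALUES (1); -- implicit nextval() now returns 2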

regards, tom lane

#5Andrew G. Hammond
drew@xyzzy.dhs.org
In reply to: Doug McNaught (#2)
Re: Bug?


On 2001 November 15 10:24 am, Doug McNaught wrote:

Rasmus Resen Amossen <NOSPAM@hey.com> writes:

CREATE TABLE test (
id SERIAL NOT NULL,
val INT
);

Then the following insertions create an error:
INSERT INTO test(id,val) VALUES (1,1);
INSERT INTO test(val) VALUES (1);

Error:
Cannot insert a duplicate key into unique index test_id_key

After that I can just repeat the second line and insert without errors.
Is this bug fixed in newer versions?

Not a bug. The second insert of the two above increments the SERIAL
sequence for 'id', and the incremented value is the same as the one
you explicitly inserted earlier. Since SERIAL columns have a unique
index on them, the insert fails.

When you repeat the second insert, the sequence is incremented again,
and this time it doesn't collide.

Now maybe THAT behaviour should be a bug. Since the nextval() is implicit in
the second INSERT, and the second INSERT fails, it would make sense to roll
back the nextval().

Except of course that it appears that sequences are not within the scope of
transactions:

test=# SELECT nextval('test_id_seq');
nextval
---------
3
(1 row)

test=# BEGIN; SELECT nextval('test_id_seq'); ROLLBACK;
BEGIN
nextval
---------
4
(1 row)

ROLLBACK
test=# SELECT nextval('test_id_seq');
nextval
---------
5
(1 row)

Is this correct behaviour? ACID compliant? The argument in its favour is
pretty obvious: if you never go backwards then you'll never get an accidental
duplication as follows:

t1 BEGIN; SELECT nextval('test_id_seq') # 1
t2 SELECT nextval('test_id_seq') # 2
t1 ROLLBACK # test_id_seq decremented
t3 SELECT nextval('test_id_seq') # 2

However wouldn't locking the sequence be more proper behaviour?

--
Andrew G. Hammond mailto:drew@xyzzy.dhs.org http://xyzzy.dhs.org/~drew/
56 2A 54 EF 19 C0 3B 43 72 69 5B E3 69 5B A1 1F 613-389-5481
5CD3 62B0 254B DEB1 86E0 8959 093E F70A B457 84B1
"To blow recursion you must first blow recur" -- me

#6Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew G. Hammond (#5)
Re: Bug?

"Andrew G. Hammond" <drew@xyzzy.dhs.org> writes:

Now maybe THAT behaviour should be a bug. Since the nextval() is
implicit in the second INSERT, and the second INSERT fails, it would
make sense to roll back the nextval().

nextval() is never rolled back; this is quite intentional. See many past
discussions in the archives ...

regards, tom lane

#7Stephan Szabo
sszabo@megazone23.bigpanda.com
In reply to: Andrew G. Hammond (#5)
Re: Bug?

Is this correct behaviour? ACID compliant? The argument in its favour is
pretty obvious: if you never go backwards then you'll never get an accidental
duplication as follows:

t1 BEGIN; SELECT nextval('test_id_seq') # 1
t2 SELECT nextval('test_id_seq') # 2
t1 ROLLBACK # test_id_seq decremented
t3 SELECT nextval('test_id_seq') # 2

However wouldn't locking the sequence be more proper behaviour?

That's the intended behavior. Sequences aren't defined to give sequential
values, merely unique ones across their output set. The point is to
prevent two insert transactions from blocking on each other trying to get
the sequence value.
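
A hypothetical interleaving of two sessions illustrates the trade-off; if nextval() honoured transaction boundaries, the second session would have to wait:

A: BEGIN; INSERT INTO test(val) VALUES (1); -- implicit nextval(), say it returns 10
B: INSERT INTO test(val) VALUES (2);        -- gets 11 immediately; a rollback-able
                                            -- sequence would block here until A finished
A: ROLLBACK;                                -- id 10 is never used, leaving a gap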

#8Doug McNaught
doug@wireboard.com
In reply to: Rasmus Resen Amossen (#1)
Re: Bug?

"Andrew G. Hammond" <drew@xyzzy.dhs.org> writes:

Is this correct behaviour? ACID compliant? The argument in its favour is
pretty obvious: if you never go backwards then you'll never get an accidental
duplication as follows:

t1 BEGIN; SELECT nextval('test_id_seq') # 1
t2 SELECT nextval('test_id_seq') # 2
t1 ROLLBACK # test_id_seq decremented
t3 SELECT nextval('test_id_seq') # 2

However wouldn't locking the sequence be more proper behaviour?

As I understand it, this is one case where efficiency wins out--if you
locked the sequence, some apps could end up with a lot of lock
contention. So we don't lock and don't roll back, and everything works
fine except that there can be gaps in a SERIAL column.

I don't think sequences are in the SQL standard so we're not really
violating anything. ;)

-Doug
--
Let us cross over the river, and rest under the shade of the trees.
--T. J. Jackson, 1863