Duplicate Unique Key constraint error
Hi,
I keep getting a duplicate unique key constraint error on my primary key even
though I'm not inserting any duplicates. The records actually get inserted
properly, but my console throws this error and I'm not sure what it's all about.
Corruption of my primary key could be one possibility, but I'm really not sure
how to get rid of such corruption or how to re-index the primary key.
Also, I was wondering what could cause this PK corruption in the first place,
and what this corruption can lead to, i.e. what its downsides are.
Thanks,
~Harpreet
You haven't really given any useful information about your primary key,
but if you are using SERIAL as the column type (INT type with a
sequence) you may just be having a problem with its current value (but
then inserts shouldn't work).
If you are using a sequence here, see what its current value is and
compare it to the highest value in the column. If the sequence's value is
less than the column's max() value, just reset the value of the sequence.
imp=# CREATE TABLE dup_pkey (id SERIAL PRIMARY KEY, insert_order int);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (1);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (2);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (3);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (4);
imp=# SELECT * FROM dup_pkey;
id | insert_order
----+--------------
1 | 1
2 | 2
3 | 3
4 | 4
(4 rows)
Now, if you set the value below what the max() column value is, you will
have a problem with inserts.
imp=# SELECT setval('dup_pkey_id_seq',3);
setval
--------
3
(1 row)
imp=# INSERT INTO dup_pkey (insert_order) VALUES (5);
ERROR: duplicate key violates unique constraint "dup_pkey_pkey"
If this is the case, use setval() to update the value of the sequence to
the max() value of your primary key. You can use \d to get information
about your table, including the sequence name. However if, as you say,
it IS inserting records properly, then this ISN'T going to help.
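If the sequence really has fallen behind, one way to fix it (a sketch, reusing
the dup_pkey example above) is to bump the sequence to the column's current
maximum:

```sql
-- Set the sequence to the highest id in the table, so the next
-- nextval() call returns max(id) + 1 instead of a duplicate.
SELECT setval('dup_pkey_id_seq', (SELECT max(id) FROM dup_pkey));
```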
hth
Ron
My primary key is neither a SERIAL column nor backed by a sequence.
CONSTRAINT pk_dig PRIMARY KEY (dig_id)
This is the clause that I have for my primary key in the create table
script.
Thanks,
~Harpreet
I finally figured out the actual problem, phew.
It's something like two different transactions seeing the same snapshot
of the database.
Transaction 1 started, saw max(dig_id) = 30 and inserted a new row with dig_id = 31.
When Transaction 2 started and read max(dig_id), it was still 30,
and by the time it tried to insert 31, 31 had already been inserted by
Transaction 1, hence the unique key constraint error.
I thought this would be taken care of by the database itself through locking,
but now I really don't know how this locking takes place in Postgres. I used
to work with SQL Server and never faced this problem there.
Please guide me through getting rid of this problem.
Thanks,
~Harpreet
Show quoted text
On 7/10/07, Harpreet Dhaliwal <harpreet.dhaliwal01@gmail.com> wrote:
my primary key is neither SERIAL nor a SEQUENCE.
CONSTRAINT pk_dig PRIMARY KEY (dig_id)
This is the clause that I have for my primary key in the create table
script.thanks,
~HarpreetOn 7/10/07, Ron St-Pierre <ron.pgsql@shaw.ca> wrote:
Harpreet Dhaliwal wrote:
Hi,
I keep getting this duplicate unique key constraint error for my
primary key even
though I'm not inserting anything duplicate. It even inserts the
records properly
but my console throws this error that I'm sure of what it is allabout.
Corruption of my Primary Key can be one of the possibilities but I'm
really not sure how
to get rid of this corruption and how to re-index the primary key.Also, I was wondering what could be the cause of this PK corruption,
if possible and what does can this corruption lead to.
I mean what are its cons.Thanks,
~HarpreetYou haven't really given any useful information about your primary key,
but if you are using SERIAL as the column type (INT type with a
sequence) you may just be having a problem with its current value (but
then inserts shouldn't work).If you are using a sequence here, see what it's current value is and
compare it to the highest value in the column. If its value is less thanthe columns max() value, just reset the value in the sequence.
imp=# CREATE TABLE dup_pkey (id SERIAL PRIMARY KEY, insert_order int);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (1);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (2);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (3);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (4);imp=# SELECT * FROM dup_pkey;
id | insert_order
----+--------------
1 | 1
2 | 2
3 | 3
4 | 4
(4 rows)Now, if you set the value below what the max() column value is, you will
have a problem with inserts.
imp=# SELECT setval('dup_pkey_id_seq',3);
setval
--------
3
(1 row)imp=# INSERT INTO dup_pkey (insert_order) VALUES (5);
ERROR: duplicate key violates unique constraint "dup_pkey_pkey"If this is the case, use setval() to update the value of the sequence to
the max() value of your primary key. You can use \d to get information
about your table, including the sequence name. However if, as you say,
it IS inserting records properly, then this ISN'T going to help.hth
Ron
Why would the server lock the table? It can't know that your intention is
to add one to the returned number and insert it. If that is what you
want, you have to lock the table explicitly.
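A sketch of that explicit-lock approach (assuming a table named dig with key
dig_id, as elsewhere in the thread; the lock mode choice is illustrative):

```sql
BEGIN;
-- Block concurrent INSERT/UPDATE/DELETE so max(dig_id) cannot
-- change between our SELECT and our INSERT.
LOCK TABLE dig IN SHARE ROW EXCLUSIVE MODE;
INSERT INTO dig (dig_id)
SELECT coalesce(max(dig_id), 0) + 1 FROM dig;
COMMIT;
```

The cost is that every concurrent writer now queues behind this lock, which is
why a sequence is the better answer.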
Please guide me through getting rid of this problem.
This exact reason is why sequences are often used for primary keys. I
recommend you change your primary key.
Michael Glaesemann
grzm seespotcode net
This is exactly why you're recommended to use sequences (ie serial
columns) for generating IDs. Taking max()+1 does not work, unless
you're willing to lock the whole table and throw away vast amounts of
concurrency.
regards, tom lane
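Following that advice, one way to move an existing integer key onto a sequence
might look like this (a sketch; the dig/dig_id names come from the thread, the
sequence name is made up, and it assumes the table already has rows):

```sql
-- Create a sequence owned by the key column, start it past the
-- current maximum, then make it the column default.
CREATE SEQUENCE dig_dig_id_seq OWNED BY dig.dig_id;
SELECT setval('dig_dig_id_seq', (SELECT max(dig_id) FROM dig));
ALTER TABLE dig ALTER COLUMN dig_id SET DEFAULT nextval('dig_dig_id_seq');
```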
Thanks a lot for all your suggestions, gentlemen.
I changed it to a SERIAL column and all the pain has been automatically
alleviated :)
Thanks a ton.
~Harpreet
I wonder how SQL Server is handling this? Are they locking the table?
I realize it's off-topic, but I'm still curious.
Sequences are your friend. They come in INT and BIGINT flavors, and
BIGINT is a lot of rows.
Can one set sequences to automatically roll over back to zero?
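On the rollover question: PostgreSQL sequences accept a CYCLE option, which
makes them wrap back to their minimum value instead of raising an error when
the maximum is reached. A sketch with made-up names:

```sql
-- After reaching MAXVALUE, the next nextval() returns MINVALUE (1).
CREATE SEQUENCE wrap_seq MINVALUE 1 MAXVALUE 999999 CYCLE;
```

Of course, cycling a sequence that feeds a primary key just reintroduces
duplicate-key errors once old values come around again.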
Hello,
I use MS Access for data import. Access imports a csv file, makes some
calculations and transfers the data to PostgreSQL.
Unfortunately, it takes a lot of time to transfer the data to PostgreSQL.
My ODBC settings are the following:
[ODBC]
DRIVER=PostgreSQL Unicode
UID=postgres
XaOpt=1
LowerCaseIdentifier=0
UseServerSidePrepare=1
ByteaAsLongVarBinary=0
BI=0
TrueIsMinus1=0
DisallowPremature=1
UpdatableCursors=1
LFConversion=1
ExtraSysTablePrefixes=dd_
CancelAsFreeStmt=0
Parse=1
BoolsAsChar=0
UnknownsAsLongVarchar=0
TextAsLongVarchar=1
UseDeclareFetch=0
Ksqo=1
Optimizer=1
CommLog=0
Debug=0
MaxLongVarcharSize=8190
MaxVarcharSize=255
UnknownSizes=0
Socket=4096
Fetch=100
ConnSettings=
ShowSystemTables=0
RowVersioning=1
ShowOidColumn=0
FakeOidIndex=0
Protocol=7.4-1
ReadOnly=0
SSLmode=allow
PORT=5432
SERVER=localhost
DATABASE=PLANINGZ
Could you suggest what parameter values would yield the best performance for
batch import to PostgreSQL?
Thanks.
On Wed, 11 Jul 2007 at 14:15:02 +0200, Zlatko Matic wrote:
Hello,
please don't hijack other threads. If you only change the subject for a
new question, your mail will be sorted completely wrong
(within modern email clients such as Thunderbird or Mutt).
I use MS Access for data import. Access imports a csv file, makes some
calculations and transfers the data to PostgreSQL.
Why not import the csv file directly into PostgreSQL and do the calculations
within PG?
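Loading the file directly would use COPY; a sketch, with a made-up file path
and table name:

```sql
-- Server-side bulk load; the file must be readable by the backend.
COPY planning_data FROM '/tmp/import.csv' WITH CSV HEADER;
```

One COPY is typically far faster than row-by-row inserts over ODBC.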
Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net
Hello,
I use MS Access for data import. Access imports a csv file, makes some
calculations and transfers the data to PostgreSQL.
Unfortunately, it takes a lot of time to transfer the data to PostgreSQL.
I don't know how you transfer the data. So it's hard to answer.
UpdatableCursors=1
Do you need this?
LFConversion=1
Are you sure you want CR/LF conversion?
Parse=1
Do you need the Parse option? I think you can drop at least one of
UpdatableCursors and Parse.
Fetch=100
You don't need to specify this while you have UseDeclareFetch=0.
RowVersioning=1
Do you need RowVersioning?
Could you suggest what parameter values would yield the best performance for
batch import to PostgreSQL?
Maybe this helps; perhaps someone else will suggest other settings. Experiment
a little bit yourself ;o)
Luf
I have already tried COPY, but it has problems with type casts.
For example, a COPY operation fails because PostgreSQL can't copy the value
7.844,000 into a NUMERIC field...
Regards,
Zlatko
----- Original Message -----
From: "A. Kretschmer" <andreas.kretschmer@schollglas.com>
To: <pgsql-general@postgresql.org>
Sent: Wednesday, July 11, 2007 2:31 PM
Subject: Re: [GENERAL] odbc parameters
On Wed, 11 Jul 2007 at 14:55:28 +0200, Zlatko Matic wrote:
I have already tried COPY, but it has problems with type casts.
For example, a COPY operation fails because PostgreSQL can't copy the value
7.844,000 into a NUMERIC field...
Either copy such values into a temporary table with text columns and use
ordinary SQL functions to fill the destination table (you can convert the
value to numeric with, for instance,
regexp_replace('7.844,000', ',.*$', '')::numeric), or preprocess the file
before the COPY with text tools like sed, awk, perl, ...
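A sketch of that staging-table route (table and column names are invented for
illustration):

```sql
-- 1. Load the raw CSV into an all-text staging table.
CREATE TEMP TABLE staging (amount_raw text);
COPY staging FROM '/tmp/import.csv' WITH CSV;

-- 2. Normalize the European format (dot = thousands separator,
--    comma = decimal point) and cast while filling the real table.
INSERT INTO target (amount)
SELECT replace(replace(amount_raw, '.', ''), ',', '.')::numeric
FROM staging;
```

Note that the two conversions differ: replace() reads '7.844,000' as 7844.000,
while the regexp_replace above keeps '7.844' (i.e. 7.844); which one is correct
depends on what the source data means.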
Andreas
Could you post what you are doing in Access? Are you just copy/pasting
from a local table to a linked ODBC table? Or are you executing insert
statements in a VBA for loop? If so, please post the VBA code.
There are some settings in the ODBC driver that can affect performance,
but the types of SQL statements you are executing often have a much
greater effect on overall performance.
Also, if you can, turn on logging in the ODBC driver; often Access
is executing queries on your behalf that you may be unaware of.
Zlatko Matic wrote:
Hello,
I use MS Access for data import. Access imports a csv file, makes some
calculations and transfers the data to PostgreSQL.
Unfortunately, it takes a lot of time to transfer the data to PostgreSQL.
My ODBC settings are the following:
--
David Gardner
How can one roll a sequence back to zero after deleting records from a
table that uses one?
I see it continuing from the last value of the sequence that was inserted.
Harpreet Dhaliwal wrote:
How can one roll a sequence back to zero after deleting records from a
table that uses one?
I see it continuing from the last value of the sequence that was inserted.
You can use setval(), but normally you just leave it alone. Having
numbers that don't start from 0 is not a problem in most cases.
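For completeness, resetting a sequence after clearing its table might look
like this (the table and sequence names are assumed):

```sql
-- Empty the table, then make the next nextval() return 1 again.
TRUNCATE dig;
SELECT setval('dig_dig_id_seq', 1, false);
```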
--
Alvaro Herrera http://www.flickr.com/photos/alvherre/
Hi! I'm a .signature virus!
cp me into your .signature file to help me spread!