Duplicate key error
Hi!
Sometimes duplicate key error
duplicate key value violates unique constraint "session_pkey"
Key (workplace)=(WIN-N9BSKUNKBC8 ) already exists.
occurs in script:
delete from session where workplace='WIN-N9BSKUNKBC8' ;
INSERT INTO session (workplace,ipaddress,logintime,loggeduser)
SELECT 'WIN-N9BSKUNKBC8' ,
inet_client_addr()::CHAR(14),current_timestamp::CHAR(28),CURRENT_USER
WHERE NOT EXISTS (SELECT 1 FROM session WHERE workplace='WIN-N9BSKUNKBC8' )
The script is run from Windows Task Scheduler every 10 minutes. The error
occurs only sometimes.
How can I fix this?
Table is defined as
CREATE TABLE public.session
(
loggeduser character(10) ,
workplace character(16) NOT NULL,
ipaddress character(20) ,
logintime character(28) ,
activity timestamp with time zone,
CONSTRAINT session_pkey PRIMARY KEY (workplace)
)
Environment:
PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit
Windows server 2019
psqlODBC driver 13.00.0000
I already asked about this on this list many years ago. At that time Tom
replied that it looked like an index race condition bug which was fixed in
a later Postgres 9 version.
However, this issue still exists in Postgres 13.1.
Andrus.
On 3/3/21 7:23 AM, Andrus wrote:
Hi!
Sometimes duplicate key error
duplicate key value violates unique constraint "session_pkey"
Key (workplace)=(WIN-N9BSKUNKBC8 ) already exists.
occurs in script:
delete from session where workplace='WIN-N9BSKUNKBC8' ;
INSERT INTO session (workplace,ipaddress,logintime,loggeduser)
SELECT 'WIN-N9BSKUNKBC8' ,
inet_client_addr()::CHAR(14),current_timestamp::CHAR(28),CURRENT_USER
WHERE NOT EXISTS (SELECT 1 FROM session WHERE workplace='WIN-N9BSKUNKBC8' )
The script is run from Windows Task Scheduler every 10 minutes. The error
occurs only sometimes.
There is no other process inserting to this table?
--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
There is no other process inserting to this table?
There may be other processes on this server trying to insert the same
primary key value (the server name).
The data from the last inserted row should remain.
Andrus.
On 3/3/21 8:08 AM, Andrus wrote:
Hi!
There is no other process inserting to this table?
There may be other processes on this server trying to insert the same
primary key value (the server name).
And if that name already exists there would be a duplicate key error.
The data from the last inserted row should remain.
I'm not understanding the above.
Andrus.
--
Adrian Klaver
adrian.klaver@aklaver.com
It sounds like this should be re-written as an 'insert on conflict do
update' statement.
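For Postgres 9.5 and later, a minimal sketch of that suggestion, reusing the table definition and the casts from the original script (untested against the poster's setup):

```sql
-- Upsert: insert the row, or refresh it if the workplace key already
-- exists. The statement is atomic, so concurrent runs cannot raise a
-- duplicate key error, and no preceding DELETE is needed.
INSERT INTO session (workplace, ipaddress, logintime, loggeduser)
VALUES ('WIN-N9BSKUNKBC8',
        inet_client_addr()::CHAR(14),
        current_timestamp::CHAR(28),
        CURRENT_USER)
ON CONFLICT (workplace) DO UPDATE
   SET ipaddress  = EXCLUDED.ipaddress,
       logintime  = EXCLUDED.logintime,
       loggeduser = EXCLUDED.loggeduser;
```

Whichever session runs last wins, which matches the stated goal that the last login's data should remain.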
Hi!
And if that name already exists there would be a duplicate key error.
The name cannot exist: the first DELETE removes it. The table is not
changed by external processes during the transaction. As double insurance,
the INSERT performs an additional existence check and adds the row only if
the key does not exist.
Last inserted row data should remain.
I'm not understanding the above.
This table should contain the last login time, user and IP address. It
should be updated by every process on login. For this, the old entry is
removed if it exists and a new entry with the same primary key is added.
Andrus.
Hi!
It sounds like this should be re-written as 'insert on conflict do
update' statement.
It should also work in Postgres 9. ON CONFLICT is not available in
Postgres 9.
This code does not insert duplicate key values, does it? Why does it throw
an error?
You can create a test case that runs this code from multiple processes to
try to reproduce the error.
Andrus.
On 3/3/21 11:59 AM, Andrus wrote:
Hi!
And if that name already exists there would be a duplicate key error.
The name cannot exist: the first DELETE removes it. The table is not
changed by external processes during the transaction. As double insurance,
the INSERT performs an additional existence check and adds the row only if
the key does not exist.
The error message:
"
duplicate key value violates unique constraint "session_pkey"
Key (workplace)=(WIN-N9BSKUNKBC8 ) already exists.
"
says something else is inserting/updating using that key value. So
obviously your script is not catching all the conflicts. At this point
your best bet is to monitor the Postgres log and see what else is
happening at the time of the error. I'm guessing you will find another
process working on that table.
Last inserted row data should remain.
I'm not understanding the above.
This table should contain the last login time, user and IP address. It
should be updated by every process on login. For this, the old entry is
removed if it exists and a new entry with the same primary key is added.
Andrus.
--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
says something else is inserting/updating using that key value. So
obviously your script is not catching all the conflicts.
At this point your best bet is to monitor the Postgres log and see
what else is happening at the time of the error. I'm guessing you will
find another process working on that table.
It looks like another process has added the same row during the
transaction.
How can I change the script so that it works starting with Postgres 9.0
and does not cause an error in this case?
Andrus.
On 3/4/21 12:14 AM, Andrus wrote:
Hi!
says something else is inserting/updating using that key value. So
obviously your script is not catching all the conflicts.
At this point your best bet is to monitor the Postgres log and see what
else is happening at the time of the error. I'm guessing you will find
another process working on that table.
It looks like another process has added the same row during the
transaction.
How can I change the script so that it works starting with Postgres 9.0
and does not cause an error in this case?
Having all the processes which try to update that table explicitly set a
serializable transaction would probably eliminate the duplicate key error.
There's no free lunch, though: you'd have to handle the blocking.
--
Angular momentum makes the world go 'round.
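A sketch of that suggestion, assuming the delete+insert shown earlier is the whole transaction. Note that under SERIALIZABLE the conflict surfaces as a serialization failure rather than blocking alone, so the caller still has to retry:

```sql
-- Run the delete+insert under SERIALIZABLE isolation. A concurrent
-- conflicting transaction fails with SQLSTATE 40001
-- (serialization_failure) instead of a duplicate key error; the caller
-- must catch 40001, roll back, and rerun the whole block.
BEGIN ISOLATION LEVEL SERIALIZABLE;
DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';
INSERT INTO session (workplace, ipaddress, logintime, loggeduser)
SELECT 'WIN-N9BSKUNKBC8', inet_client_addr()::CHAR(14),
       current_timestamp::CHAR(28), CURRENT_USER;
COMMIT;  -- on 40001, roll back and retry from BEGIN
```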
On 3/3/21 10:14 PM, Andrus wrote:
Hi!
says something else is inserting/updating using that key value. So
obviously your script is not catching all the conflicts.
At this point your best bet is to monitor the Postgres log and see what
else is happening at the time of the error. I'm guessing you will find
another process working on that table.
It looks like another process has added the same row during the
transaction.
How can I change the script so that it works starting with Postgres 9.0
and does not cause an error in this case?
Since we have not actually seen the entire script nor have any idea what
the other process is, there is no way to answer this.
Andrus.
--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
Since we have not actually seen the entire script nor have any idea
what the other process is, there is no way to answer this.
This is the whole script. It will be run by multiple scheduled tasks,
maybe at the same time.
It registers the logged-in user. Different processes may have the same
user name. In this case one row should remain.
I can just ignore the duplicate key error on commit, but maybe there is
some better way that does not cause the error.
Andrus.
Why not just do a plain update, relying on row-level locking to serialize
requests properly, and then do an insert where not exists? Is there any
value in doing the delete? I don't see it.
Note: the ON CONFLICT clause is supported from 9.5+, and 9.x is already
past EOL. Upgrading to at least v10 is recommended.
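A sketch of that update-then-insert approach, compatible back to Postgres 9.0 and using the same literals and casts as the original script:

```sql
-- Update first: the row lock taken by UPDATE serializes concurrent
-- updaters of an existing row.
UPDATE session
   SET ipaddress  = inet_client_addr()::CHAR(14),
       logintime  = current_timestamp::CHAR(28),
       loggeduser = CURRENT_USER
 WHERE workplace = 'WIN-N9BSKUNKBC8';

-- Insert only if the UPDATE found no row. A narrow race remains between
-- the two statements, so callers should still be prepared to retry on a
-- duplicate key error.
INSERT INTO session (workplace, ipaddress, logintime, loggeduser)
SELECT 'WIN-N9BSKUNKBC8', inet_client_addr()::CHAR(14),
       current_timestamp::CHAR(28), CURRENT_USER
 WHERE NOT EXISTS (SELECT 1 FROM session
                    WHERE workplace = 'WIN-N9BSKUNKBC8');
```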
Hi
Why not just do a plain update, relying on row-level locking to serialize
requests properly, and then do an insert where not exists? Is there any
value in doing the delete? I don't see it.
This is an option. How do I do update+insert in 9+ in plain SQL? Or should
a plpgsql procedure be created for this?
After the insert, if another process inserts the same key value, the
transaction still waits. Should manual locking be used, or is there a
better method?
Andrus.
I just meant a regular update (which might impact 0 rows) and then insert
(not exists) like you are doing already.
--the transaction still waits. Should manual locking be used, or is there
a better method?
I don't follow what you mean.
Hi!
I just meant a regular update (which might impact 0 rows) and then
insert (not exists) like you are doing already.
This causes a duplicate key exception if another process adds the same row
to the table at the same time.
--the transaction still waits. Should manual locking be used, or is there
a better method?
I don't follow what you mean.
Allow access to this table to a single transaction only: wait for an
exclusive lock, do the update/insert, then release the exclusive lock.
Andrus.
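A sketch of that locking approach, assuming an EXCLUSIVE table lock is acceptable for this workload (it blocks all other writers to the table until commit, while still allowing plain reads):

```sql
-- Only one transaction at a time can hold the EXCLUSIVE lock, so the
-- delete+insert pair runs without any concurrent writer and the
-- duplicate key error cannot occur. Other writers simply wait.
BEGIN;
LOCK TABLE session IN EXCLUSIVE MODE;
DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';
INSERT INTO session (workplace, ipaddress, logintime, loggeduser)
VALUES ('WIN-N9BSKUNKBC8', inet_client_addr()::CHAR(14),
        current_timestamp::CHAR(28), CURRENT_USER);
COMMIT;  -- the lock is released here
```

The cost is full serialization of all writes to the table; for a table this small, updated every 10 minutes per workstation, that is unlikely to matter.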