conditional insert
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what I'm
trying to do is skip the insert if that record already exists.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
does anyone know how I can perform this insert?
thanks
p
Στις Monday 05 September 2011 12:38:34 ο/η Pau Marc Muñoz Torres έγραψε:
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what I'm
trying to do is skip the insert if that record already exists.
That's what primary/unique keys are for.
Isolate the columns which you consider to be a correct unique key
and create a unique key on them.
There is no notion of a conditional insert that I know of.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
does anyone know how I can perform this insert?
thanks
p
--
Achilleas Mantzios
On 05/09/2011 10:38, Pau Marc Muñoz Torres wrote:
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what
I'm trying to do is skip the insert if that record already exists.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
Shouldn't it be EXISTS, not EXIST?
Anyway, what is the error you're getting?
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
rod@iol.ie
Pau Marc Muñoz Torres, 05.09.2011 11:38:
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what I'm trying to do is skip the insert if that record already exists.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
does anyone know how I can perform this insert?
INSERT INTO xxx
SELECT 1,2,3
WHERE NOT EXISTS (SELECT ...)
Regards
Thomas
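[Editor's note: Thomas's INSERT ... SELECT ... WHERE NOT EXISTS pattern, spelled out against a hypothetical three-column table. The reason the original attempt failed is that INSERT ... VALUES does not accept a WHERE clause; the SELECT form does. Table and column names here are only illustrative:]

```sql
-- Hypothetical target table:
-- CREATE TABLE xxx (a int, b int, c int);

-- Insert (1,2,3) only if an equal row is not already present.
INSERT INTO xxx (a, b, c)
SELECT 1, 2, 3
WHERE NOT EXISTS (
    SELECT 1 FROM xxx WHERE a = 1 AND b = 2 AND c = 3
);
```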
On 09/05/2011 12:38 PM, Pau Marc Muñoz Torres wrote:
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what
I'm trying to do is skip the insert if that record already exists.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
does anyone know how I can perform this insert?
thanks
p
You can either do an INSERT (...) SELECT ... FROM ... WHERE ...,
or you can add a rule to the table that checks whether the key exists
and, if so, does an update or nothing instead.
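[Editor's note: the rule-based approach Sim mentions might look roughly like this. The table and column names are assumptions for illustration, not anything from the thread:]

```sql
-- Silently skip INSERTs whose key already exists (illustrative names):
CREATE RULE skip_dup AS
ON INSERT TO xxx
WHERE EXISTS (SELECT 1 FROM xxx WHERE a = NEW.a)
DO INSTEAD NOTHING;
```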
I don't see it clear; let me give an example.
I have the following table
molec varchar(30)
seq varchar(100)
where I insert my values.
Let's imagine that I have a record introduced as ('ubq', 'aadgylpittrs').
How can I prevent inserting another record where molec='ubq'?
thanks
2011/9/5 Thomas Kellerer <spam_eater@gmx.net>
Pau Marc Muñoz Torres, 05.09.2011 11:38:
Hi folks
I'm trying to perform a conditional insert into a table; in fact, what
I'm trying to do is skip the insert if that record already exists.
Googling, I found something like
insert into XX values (1,2,3) where not exist (select ....);
but I'm having an error near where...
does anyone know how I can perform this insert?
INSERT INTO xxx
SELECT 1,2,3
WHERE NOT EXISTS (SELECT ...)
Regards
Thomas
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
--
Pau Marc Muñoz Torres
Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)
telèfon: (+34) 935 86 89 39
Email: paumarc.munoz@bioinf.uab.cat
On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:
I don't see it clear; let me give an example.
I have the following table
molec varchar(30)
seq varchar(100)
where I insert my values.
Let's imagine that I have a record introduced as ('ubq', 'aadgylpittrs').
How can I prevent inserting another record where molec='ubq'?
thanks
Either put a unique constraint on molec or do
insert into tbl(molec,seq)
select 'ubq', 'aadgylpittrs' where not exists(select molec from tbl
where molec='ubq')
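[Editor's note: the unique-constraint half of Sim's answer, as DDL. The constraint name is made up:]

```sql
ALTER TABLE tbl ADD CONSTRAINT tbl_molec_key UNIQUE (molec);
-- A second INSERT with molec = 'ubq' now raises a unique_violation
-- error instead of silently creating a duplicate row.
```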
Ok, thanks Sim, now I see it.
P
2011/9/5 Sim Zacks <sim@compulab.co.il>
On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:
[...]
Either put a unique constraint on molec or do
insert into tbl(molec,seq)
select 'ubq', 'aadgylpittrs' where not exists(select molec from tbl where
molec='ubq')
I agree that there are better ways to do this.
But for me this works. (legacy driven situation)
INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE not exists (select * from tbinitialisatie where col1 = 'x' and
col2 = 'y')
LIMIT 1
Pau Marc Muñoz Torres wrote:
Ok, thanks Sim, now I see it.
[...]
At 07:02 PM 9/5/2011, J. Hondius wrote:
I agree that there are better ways to do this.
But for me this works. (legacy driven situation)
INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE not exists (select * from tbinitialisatie where col1 = 'x'
and col2 = 'y')
LIMIT 1
Hi,
That does not work 100%. Try it with two psql instances.
Do:
*** psql #1
begin;
INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE not exists (select * from tbinitialisatie where col1 = 'x'
and col2 = 'y')
LIMIT 1 ;
*** psql #2
begin;
INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE not exists (select * from tbinitialisatie where col1 = 'x'
and col2 = 'y')
LIMIT 1 ;
commit;
*** psql #1
commit;
You should find duplicate inserts.
In most cases the "begin" and "commit" are very close together so you
won't notice the problem. But one day you might get unlucky.
Your options are:
a) put a unique constraint and handle the insert errors when they occur
b) lock the entire table first (affects performance: blocks all
selects on that table)
c) use a lock elsewhere (but this requires all applications using the
database to cooperate and use the lock).
d) wait for SQL MERGE to be implemented ( but from what I see the
current proposal seems to require a) or b) anyway:
http://wiki.postgresql.org/wiki/SQL_MERGE )
You could do both a) and b) too. Or both a) and c) (if you don't want
insert errors in the cooperating apps and want to allow other selects
during the transaction).
Regards,
Link.
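[Editor's note: option a) can be handled server-side with a PL/pgSQL exception block. This is a sketch; it assumes a unique constraint already exists on (col1, col2):]

```sql
CREATE FUNCTION insert_if_absent(p_col1 text, p_col2 text)
RETURNS void AS $$
BEGIN
    INSERT INTO tbinitialisatie (col1, col2) VALUES (p_col1, p_col2);
EXCEPTION WHEN unique_violation THEN
    NULL;  -- the row already exists: ignore the error
END;
$$ LANGUAGE plpgsql;
```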
On Tue, Sep 6, 2011 at 1:50 PM, Lincoln Yeoh <lyeoh@pop.jaring.my> wrote:
At 07:02 PM 9/5/2011, J. Hondius wrote:
[...]
That does not work 100%. Try it with two psql instances.
[...]
Your options are:
a) put a unique constraint and handle the insert errors when they occur
b) lock the entire table first (affects performance: blocks all selects on
that table)
c) use a lock elsewhere (but this requires all applications using the
database to cooperate and use the lock).
d) wait for SQL MERGE to be implemented ( but from what I see the current
proposal seems to require a) or b) anyway:
http://wiki.postgresql.org/wiki/SQL_MERGE )
b) doesn't block reads if you lock in EXCLUSIVE mode. a) is the best
way to go if you prefer to handle errors on the client and/or
concurrency is important...c) otherwise.
merlin
On Tue, Sep 6, 2011 at 3:45 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Tue, Sep 6, 2011 at 1:50 PM, Lincoln Yeoh <lyeoh@pop.jaring.my> wrote:
[...]
b) doesn't block reads if you lock in EXCLUSIVE mode. a) is the best
way to go if you prefer to handle errors on the client and/or
concurrency is important...c) otherwise.
whoops! meant to say b) otherwise! As far as c) goes, that is
essentially an advisory lock for the purpose -- using advisory locks
in place of mvcc locks is pretty weak sauce -- they should be used
when what you are locking doesn't follow mvcc rules.
merlin
At 05:23 AM 9/7/2011, Merlin Moncure wrote:
On Tue, Sep 6, 2011 at 3:45 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
b) doesn't block reads if you lock in EXCLUSIVE mode. a) is the best
way to go if you prefer to handle errors on the client and/or
concurrency is important...c) otherwise.
whoops! meant to say b) otherwise! As far as c) goes, that is
essentially an advisory lock for the purpose -- using advisory locks
in place of mvcc locks is pretty weak sauce -- they should be used
when what you are locking doesn't follow mvcc rules.
merlin
Don't you have to block SELECTs so that the SELECTs get serialized?
Otherwise concurrent SELECTs can occur at the same time, find no
existing rows, then "all" the inserts proceed and you get errors (or dupes).
That's how Postgresql still works right? I haven't really been keeping up.
From what I see this (UPSERT/MERGE) has been a common problem/query
over the years but it's not in a Postgresql FAQ and many people seem
to be using methods that don't actually work. Google shows that many
are even recommending those methods to others. Postgresql might still
get blamed for the resulting problems.
Regards,
Link.
On Wed, Sep 07, 2011 at 11:45:11PM +0800, Lincoln Yeoh wrote:
Don't you have to block SELECTs so that the SELECTs get serialized?
If you want to do that, why wouldn't you just use serializable mode?
A
--
Andrew Sullivan
ajs@crankycanuck.ca
On Wed, Sep 7, 2011 at 10:45 AM, Lincoln Yeoh <lyeoh@pop.jaring.my> wrote:
At 05:23 AM 9/7/2011, Merlin Moncure wrote:
[...]
Don't you have to block SELECTs so that the SELECTs get serialized?
Otherwise concurrent SELECTs can occur at the same time, find no existing
rows, then "all" the inserts proceed and you get errors (or dupes).
That's how Postgresql still works right? I haven't really been keeping up.
yeah -- but you only need to block selects if you are selecting in the
inserting transaction (this is not a full upsert). if both writers
are doing:
begin;
lock table foo exclusive;
insert into foo select ... where ...;
commit;
is good enough. btw even if you are doing upsert pattern
(lock...select for update...insert/update), you'd be fine with
straight exclusive locks because the 'for update' lock takes a higher
lock that is blocked by exclusive. A basic rule of thumb is to try
and not fully block readers unless absolutely necessary...basically
maintenance operations.
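[Editor's note: Merlin's lock-then-insert pattern written out in full. The table foo and its columns are placeholders:]

```sql
BEGIN;
-- EXCLUSIVE mode blocks other writers and other EXCLUSIVE locks,
-- but plain SELECTs (ACCESS SHARE) still go through.
LOCK TABLE foo IN EXCLUSIVE MODE;
INSERT INTO foo (id, val)
SELECT 1, 'x'
WHERE NOT EXISTS (SELECT 1 FROM foo WHERE id = 1);
COMMIT;
```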
From what I see this (UPSERT/MERGE) has been a common problem/query over the
years but it's not in a Postgresql FAQ and many people seem to be using
methods that don't actually work. Google shows that many are even
recommending those methods to others. Postgresql might still get blamed for
the resulting problems.
yeah -- there are two basic ways to do upsert -- a) table lock b) row
lock with loop/retry (either in app or server side via procedure). I
greatly prefer a) for simplicity's sake unless you are shooting for
maximum possible concurrency.
@andrew s: going SERIALIZABLE doesn't help if you're trying to eliminate
cases that would push you into retrying the transaction.
merlin
On Wed, Sep 07, 2011 at 02:51:32PM -0500, Merlin Moncure wrote:
@andrew s: going SERIALIZABLE doesn't help if you're trying to eliminate
cases that would push you into retrying the transaction.
Well, no, of course. But why not catch the failure and retry? I
guess I just don't get the problem, since I hear people say this all
the time. (I mean, I've also seen places where 'upsert' would be
cool, but it doesn't seem trivial to do in a general way and you can
do this with catch-serialization-error-and-retry, I think?)
A
--
Andrew Sullivan
ajs@crankycanuck.ca
On Wed, Sep 7, 2011 at 3:04 PM, Andrew Sullivan <ajs@crankycanuck.ca> wrote:
On Wed, Sep 07, 2011 at 02:51:32PM -0500, Merlin Moncure wrote:
@andrew s: going SERIALIZABLE doesn't help if you're trying to eliminate
cases that would push you into retrying the transaction.
Well, no, of course. But why not catch the failure and retry? I
guess I just don't get the problem, since I hear people say this all
the time. (I mean, I've also seen places where 'upsert' would be
cool, but it doesn't seem trivial to do in a general way and you can
do this with catch-serialization-error-and-retry, I think?)
good points, but consider that savepoints have a certain amount of
performance overhead, and there may be some dependent client side
processing that is non-trivial to roll back. Also, if you have a lot
of contention, things can get nasty very quickly -- a full lock is
reliable, simple, and fast, and can be done in one round trip.
Any solution that doesn't have loops is inherently more robust than
one that does. I'll rest my case on that point -- consider very
carefully that the upsert loop example presented in the docs for years
was vulnerable to an infinite loop condition that was caught by one of
our users in production. That completely turned me off towards that
general method of dealing with these types of problems unless there is
really no other reasonable way to do it.
merlin
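[Editor's note: the loop/retry pattern Merlin refers to looks roughly like the long-standing example in the PostgreSQL docs. Here it is sketched against the molec/seq table from earlier in the thread; the function name is made up, and it assumes a unique constraint on molec:]

```sql
CREATE FUNCTION upsert_molec(p_molec varchar, p_seq varchar)
RETURNS void AS $$
BEGIN
    LOOP
        -- first try to UPDATE an existing row
        UPDATE tbl SET seq = p_seq WHERE molec = p_molec;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to INSERT; a concurrent writer may beat us
        BEGIN
            INSERT INTO tbl (molec, seq) VALUES (p_molec, p_seq);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- another transaction inserted first; loop and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```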
At 03:51 AM 9/8/2011, Merlin Moncure wrote:
yeah -- but you only need to block selects if you are selecting in the
inserting transaction (this is not a full upsert). if both writers
are doing:
begin;
lock table foo exclusive;
insert into foo select ... where ...;
commit;
is good enough. btw even if you are doing upsert pattern
(lock...select for update...insert/update), you'd be fine with
straight exclusive locks because the 'for update' lock takes a higher
lock that is blocked by exclusive. A basic rule of thumb is to try
and not fully block readers unless absolutely necessary...basically
maintenance operations.
Yeah it works if all the inserters do the lock table (or select for
update), and provides better performance.
But if you're paranoid and lazy - a full lock will ensure that your
code won't get dupe errors even if someone else's code or manual
control doesn't do the lock table (they might get the dupe errors[1],
but that's their problem ;) ). So your code can safely assume that
any DB errors that occur are those that deserve a full rollback of
everything (which is what Postgresql "likes" by default). This means
fewer scenarios to handle so you don't need to write as much code,
nor document and support as much code ;).
Link.
[1]: I'm assuming a unique constraint is present - the locking is to
simplify things.
At 03:51 AM 9/8/2011, Merlin Moncure wrote:
Don't you have to block SELECTs so that the SELECTs get serialized?
Otherwise concurrent SELECTs can occur at the same time, find no existing
rows, then "all" the inserts proceed and you get errors (or dupes).
That's how Postgresql still works right? I haven't really been keeping up.
yeah -- but you only need to block selects if you are selecting in the
inserting transaction (this is not a full upsert). if both writers
are doing:
begin;
lock table foo exclusive;
insert into foo select ... where ...;
commit;
is good enough. btw even if you are doing upsert pattern
(lock...select for update...insert/update), you'd be fine with
straight exclusive locks because the 'for update' lock takes a higher
lock that is blocked by exclusive. A basic rule of thumb is to try
and not fully block readers unless absolutely necessary...basically
maintenance operations.
Oh wait, now I think I get it. "lock table foo exclusive" will block
the inserts too, so I wouldn't get dupe errors even if other
transactions "blindly" insert dupes at the same time. The other
transactions might get the dupe errors, but mine won't as long as it
selects first and only inserts if there are no rows at that point.
Is that correct?
Link.
On Thu, Sep 8, 2011 at 9:14 AM, Lincoln Yeoh <lyeoh@pop.jaring.my> wrote:
At 03:51 AM 9/8/2011, Merlin Moncure wrote:
[...]
Oh wait, now I think I get it. "lock table foo exclusive" will block the
inserts too, so I wouldn't get dupe errors even if other transactions
"blindly" insert dupes at the same time. The other transactions might get
the dupe errors, but mine won't as long as it selects first and only inserts
if there are no rows at that point.
Is that correct?
correct -- your transactions never get dup errors and external
transactions only get them if they, say, select without update before
the upsert (which is a bug any way you slice it).
fully blocking readers on a high traffic table is a good way to crash
your application. bring this issue up to any 'sql server admin' and
they'll start to develop a nervous tic...
merlin