guaranteeing that a sequence never skips
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?
--
dave
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?
You will have to lock the whole table and your parallel performance will
be poor.
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.
Locking the table isn't sufficient to guarantee that a sequence value
never skips. What if a transaction fails and has to be rolled back?
I've written database systems that used pre-numbered checks; what's usually
necessary is to postpone the check-numbering phase until the number of
checks is finalized, so that there's not much chance of anything else
causing a rollback.
--
Mike Nolan
On Sunday 03 October 2004 10:21 am, Scott Marlowe wrote:
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.
There was a thread about this a while back. I'm using a separate counter table
and stored procs that increment the value of the counter - similar to nextval
used for sequences. My "nextval" locks the "counterrow" in question using
"...for update". So while I'm generating the record that requires the
sequential number I'm in the same stored proc and therefore in a transaction.
If I have to roll back, the counter number in the countertable will roll back
too. You just have to make sure your routine to completely generate whatever
you have to generate doesn't take long, because parallel uses of the same
thing will block until your proc commits or rolls back.
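A minimal sketch of this counter-table approach, with invented table and
function names (a hypothetical illustration, not UC's actual code): the
SELECT ... FOR UPDATE locks the counter row so concurrent callers queue,
and a rollback of the enclosing transaction undoes the increment.

```sql
-- Hypothetical counter table: one row per named gapless counter.
CREATE TABLE counters (
    name  text   PRIMARY KEY,
    value bigint NOT NULL DEFAULT 0
);
INSERT INTO counters (name) VALUES ('invoice');

-- A transactional, nextval-like function: unlike a real sequence,
-- its effect rolls back with the enclosing transaction.
CREATE FUNCTION gapless_nextval(p_name text) RETURNS bigint AS '
DECLARE
    v bigint;
BEGIN
    -- FOR UPDATE blocks other callers until this transaction ends.
    SELECT value + 1 INTO v FROM counters WHERE name = p_name FOR UPDATE;
    UPDATE counters SET value = v WHERE name = p_name;
    RETURN v;
END;
' LANGUAGE plpgsql;
```

Call gapless_nextval('invoice') in the same transaction as the insert
that uses the number; either both commit or both roll back.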
UC
--
Open Source Solutions 4U, LLC
2570 Fleetwood Drive
San Bruno, CA 94066
United States
Phone: +1 650 872 2425
Cell:  +1 650 302 2405
Fax:   +1 650 872 2417
On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.

Locking the table isn't sufficient to guarantee that a sequence value
never skips. What if a transaction fails and has to be rolled back?

I've written database systems that used pre-numbered checks; what's usually
necessary is to postpone the check-numbering phase until the number of
checks is finalized, so that there's not much chance of anything else
causing a rollback.
--
I didn't mean to use a sequence, sorry for being vague. I meant this:
lock table
select max(idfield)+1
insert new row
disconnect.
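Spelled out as SQL, with an invented table for illustration (the
coalesce handles the empty-table case):

```sql
BEGIN;
-- Block other writers; plain SELECTs still work in EXCLUSIVE mode.
LOCK TABLE invoices IN EXCLUSIVE MODE;
INSERT INTO invoices (id, customer)
    SELECT coalesce(max(id), 0) + 1, 'ACME' FROM invoices;
COMMIT;
```

If the transaction rolls back instead, the row and its number simply
never existed, so no gap appears; the price, as noted, is that all
inserters serialize on the table lock.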
A long time ago, in a galaxy far, far away, lists@zara.6.isreserved.com (David Garamond) wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be
34)?
What is guaranteed is that sequence values will not be repeated
(assuming you don't do a setval() :-).)
If value caching is turned on, then each connection may grab them in
groups of (say) 100, so that one insert, on one (not-too-busy)
connection might add in 5399, and an insert on another connection,
that has been much busier, might add in 6522, and those values differ
quite a bit :-).
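That per-connection caching is a property of the sequence itself; for
instance (sequence name invented):

```sql
CREATE SEQUENCE invoice_seq CACHE 100;
-- Each backend that calls nextval() preallocates 100 values at a time;
-- values it preallocates but never uses are lost, leaving gaps, and
-- different backends hand out values from different blocks.
```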
Sometimes a business requirement is that a serial sequence never
skips, e.g. when generating invoice/ticket/formal letter
numbers. Would an INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1
FROM t, ...) suffice, or must I install a trigger too to do
additional checking?
This is a troublesome scenario...
1. Your requirement makes it MUCH harder to deal with concurrent
updates efficiently.
That "SELECT MAX()" destroys the efficiency achieved by the use of
sequences.
2. It may be difficult to avoid deadlocks of some sort.
Suppose several inserts take place more or less simultaneously. In
that case, they might all get the same value of SELECT MAX(), and only
one of them could therefore succeed. The others would get
"clotheslined" by the UNIQUE constraint, like a hapless fugitive that
runs into a tree branch, and you'll see transactions failing due to
concurrency. Not a good thing.
Another possibility would be to have _two_ fields, one, call it C1,
using a sequence, and the other, C2, which gets populated later.
Periodically, a process goes through and calculates CURR=SELECT
MAX(C2), and then walks through all of the records populated with
values in C1. For each non-null C1, it assigns C2 based on the value
of CURR, and then empties C1.
That means that there is a period of time during which the "ultimate"
sequence value, C2, is not populated, which might or might not be
a problem for your application.
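One hedged sketch of that periodic pass (table, column, and function
names invented), assuming it runs as a single periodic job with no
concurrent renumbering:

```sql
-- c1: provisional value from a sequence; c2: final gapless number.
CREATE FUNCTION assign_final_numbers() RETURNS void AS '
DECLARE
    curr bigint;
    r    record;
BEGIN
    SELECT coalesce(max(c2), 0) INTO curr FROM t;
    -- Visit unnumbered rows in provisional order, number them densely,
    -- and empty c1 as the text above describes.
    FOR r IN SELECT c1 FROM t WHERE c2 IS NULL ORDER BY c1 LOOP
        curr := curr + 1;
        UPDATE t SET c2 = curr, c1 = NULL WHERE c1 = r.c1;
    END LOOP;
END;
' LANGUAGE plpgsql;
```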
--
(format nil "~S@~S" "cbbrowne" "linuxfinances.info")
http://linuxfinances.info/info/linuxxian.html
Life's a duck, and then you sigh.
In an attempt to throw the authorities off his trail, smarlowe@qwest.net ("Scott Marlowe") transmitted:
On Sun, 2004-10-03 at 11:48, Mike Nolan wrote:
On Sun, 2004-10-03 at 08:58, David Garamond wrote:
Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

You will have to lock the whole table and your parallel performance will
be poor.

Locking the table isn't sufficient to guarantee that a sequence value
never skips. What if a transaction fails and has to be rolled back?

I've written database systems that used pre-numbered checks; what's usually
necessary is to postpone the check-numbering phase until the number of
checks is finalized, so that there's not much chance of anything else
causing a rollback.
--

I didn't mean to use a sequence, sorry for being vague. I meant this:
lock table
select max(idfield)+1
insert new row
disconnect.
Yeah, that'll work, so long as you're prepared to wait for the table
to be available.
I think I like my idea of putting in provisional values, and then
fixing them up later...
You could do this via a sequence thus:
select setval('ourseq', 250000000);
  -- Make sure the sequence starts way high

create index idf_250m on thistable(idfield) where idfield > 250000000;
  -- Provide an efficient way to look up the entries that need to get reset
Then, every once in a while, a separate process would go in, see the
highest value on idfield < 250M, and rewrite the idfield on all of the
tuples where idfield > 250M. It would be efficient due to the partial
index. It limits the number of documents to 250M, but I'm sure that
can be alleviated when it turns into an issue...
--
output = reverse("gro.mca" "@" "enworbbc")
http://linuxfinances.info/info/nonrdbms.html
Would I be an optimist or a pessimist if I said my bladder was half
full?
Then, every once in a while, a separate process would go in, see the
highest value on idfield < 250M, and rewrite the idfield on all of the
tuples where idfield > 250M. It would be efficient due to the partial
index. It limits the number of documents to 250M, but I'm sure that
can be alleviated when it turns into an issue...
I think you'd be better off using two columns. Call the first one the
'work ticket' for the check request, and you don't really care if it has gaps
in it or not, its primary purpose is to ensure that each check request
has a unique document number of some kind, so a sequence works fine.
One and only one program assigns the actual check numbers--in a separate
column.
That's the sort of thing that most commercial packages do, even though it
seems clumsy and adds an extra step, and that's why they do it that way, too.
--
Mike Nolan
Sometimes a business requirement is that a serial sequence
never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...)
suffice,
or must I install a trigger too to do additional checking?
If id is defined unique it should be ok, but if two of those statements
happen to run concurrently you could get duplicate key violations and
must be prepared to retry the transaction. Savepoints in 8.0 will
probably come in handy.

Of course, concurrency is determined by the length of time between
inserting the new value and committing. So it would help if you had all
other values/statements ready/done by the time you insert into this
table, and can commit straight after this statement.
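The retry Andre describes could be sketched with PostgreSQL 8.0's
plpgsql exception handling (function and table names invented; under
READ COMMITTED the conflicting INSERT waits for the competing
transaction, and the retry then re-reads max() with a fresh snapshot):

```sql
CREATE FUNCTION insert_invoice(p_customer text) RETURNS bigint AS '
DECLARE
    new_id bigint;
BEGIN
    LOOP
        SELECT coalesce(max(id), 0) + 1 INTO new_id FROM invoices;
        BEGIN
            INSERT INTO invoices (id, customer) VALUES (new_id, p_customer);
            RETURN new_id;
        EXCEPTION WHEN unique_violation THEN
            -- Someone else got this number first; loop and recompute.
            NULL;
        END;
    END LOOP;
END;
' LANGUAGE plpgsql;
```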
Andre