Re: Auto incrementing primary keys
On 18 Feb, 13:36, django_user <amalt...@gmail.com> wrote:
How can I stop PostgreSQL from incrementing the primary key value, so
that even after many failed insert statements it gets the next id value?
"Auto-incrementing" columns, typically implemented using the serial
data type [1], employ sequences.
From the manual:
"To avoid blocking of concurrent transactions that obtain numbers from
the same sequence, a nextval operation is never rolled back; that is,
once a value has been fetched it is considered used, even if the
transaction that did the nextval later aborts. This means that aborted
transactions may leave unused "holes" in the sequence of assigned
values. setval operations are never rolled back, either."
http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html
In other words, to permit a decent level of concurrency, PostgreSQL
doesn't wait to see if a transaction succeeds with a value from a
sequence before updating the sequence. If you want to reset a sequence
so that it always uses the next unused value as determined by looking
at the table, I suppose you could do something like this:
select setval('mytable_id_seq', x) from (select max(id) as x from
mytable) as y;
But I doubt that you would want to do this too often in any system
with any reasonable level of concurrent access to the table or the
sequence concerned.
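The non-transactional behaviour described in the manual can be illustrated with a toy model (plain Python, not PostgreSQL code): nextval advances unconditionally, so any value handed to a transaction that later aborts simply becomes a hole.

```python
# Toy model of a PostgreSQL sequence: nextval() is never rolled back,
# so a value consumed by a failed insert becomes a permanent "hole".
class Sequence:
    def __init__(self):
        self.value = 0

    def nextval(self):
        # Advances unconditionally; there is no undo on transaction abort.
        self.value += 1
        return self.value

seq = Sequence()
assigned = []
for attempt in range(6):
    id_ = seq.nextval()          # id is consumed before the insert runs
    if attempt % 3 == 0:         # pretend only every third insert succeeds
        assigned.append(id_)     # a successful insert keeps its id
    # a failed insert rolls back the row, but not the sequence value

# assigned == [1, 4]: ids 2, 3, 5 and 6 are holes left by failures
```

The model is simplified (no concurrency), but it shows why the gaps appear and why they are harmless: the sequence only promises uniqueness, not density.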
Paul
[1]: http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
On Feb 18, 6:08 pm, Paul Boddie <p...@boddie.org.uk> wrote:
On 18 Feb, 13:36, django_user <amalt...@gmail.com> wrote:
How can I stop PostgreSQL from incrementing the primary key value, so
that even after many failed insert statements it gets the next id value?
"Auto-incrementing" columns, typically implemented using the serial
data type [1], employ sequences.
From the manual:
"To avoid blocking of concurrent transactions that obtain numbers from
the same sequence, a nextval operation is never rolled back; that is,
once a value has been fetched it is considered used, even if the
transaction that did the nextval later aborts. This means that aborted
transactions may leave unused "holes" in the sequence of assigned
values. setval operations are never rolled back, either."
http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html
In other words, to permit a decent level of concurrency, PostgreSQL
doesn't wait to see if a transaction succeeds with a value from a
sequence before updating the sequence. If you want to reset a sequence
so that it always uses the next unused value as determined by looking
at the table, I suppose you could do something like this:
select setval('mytable_id_seq', x) from (select max(id) as x from
mytable) as y;
But I doubt that you would want to do this too often in any system
with any reasonable level of concurrent access to the table or the
sequence concerned.
Paul
[1]: http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
So wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so the ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs. Should I use bigserial
instead?
Thanks
On 18/02/2008 13:14, pgsql_user wrote:
So wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so the ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs. Should I use bigserial
instead?
Well, that depends on your usage, so only you can answer that. According
to the docs, "serial" creates an integer column, which will give you
2147483647 values - how quickly will you use that lot up? If you think
you will run out, by all means use bigserial.
Ray.
---------------------------------------------------------------
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
rod@iol.ie
---------------------------------------------------------------
On 18/02/2008 17:46, Raymond O'Donnell wrote:
Well, that depends on your usage, so only you can answer that. According
to the docs, "serial" creates an integer column, which will give you
2147483647 values - how quickly will you use that lot up? If you think
you will run out, by all means use bigserial.
Actually, that isn't quite right - just looked at the docs for CREATE
SEQUENCE, and the default maximum value is 2^63-1.
Ray.
On Feb 18, 2008, at 9:56 AM, Raymond O'Donnell wrote:
On 18/02/2008 17:46, Raymond O'Donnell wrote:
Well, that depends on your usage, so only you can answer that.
According to the docs, "serial" creates an integer column, which
will give you 2147483647 values - how quickly will you use that lot
up? If you think you will run out, by all means use bigserial.
Actually, that isn't quite right - just looked at the docs for
CREATE SEQUENCE, and the default maximum value is 2^63-1.
That's the maximum value for the sequence itself, but a "serial" is
just an integer, so the 2^31-1 limit before wraparound still applies.
Cheers,
Steve
On Feb 18, 2008 12:16 PM, Steve Atkins <steve@blighty.com> wrote:
On Feb 18, 2008, at 9:56 AM, Raymond O'Donnell wrote:
On 18/02/2008 17:46, Raymond O'Donnell wrote:
Well, that depends on your usage, so only you can answer that.
According to the docs, "serial" creates an integer column, which
will give you 2147483647 values - how quickly will you use that lot
up? If you think you will run out, by all means use bigserial.
Actually, that isn't quite right - just looked at the docs for
CREATE SEQUENCE, and the default maximum value is 2^63-1.
That's the maximum value for the sequence itself, but a "serial" is
just an integer, so the 2^31-1 limit before wraparound still applies.
Look up bigserial.
On Mon, 2008-02-18 at 17:46 +0000, Raymond O'Donnell wrote:
On 18/02/2008 13:14, pgsql_user wrote:
So wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so the ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs. Should I use bigserial
instead?
Well, that depends on your usage, so only you can answer that. According
to the docs, "serial" creates an integer column, which will give you
2147483647 values - how quickly will you use that lot up? If you think
you will run out, by all means use bigserial.
Ray.
At one id per second:
http://www.google.com/search?q=2147483647+seconds+to+years&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a
2 147 483 647 seconds = 68.0511039 years
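That figure is easy to verify. A quick sketch (using a 365.25-day year, so the last decimals differ slightly from Google's tropical-year result):

```python
# How long a 32-bit serial lasts if one id is consumed per second.
SERIAL_MAX = 2**31 - 1                  # 2147483647, the integer column limit
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year

years = SERIAL_MAX / SECONDS_PER_YEAR
# roughly 68 years at one id per second
```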
On Feb 18, 1:14 pm, pgsql_user <amalt...@gmail.com> wrote:
On Feb 18, 6:08 pm, Paul Boddie <p...@boddie.org.uk> wrote:
On 18 Feb, 13:36, django_user <amalt...@gmail.com> wrote:
How can I stop PostgreSQL from incrementing the primary key value, so
that even after many failed insert statements it gets the next id value?
"Auto-incrementing" columns, typically implemented using the serial
data type [1], employ sequences.
From the manual:
"To avoid blocking of concurrent transactions that obtain numbers from
the same sequence, a nextval operation is never rolled back; that is,
once a value has been fetched it is considered used, even if the
transaction that did the nextval later aborts. This means that aborted
transactions may leave unused "holes" in the sequence of assigned
values. setval operations are never rolled back, either."
http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html
In other words, to permit a decent level of concurrency, PostgreSQL
doesn't wait to see if a transaction succeeds with a value from a
sequence before updating the sequence. If you want to reset a sequence
so that it always uses the next unused value as determined by looking
at the table, I suppose you could do something like this:
select setval('mytable_id_seq', x) from (select max(id) as x from
mytable) as y;
But I doubt that you would want to do this too often in any system
with any reasonable level of concurrent access to the table or the
sequence concerned.
Paul
[1]: http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
So wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so the ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs. Should I use bigserial
instead?
Thanks
In theory, yes, but the standard 4-byte integer can represent about 2
billion positive numbers, so even with a lot of failed inserts you're
probably not going to run out for years.
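To put rough numbers on that (a back-of-the-envelope sketch; the insert rate is a hypothetical assumption, not from the thread): even if every successful insert burns 51 sequence values (itself plus the 50 failures from the example above), a modest workload still takes over a century to exhaust a plain serial.

```python
# Back-of-the-envelope estimate of when a 4-byte serial runs out.
SERIAL_MAX = 2**31 - 1      # limit of a plain serial (integer) column
IDS_PER_SUCCESS = 51        # 1 success + 50 failed inserts, per the example
SUCCESSES_PER_DAY = 1000    # assumed workload (hypothetical)

ids_per_day = IDS_PER_SUCCESS * SUCCESSES_PER_DAY
days_to_exhaust = SERIAL_MAX / ids_per_day
years = days_to_exhaust / 365.25
# on the order of 115 years at this rate
```

At higher rates the conclusion flips quickly, which is the point of "that depends on your usage": at 1000 successful inserts per *second* with the same failure ratio, the same column lasts only weeks, and bigserial becomes the sensible choice.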
django_user wrote:
How can stop postgresql from incrementing the primary key value, so
that even after many failed insert statements it get the next id val.
...
So wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so the ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs. Should I use bigserial
instead?
Gordon wrote:
In theory, yes. but the standard 4 byte integer can represent about 2
billion positive numbers so even with a lot of failed inserts you're
probably not going to run out for years.
First of all, sequenced keys are an artificial (surrogate) key. They should
carry no meaning whatsoever to the data model or business logic. If they do
carry meaning, then your code should manage the values rather than using an
automatic sequence.
Also, if you (after careful review of your analysis) still have restrictions
that are not part of the auto-generated sequence mechanism, such as leaving no
gaps in the sequence, you should implement your own sequence instead of
wishing the automatic mechanism were different from what it is.
Chances are good that the auto-genned sequence will work for you, if you think
about it. If not, would you share why you anticipate that gaps will cause
trouble for you?
--
Lew