BUG #3811: Getting multiple values from a sequence generator
The following bug has been logged online:
Bug reference: 3811
Logged by: Adriaan van Os
Email address: postgres@microbizz.nl
PostgreSQL version: 8.2.5
Operating system: Mac OS X 10.5
Description: Getting multiple values from a sequence generator
Details:
The following has been observed in Postgres 8.2.5
1. LOCK TABLE doesn't work on a sequence
2. nextval will happily ignore ISOLATION LEVEL SERIALIZABLE of concurrent
transactions
3. nextval doesn't have an optional "increase" parameter (the increase is
always one).
Therefore, there is no way (I know of) to prevent a concurrent nextval
between a nextval and a setval call. Consequently, it is not possible to
create an atomic operation that increases nextval by a value N > 1.
I suggest an optional "increase" parameter to the nextval function. This can
make a certain kind of bulk operation and bulk import much more efficient.
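For illustration, the non-atomic pattern the report is describing might look like the sketch below; the sequence name is hypothetical, and nothing stops a concurrent nextval from slipping in between the two statements.

  -- Reserve a block of 1000 values the only way currently possible
  -- (not atomic: a concurrent nextval can land inside the range).
  SELECT nextval('bulk_seq');              -- suppose this returns 42
  SELECT setval('bulk_seq', 42 + 999);     -- try to claim 42..1041
  -- The requested feature would be roughly:
  -- SELECT nextval('bulk_seq', 1000);     -- hypothetical, not in 8.2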
On Dec 9, 2007, at 9:24 , Adriaan van Os wrote:
1. LOCK TABLE doesn't work on a sequence
2. nextval will happily ignore ISOLATION LEVEL SERIALIZABLE of concurrent
transactions
Nor should it. Sequences are *by design* non-lockable and non-blocking.
http://www.postgresql.org/docs/faqs.FAQ.html#item4.11.4
http://www.postgresql.org/docs/8.2/static/functions-sequence.html
3. nextval doesn't have an optional "increase" parameter (the increase is
always one).
Not true. Please read the documentation at
http://www.postgresql.org/docs/8.2/static/sql-createsequence.html
http://www.postgresql.org/docs/8.2/static/sql-altersequence.html
In particular, the INCREMENT BY option (and perhaps CACHE).
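For reference, the existing mechanism Michael is pointing to looks like this (the sequence name is illustrative); note that it changes the step for every caller, not just the current session:

  CREATE SEQUENCE bulk_seq;
  ALTER SEQUENCE bulk_seq INCREMENT BY 1000;  -- every nextval() now advances by 1000
  SELECT nextval('bulk_seq');
  ALTER SEQUENCE bulk_seq INCREMENT BY 1;     -- restore the normal step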
Michael Glaesemann
grzm seespotcode net
On Sun, 2007-12-09 at 09:24 +0000, Adriaan van Os wrote:
The following bug has been logged online:
Bug reference: 3811
Logged by: Adriaan van Os
Email address: postgres@microbizz.nl
PostgreSQL version: 8.2.5
Operating system: Mac OS X 10.5
Description: Getting multiple values from a sequence generator
Details: The following has been observed in Postgres 8.2.5
1. LOCK TABLE doesn't work on a sequence
2. nextval will happily ignore ISOLATION LEVEL SERIALIZABLE of concurrent
transactions
3. nextval doesn't have an optional "increase" parameter (the increase is
always one).
Therefore, there is no way (I know of) to prevent a concurrent nextval
between a nextval and a setval call. Consequently, it is not possible to
create an atomic operation that increases nextval by a value N > 1.
I suggest an optional "increase" parameter to the nextval function. This can
make a certain kind of bulk operation and bulk import much more efficient.
ALTER SEQUENCE blah INCREMENT BY val;
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
"Michael Glaesemann" <grzm@seespotcode.net> writes:
On Dec 9, 2007, at 9:24 , Adriaan van Os wrote:
3. nextval doesn't have an optional "increase" parameter (the increase is
always one).
Not true. Please read the documentation at
http://www.postgresql.org/docs/8.2/static/sql-createsequence.html
http://www.postgresql.org/docs/8.2/static/sql-altersequence.html
In particular, the INCREMENT BY option (and perhaps CACHE).
I think he's looking for an option to increase a sequence which normally
increments by 1 by a larger number for a single transaction. You would want to
do this if you were doing an exceptional bulk operation. If you set the
"increment by" instead, then any other transaction that happens to call
nextval while you've modified it will waste N sequence numbers.
Seems like a reasonable feature request. But I do wonder if the OP has
actually tried just incrementing it one by one for each of the records being
inserted. Incrementing sequences is pretty damn quick and I doubt it would
actually be a bottleneck.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
"Adriaan van Os" <postgres@microbizz.nl> writes:
Right, I want to use it with a bulk operation, say importing a million records
with COPY. Calling nextval one million times looks to me like an enormous waste
of resources. Suppose you are on an ADSL line: it will cost one million times
the ping time of the ADSL line (say 10 milliseconds per call). Well OK, one
could write a server function that does this, but then the one million result
values must be transported back to the client, because they are not guaranteed
to be contiguous. Unneeded complexity compared to a simple nextval increment
parameter.
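A rough sketch of the server-side workaround Adriaan mentions and dismisses; the function name is made up, and the client still has to fetch every returned value because the values need not be contiguous:

  CREATE FUNCTION reserve_seq_values(seq regclass, n integer)
  RETURNS SETOF bigint AS $$
  BEGIN
      FOR i IN 1..n LOOP
          RETURN NEXT nextval(seq);   -- one value per iteration
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  SELECT * FROM reserve_seq_values('bulk_seq', 1000000);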
The usual way to use nextval() is to use it on the server as an expression in
an INSERT or DEFAULT. If you're using COPY and don't have a column default set
up then, hm, I guess you're kind of stuck. That would make a good use case for
a one-time nextval(increment) or something like that.
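The usual pattern Gregory describes, sketched with illustrative names: give the id column a default drawn from the sequence, so COPY never needs per-row nextval() round trips from the client.

  CREATE SEQUENCE footable_id_seq;
  CREATE TABLE footable (
      id      bigint DEFAULT nextval('footable_id_seq') PRIMARY KEY,
      payload text
  );
  -- COPY supplies only the non-default columns; ids are assigned server-side.
  COPY footable (payload) FROM '/tmp/data.txt';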
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
On Sun, Dec 09, 2007 at 03:32:17PM +0000, Simon Riggs wrote:
ALTER SEQUENCE blah INCREMENT BY val;
this has the sideeffect that all concurrent nextvals() will also
increment by val, which is not always acceptable.
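To spell out the side effect depesz is describing (the session interleaving is hypothetical): while one session has raised the increment, every other session's nextval() jumps by the same amount.

  -- Session A:
  ALTER SEQUENCE bulk_seq INCREMENT BY 1000;
  SELECT nextval('bulk_seq');   -- e.g. 1001
  -- Session B, meanwhile:
  SELECT nextval('bulk_seq');   -- e.g. 2001, a gap of 1000 values is burned
  -- Session A:
  ALTER SEQUENCE bulk_seq INCREMENT BY 1;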
depesz
--
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA. here's my CV!" :)
http://www.depesz.com/ - a blog for you (and my CV)
Hi,
Right, I want to use it with a bulk operation, say importing a million records
with COPY. Calling nextval one million times looks to me like an enormous waste
of resources. Suppose you are on an ADSL line: it will cost one million times
the ping time of the ADSL line (say 10 milliseconds per call). Well OK, one
could write a server function that does this, but then the one million result
values must be transported back to the client, because they are not guaranteed
to be contiguous. Unneeded complexity compared to a simple nextval increment
parameter.
The usual way to use nextval() is to use it on the server as an expression in
an INSERT or DEFAULT. If you're using COPY and don't have a column default set
up then, hm, I guess you're kind of stuck. That would make a good use case for
a one-time nextval(increment) or something like that.
Coincidentally, I very briefly discussed supporting expressions during loads
with COPY FROM with Heikki (offline) a while back. From the above mail
exchanges, it does appear that adding this kind of functionality would be
useful for bulk imports into tables using COPY.
Heikki's initial suggestion was as follows:
COPY <table> FROM <file> USING <query>
Where query could be any SELECT query, executed once per row using the
values from the input data file. For example:
COPY footable (strcol, strcollen, moredata) FROM <file> USING SELECT $1,
length($1), $2;
The SQL expressions could refer to the columns being read, or could be
user-defined procedures, built-in functions, etc. These expressions would need
to be executed per row read from the input data file to form a new set of
values[], nulls[] entries before forming the corresponding tuple entry.
I think the above would be a very useful enhancement to COPY. The syntax and
other details mentioned above are of course subject to discussion and
approval on the list.
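As a rough illustration of how the proposed (and not implemented) syntax could cover the original sequence use case, assigning ids server-side per row:

  -- Hypothetical, following Heikki's sketch above:
  COPY footable (id, payload)
      FROM '/tmp/data.txt'
      USING SELECT nextval('footable_id_seq'), $1;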
Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com
On Mon, 2007-12-10 at 12:31 +0100, hubert depesz lubaczewski wrote:
On Sun, Dec 09, 2007 at 03:32:17PM +0000, Simon Riggs wrote:
ALTER SEQUENCE blah INCREMENT BY val;
this has the sideeffect that all concurrent nextvals() will also
increment by val, which is not always acceptable.
So this is a feature proposal, not a bug?
Sounds interesting but needs to be on pgsql-hackers, please.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
"NikhilS" <nikkhils@gmail.com> writes:
Coincidentally, I very briefly discussed supporting expressions during loads
with COPY FROM with Heikki (offline) a while back. From the above mail
exchanges, it does appear that adding this kind of functionality would be
useful for bulk imports into tables using COPY.
Heikki's initial suggestion was as follows:
COPY <table> FROM <file> USING <query>
Where query could be any SELECT query, executed once per row using the
values from the input data file. For example:
Another direction to head would be to take away COPY's special logic to insert
into tables and instead have something like:
COPY FROM <file> USING <query>
where <query> is an *INSERT* statement. Or for that matter a DELETE or an
UPDATE. It would prepare the query then execute it once per line read from the
streamed copy data.
It would be much more general but perhaps harder to optimize to the degree
our current COPY can be optimized.
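A sketch of what Gregory's alternative might look like; none of this syntax exists, and the table is the same illustrative one as above:

  COPY FROM '/tmp/data.txt'
      USING INSERT INTO footable (id, payload)
            VALUES (nextval('footable_id_seq'), $1);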
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
On Fri, 2007-12-14 at 10:47 +0100, Adriaan van Os wrote:
Simon Riggs wrote:
On Mon, 2007-12-10 at 12:31 +0100, hubert depesz lubaczewski wrote:
On Sun, Dec 09, 2007 at 03:32:17PM +0000, Simon Riggs wrote:
ALTER SEQUENCE blah INCREMENT BY val;
this has the sideeffect that all concurrent nextvals() will also
increment by val, which is not always acceptable.
So this is a feature proposal, not a bug?
Sounds interesting but needs to be on pgsql-hackers, please.
I posted a message to pgsql-hackers three times but it gets stalled (and the owner of the list
doesn't reply).
Somebody will need to join pgsql-hackers. Other mail will be reflected
because of spam.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
Simon Riggs wrote:
On Fri, 2007-12-14 at 10:47 +0100, Adriaan van Os wrote:
Simon Riggs wrote:
Sounds interesting but needs to be on pgsql-hackers, please.
I posted a message to pgsql-hackers three times but it gets stalled (and the owner of the list
doesn't reply).
Somebody will need to join pgsql-hackers. Other mail will be reflected
because of spam.
Even when it gets stalled, moderators (of which I am one) approve the
message if it's not spam. I haven't received anything from Adriaan as
far as I can remember, leading to the idea that his mail is being lost
for other reasons (because it's considered spam by some filter
perhaps?).
--
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"Las cosas son buenas o malas segun las hace nuestra opini�n" (Lisias)