Equivalent for AUTOINCREMENT?
Hello,
I have a table with a serial number which should be increased by each
INSERT. I know I can use max() to get the highest number, but how can I
use it in an INSERT statement?
There was a message some months ago on this list describing it, but I
cannot find that message anymore...
Thanks, Greetings and nice Day/Evening
Michelle Konzack
Systemadministrator
24V Electronic Engineer
Tamay Dogan Network
Debian GNU/Linux Consultant
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
+49/177/9351947 50, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
# create table foo( a SERIAL );
NOTICE: CREATE TABLE will create implicit sequence "foo_a_seq" for serial
column "foo.a"
CREATE TABLE
# \d+ foo
Table "public.foo"
Column | Type | Modifiers | Description
--------+---------+-------------------------------------------------+-------------
a | integer | not null default nextval('foo_a_seq'::regclass) |
Has OIDs: no
AUTOINCREMENT has so many problems that you'll soon start to love sequences
so much, you'll start to hate MySQL's childish approach to problem solving
:)
Also make sure you read carefully:
http://www.postgresql.org/docs/8.3/static/sql-createsequence.html
On Fri, Oct 31, 2008 at 03:30:44AM +0100, Michelle Konzack wrote:
I have a table with a serial number which should be increased by each
INSERT. I know I can use max() to get the highest number, but how can I
use it in an INSERT statement?
Just don't mention the column. For example, say I had the following table:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT
);
And I wanted to insert a new user, I'd do:
INSERT INTO users (name) VALUES ('Sam Mason');
If I wanted to refer back to the identifier I'd just used, there are a
couple of options depending on what you want to do. If it's just your
code that wants to know, use the RETURNING clause:
INSERT INTO users (name) VALUES ('Sam Mason')
RETURNING (id);
If you want to use the value inside more SQL code and don't want more
round trips between your code and the database you can do:
BEGIN;
INSERT INTO users (name) VALUES ('Sam Mason');
INSERT INTO sessions (userid, startdate)
VALUES (currval('users_id_seq'), now());
SELECT currval('users_id_seq');
COMMIT;
The "users_id_seq" here identifies the sequence that was created when
the table was created; if you're not sure what this is you can use the
pg_get_serial_sequence() function[1]. The SELECT statement at the end
will cause the database to return the generated value back to you so
it can be used elsewhere. The reason you have to name the sequence
explicitly is that, because of triggers and other complications,
multiple sequences can get involved when you'd only expect one. Because
of this, if there were just a simple function like MySQL's
LAST_INSERT_ID(), Postgres wouldn't know which sequence you actually
wanted to refer to.
Sam
[1]: http://www.postgresql.org/docs/current/static/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE
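To make the lookup concrete, here is a small sketch reusing the users table from above; pg_get_serial_sequence() can be fed straight to currval(), so the sequence name never has to be hard-coded:

```sql
-- Look up the sequence backing the serial column "users.id":
SELECT pg_get_serial_sequence('users', 'id');
-- returns a name such as "public.users_id_seq"

-- After an INSERT in the same session, fetch the generated id
-- without spelling out the sequence name:
SELECT currval(pg_get_serial_sequence('users', 'id'));
```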
Pardon me, I didn't read the post - just judged it by the subject really :)
You can also use the keyword DEFAULT, as in: insert into foo(a) values(default);
But that's the whole point of DEFAULT in the CREATE TABLE statement: if you
omit that column, it will be set to its default value.
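As a small sketch against the foo table from the session above, all three of these inserts produce the next sequence value for "a":

```sql
INSERT INTO foo (a) VALUES (DEFAULT);               -- explicit DEFAULT keyword
INSERT INTO foo DEFAULT VALUES;                     -- every column takes its default
INSERT INTO foo (a) VALUES (nextval('foo_a_seq'));  -- call the sequence directly
```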
Michelle Konzack wrote:
I have a table with a serial number which should be increased by each
INSERT. I know I can use max() to get the highest number, but how can I
use it in an INSERT statement?
There was a message some months ago on this list describing it, but I
cannot find that message anymore...
You want to use 'SERIAL' or 'BIGSERIAL' data type. Then, for your
insert statement, use the 'RETURNING' clause like this:
CREATE TABLE mytable (
mypkey SERIAL NOT NULL,
somecol VARCHAR(200)
);
INSERT INTO mytable (somecol)
VALUES ('this is a test')
RETURNING mypkey;
You'll get your insert statement to return the integer value generated
by the SERIAL sequence.
These features are way more powerful than AUTO_INCREMENT.
-- Dante
--
----------
D. Dante Lorenso
dante@lorenso.com
Hello Nikolas,
Am 2008-10-31 09:44:50, schrieb Nikolas Everett:
I think you want a sequence. Give the serial number the type bigserial or
serial. See
http://www.postgresql.org/docs/current/static/functions-sequence.html for
more.
OK, that's cool... I have found an example in "sql-createsequence.html":
CREATE SEQUENCE serial START 1;
INSERT INTO distributors VALUES (nextval('serial'), 'nothing');
But there is one thing that is not clear:
Do I need to create a SEQUENCE for each table, or do I need only ONE
which I can use independently on different tables?
Thanks, Greetings and nice Day/Evening
Michelle Konzack
Hello Grzegorz,
On 2008-10-31 13:39:46, Grzegorz Jaśkiewicz wrote:
AUTOINCREMENT has so many problems that you'll soon start to love sequences
so much, you'll start to hate MySQL's childish approach to problem solving
:)
OK, you hit me, I am trying to convert a MySQL schema to PostgreSQL...
Really, I hate programs which rely on ONE database only and cannot
easily be adapted to use another one...
I have used PostgreSQL since 1999, and for some time I have had to
install MySQL and SQLite because programs suck. There is NO separate
definition for the database stuff, so I have to walk through the PHP
scripts to find it and try to put it into a separate file... which can
then be adapted to any database...
Note: Since I am hit by such problems, my own PHP scripts use an
external database definition which can easily be adapted, and of
course I try to avoid database-specific functions...
MySQL has hit me more than once.
Thanks, Greetings and nice Day/Evening
Michelle Konzack
Michelle Konzack wrote:
Hello Grzegorz,
On 2008-10-31 13:39:46, Grzegorz Jaśkiewicz wrote:
AUTOINCREMENT has so many problems that you'll soon start to love sequences
so much, you'll start to hate MySQL's childish approach to problem solving
:)
OK, you hit me, I am trying to convert a MySQL schema to PostgreSQL...
Really, I hate programs which rely on ONE database only and cannot
easily be adapted to use another one...
There are times when this can be appropriate, mostly when your app is
complex enough that you have to worry about a lot more than SQL dialect
differences.
For example, if you're thinking about concurrency issues you'll find
differences between databases in transaction isolation, visibility,
locking, handling of deadlocks, etc. You'll potentially be able to use
the database much more efficiently, reliably, and smoothly if you design
to one particular RDBMS's concurrency features and behaviour, rather
than trying to make it universal. In fact, making it universal may end
up being the same thing as serializing absolutely everything. This isn't
generally something you can just hide behind a database abstraction
layer unless you're prepared for miserable performance and ugly, ugly
ways of doing things.
If you want to impose strong data integrity checking as part of your
schema, and handle violations of those checks cleanly in your
application, then you'll probably be doing database specific things
there too.
Use of stored procedures / functions or updateable views to minimize
round trips, improve access control, provide a consistent and stable
public interface for the database, etc will usually force the use of
database specific features. At least in this case the app interface
should be similar enough that you can probably abstract it.
Sometimes there are also database features that're just so compelling
that you'll save yourself vast amounts of development time (and thus
produce a better app due to spending that time on other things) by using
them. I've made use of PostgreSQL's advisory locks to solve problems
that'd otherwise require inter-client communication by side channels or
the use of an application server, for example.
The main app I'm working on at present (not the awful Access based one
I'm having to do) would probably be portable to Oracle with a bit of
work. Porting it to MySQL, even new versions, or worse to SQLite would
be absurd to even try. The benefit of using powerful database features
and designing around MVCC has been significantly quicker development
than would've been possible had I been forced to attempt to be
database-agnostic, as well as very strong data integrity enforcement,
good error handling & recovery, etc.
So ... targeting a specific database isn't all bad, so long as you think
carefully about it, understand the risks vs benefits, and don't use
non-portable or database-specific features just because you can. It
helps if the database you've targeted is highly portable (to avoid
platform lock-in), open source (so there's little risk of vendor
collapse or massive price hikes), and has a number of support options
out there if you need them. That's a large part of why I decided to
target PostgreSQL specifically, though the fact that it's powerful,
stable, fast and has a great community also made a big difference.
--
Craig Ringer
On Sat, Nov 01, 2008 at 02:24:37PM +0100, Michelle Konzack wrote:
Do I need to create a SEQUENCE for each table, or do I need only ONE
which I can use independently on different tables?
If you just create a bunch of tables with SERIAL or BIGSERIAL columns,
it will create one sequence for each column. But you can make a set of
such columns use the same sequence if you want. SERIAL and BIGSERIAL are
really just "syntactic sugar" which create a sequence and set the
column's default value to the next value in the sequence, like this:
jtolley=# create table a (id serial);
NOTICE: CREATE TABLE will create implicit sequence "a_id_seq" for
serial column "a.id"
CREATE TABLE
jtolley=# \d a
Table "public.a"
Column | Type | Modifiers
--------+---------+------------------------------------------------
id | integer | not null default nextval('a_id_seq'::regclass)
If I need a new table or column using the same sequence, I just do this:
jtolley=# create table b (q integer not null default
nextval('a_id_seq'));
CREATE TABLE
jtolley=# \d b
Table "public.b"
Column | Type | Modifiers
--------+---------+------------------------------------------------
q | integer | not null default nextval('a_id_seq'::regclass)
- Josh / eggyknap
Michelle Konzack wrote:
I think you want a sequence. Give the serial number the type bigserial
or serial. See
http://www.postgresql.org/docs/current/static/functions-sequence.html
for more.
OK, that's cool... I have found an example in "sql-createsequence.html":
CREATE SEQUENCE serial START 1;
INSERT INTO distributors VALUES (nextval('serial'), 'nothing');
But there is one thing that is not clear:
Do I need to create a SEQUENCE for each table, or do I need only ONE
which I can use independently on different tables?
You can use one sequence for many tables.
If you follow the advice by Nikolas Everett and use a "serial" column,
that would create one dedicated sequence per serial column.
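A minimal sketch of the shared-sequence approach (table and sequence names here are made up for illustration):

```sql
CREATE SEQUENCE global_id_seq;

CREATE TABLE customers (
    id   integer NOT NULL DEFAULT nextval('global_id_seq'),
    name text
);

CREATE TABLE suppliers (
    id   integer NOT NULL DEFAULT nextval('global_id_seq'),
    name text
);

-- Both tables draw from the same sequence, so the generated ids
-- are unique across the two tables:
INSERT INTO customers (name) VALUES ('Acme');
INSERT INTO suppliers (name) VALUES ('Widgets');
```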
Yours,
Laurenz Albe
Hi again,
On 2008-11-05 20:13:40, Craig Ringer wrote:
Michelle Konzack wrote:
OK, you hit me, I am trying to convert a MySQL schema to PostgreSQL...
OK, at the same time I am trying to make programs like osCommerce
usable with PostgreSQL...
Really, I hate programs which rely on ONE database only and cannot
easily be adapted to use another one...
For example, if you're thinking about concurrency issues you'll find
differences between databases in transaction isolation, visibility,
locking, handling of deadlocks, etc. You'll potentially be able to use
the database much more efficiently, reliably, and smoothly if you design
to one particular RDBMS's concurrency features and behaviour, rather
than trying to make it universal. In fact, making it universal may end
up being the same thing as serializing absolutely everything. This isn't
generally something you can just hide behind a database abstraction
layer unless you're prepared for miserable performance and ugly, ugly
ways of doing things.
My major problem is that I have been using PostgreSQL since, I think,
6.4 (that was March 1999), my databases are between 20 MBytes and
several TBytes, and some of my hosting providers (I have 8 worldwide)
use ONLY MySQL and won't install PostgreSQL.
And no, I do not really use very high-complexity database operations
which could not be ported to other databases.
The ONLY real option would be to go to a hosting provider with VERY
good and reliable Internet connectivity, install my WHOLE PostgreSQL
database there, and let my other websites access it over the Internet...
This would solve all of my MySQL/PostgreSQL problems...
But I have not found a single hosting provider which is willing to host
a couple of databases for which I need 6 TByte of diskspace...
Even a smaller one (used by my own website and online store) which will
have around 1-2 GByte already gives me trouble. Also, I have data from
over 140,000 worldwide customers in it.
A dedicated root server with 300 GByte SAS in RAID-1 without hotfix
already costs 150 Euro per month. I have at home a SCSI dino of fifteen
300 GB drives (-> RAID-5 with 3600 GByte), and the most inexpensive
option would be a 19"/42U closet (at Hetzner in Nürnberg), which then
costs 99€/month plus traffic and electricity.
Can you recommend hosting the database on a separate machine on the
Internet at another ISP?
If you want to impose strong data integrity checking as part of your
schema, and handle violations of those checks cleanly in your
application, then you'll probably be doing database specific things
there too.
This is why I asked in another thread on <lists.php.net> how to make
an "API" for my programs which can then be adapted to the specific DB.
Since I have no need for complex operations, I have created PHP
includes like
/usr/share/tdphp-vserver/01_database_pgsql.inc
/usr/share/tdphp-vserver/01_database_mysql.inc
/usr/share/tdphp-vserver/01_database_oracle.inc
which have the necessary functions defined. However, not all functions
are used universally in my PHP5 scripts, which means that I sometimes
have 4 or 5 different UPDATE functions...
...and of course I have to comment them, since otherwise in 4 weeks I
would no longer know WHAT I have done and WHY...
However, these includes can then be adapted to the database-specific
functions, but right now it is hell...
Sometimes there are also database features that're just so compelling
that you'll save yourself vast amounts of development time (and thus
produce a better app due to spending that time on other things) by using
them. I've made use of PostgreSQL's advisory locks to solve problems
that'd otherwise require inter-client communication by side channels or
the use of an application server, for example.
I know WHY I have used PostgreSQL for 9 1/2 years...
The main app I'm working on at present (not the awful Access based one
I'm having to do) would probably be portable to Oracle with a bit of
work. Porting it to MySQL, even new versions, or worse to SQLite would
be absurd to even try.
My German hosting provider is using MySQL 5 and I have MANY problems
adapting my own online store to it... With PostgreSQL 8.0 to 8.3 it was
not a problem... But since my hosting provider is a small one, maybe I
can convince him to do something. Some months ago he told me that for
the 15 Euro/month I have 2000 MByte of diskspace, and if I do not
install a website I can use the entire 2000 MByte for the database...
<grmpf>It is MySQL</grmpf>
I am thinking of a small dedicated database server with 4x SAS
147 GByte and an AMD Opteron with 2 GByte of RAM.
The benefit of using powerful database features
and designing around MVCC has been significantly quicker development
than would've been possible had I been forced to attempt to be
database-agnostic, as well as very strong data integrity enforcement,
good error handling & recovery, etc.
Again: I know WHY I have used PostgreSQL for 9 1/2 years... ;-)
So ... targeting a specific database isn't all bad, so long as you think
Right, but there are MANY applications which are really simple, where
the database access could easily be ported, but they HARDCODE the
database functions inside the applications instead of exporting them to
an INCLUDE, for example, and of course without any comments...
carefully about it, understand the risks vs benefits, and don't use
non-portable or database-specific features just because you can. It
This is, what I try to do in my applications...
helps if the database you've targeted is highly portable (to avoid
platform lock-in), open source (so there's little risk of vendor
collapse or massive price hikes), and has a number of support options
out there if you need them. That's a large part of why I decided to
target PostgreSQL specifically, though the fact that it's powerful,
stable, fast and has a great community also made a big difference.
Yeah, people like you (and of course others) are responsible for this
great community here.
Thanks, Greetings and nice Day/Evening
Michelle Konzack
Michelle Konzack wrote:
The ONLY real option would be to go to a hosting provider with VERY
good and reliable Internet connectivity, install my WHOLE PostgreSQL
database there, and let my other websites access it over the Internet...
... making them subject to problems with transit between the host
handling PostgreSQL and the host handling the rest of the work. You need
every hop of the main routes between the Pg host and all your other
providers to work reliably, with stable and reasonably low latency, all
the time. This isn't very likely.
You'll also have big latency problems, and if your design doesn't use
any more powerful database features you probably do LOTS of round trips,
lots of read-modify-writes, etc, and you'll suffer a horrible impact on
performance.
But I have not found a single hosting provider which is willing to host
a couple of databases for which I need 6 TByte of diskspace...
You would probably need dedicated co-located hosting where you provide
the machine and they provide rack space, stable power, an Ethernet port
or two, access to a console server with remote power cycle capability,
and hopefully access to some backup storage.
You'd have a hard time fitting 6TB of fast, reliable storage even in a
5RU enclosure (say 24 300GB SAS disks in a high-density storage server
chassis) so you'd probably need an external FC or SAS storage enclosure,
probably as well as internal storage.
$LOTS.
On the other hand, you expect $LOTS if you need to store 6TB of data
with fast, reliable access to it.
At least with SAS becoming so standard you don't need to worry about
fibre channel anymore.
If you don't need all of your data to be accessible particularly
quickly, you can save megabucks by using a set of 1TB SATA disks for
your bigger, more rarely used stuff. If you don't have too many big
indexes you might be able to keep them on the SAS disks for faster access.
A dedicated root server with 300 GByte SAS in RAID-1 without hotfix
already costs 150 Euro per month.
Where is it hosted? At that price, I want one.
Can you recommend hosting the database on a separate machine on the
Internet at another ISP?
Personally, I wouldn't do it, because of the aforementioned problems
with latency and with unstable transit. I guess it depends on your
price/performance/reliability tradeoff requirements.
If you want to impose strong data integrity checking as part of your
schema, and handle violations of those checks cleanly in your
application, then you'll probably be doing database specific things
there too.
This is why I asked in another thread on <lists.php.net> how to make an
"API" for my programs which can then be adapted to the specific DB.
Since I have no need for complex operations...
If you host your database somewhere far enough away from the application
that you're seeing several-hundred-millisecond latencies then you WILL
need complex operations. You will need to do as much work as possible on
the database end, avoid read-modify-writes (but you should do that
anyway), use SQL or PL/PgSQL functions to batch operations together, etc.
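As a sketch of that batching idea (table, column, and function names here are hypothetical), a PL/pgSQL function can collapse a SELECT-compute-UPDATE-INSERT sequence of client round trips into a single call:

```sql
CREATE FUNCTION register_login(p_userid integer) RETURNS timestamptz AS $$
DECLARE
    t timestamptz := now();
BEGIN
    -- Everything below runs server-side in one round trip:
    UPDATE accounts
       SET login_count = login_count + 1,
           last_login  = t
     WHERE id = p_userid;
    INSERT INTO sessions (userid, startdate) VALUES (p_userid, t);
    RETURN t;
END;
$$ LANGUAGE plpgsql;

-- The client then issues a single statement:
SELECT register_login(42);
```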
So ... targeting a specific database isn't all bad, so long as you think
Right, but there are MANY applications which are really simple, where
the database access could easily be ported, but they HARDCODE the
database functions inside the applications instead of exporting them to
an INCLUDE, for example, and of course without any comments...
Yeah, I agree that's pretty objectionable. Whether it's bad because the
database access is simple (ie they're not using the database's ability
to do lots of the hard work for them) or because they've mixed it in
with the rest of the code I don't know.
I find I'll look at some application and see an "abstraction layer" that
results in the application doing things like:
x = get_value('field', 'table');
x ++;
set_value('field', 'table', x);
ie
SELECT field FROM table;
-- returns `42'
UPDATE table SET field = 43;
... which just makes me want to cry. Concurrency issues? Latency? Argh.
And that's a simple one; it gets WAY better once they come up with their
own procedural methods of doing what could be done with an UPDATE ...
FROM ... WHERE or the like.
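For contrast, the read-modify-write above done server-side is one atomic statement; keeping Craig's placeholder names (so treat this as schematic), plus a made-up UPDATE ... FROM example:

```sql
-- One atomic statement: no extra round trips, and no lost-update
-- race between concurrent clients ("table" is quoted because it is
-- a reserved word, kept only to match the placeholder names above):
UPDATE "table" SET field = field + 1;

-- The UPDATE ... FROM form for cross-table updates (invented names):
UPDATE orders o
   SET total = s.line_total
  FROM (SELECT order_id, sum(price) AS line_total
          FROM order_lines
         GROUP BY order_id) s
 WHERE o.id = s.order_id;
```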
That said, I'll confess to using Hibernate (a Java ORM) in large parts
of one of the apps I'm working on at present. It does a pretty good job,
and it's sane enough that you can bypass it and use SQL (via the JDBC APIs)
when you have work to do that it's not well suited for, such as bulk
updates or reporting queries.
--
Craig Ringer