Primary Key

Started by P. Dwayne Miller · almost 25 years ago · 43 messages · hackers
#1 P. Dwayne Miller
dmiller@espgroup.net

Given the following CREATE TABLE instructions...

1)
CREATE TABLE message
(
int4 msgid PRIMARY KEY NOT NULL,
text msgtext
);

2)
CREATE TABLE message
(
int4 msgid not null,
text msgtext,
PRIMARY KEY (msgid)
);

3)
CREATE TABLE message
(
int4 msgid not null,
text msgtext,
CONSTRAINT cons_001_pk PRIMARY KEY on (msgid)
);

The first two actually create a PRIMARY KEY on msgid. The third seems
to have a PRIMARY KEY on 'oid', not 'msgid', though it does create a
unique index on 'msgid'. One of the applications I'm using (Cold
Fusion) looks for the PRIMARY KEY and checks that I have included the
key column(s) in my data statement.

The first two work, the third does not. Cold Fusion reports that I did
not provide 'oid' as one of the data elements.

Cold Fusion is accessing the database using ODBC.
Database is Postgres v7.1.1 on Red Hat Linux 7.0

I'm not looking for a fix as I can create the table using the syntax
that gives the expected results, but just wanted to alert someone that
there is some inconsistency in the way a PRIMARY KEY is used or
designated.

BTW, I did not try the COLUMN CONSTRAINT syntax.

Thanks

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: P. Dwayne Miller (#1)
Re: Primary Key

"P. Dwayne Miller" <dmiller@espgroup.net> writes:

CREATE TABLE message
(
int4 msgid not null,
text msgtext,
CONSTRAINT cons_001_pk PRIMARY KEY on (msgid)
);

The first two actually create a PRIMARY KEY on msgid. The third seems
to have a PRIMARY KEY on 'oid', not 'msgid', though it does create a
unique index on 'msgid'.

After fixing the several obvious syntax errors, it works fine for me:

regression=# CREATE TABLE message
regression-# (
regression(# msgid int4 not null,
regression(# msgtext text,
regression(# CONSTRAINT cons_001_pk PRIMARY KEY (msgid)
regression(# );
NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'cons_001_pk' for table 'message'
CREATE
regression=# \d message
Table "message"
Attribute | Type | Modifier
-----------+---------+----------
msgid | integer | not null
msgtext | text |
Primary Key: cons_001_pk

regression=#

Is Cold Fusion perhaps doing strange things to the query behind your
back? None of those CREATE TABLE commands are legal SQL according
to my references.

regards, tom lane

#3 P. Dwayne Miller
dmiller@espgroup.net
In reply to: Tom Lane (#2)
Re: Primary Key

My bad on the syntax of all three. I used your syntax (which is what I had originally used) and
got the same results with the \d command that you show.

I'm only using Cold Fusion to read data from the resulting table, not create the table... and I
still get an error when I have created the primary key using the table constraint syntax. Cold
Fusion is reporting that the primary key has been defined for the column oid. Using the correct
syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as
msgid.

Thanks for your reply,
Dwayne

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

#4 Thomas Swan
tswan-lst@ics.olemiss.edu
In reply to: P. Dwayne Miller (#1)
Re: Primary Key

Tom Lane wrote:

Is Cold Fusion perhaps doing strange things to the query behind your
back? None of those CREATE TABLE commands are legal SQL according
to my references.

I've been using the syntax "PRIMARY KEY (column_name [, column_name])",
without the constraint name, and the "column_name type PRIMARY KEY"
syntax for some time now. I may be admitting to SQL heresy in
saying that; but that's the syntax I've seen in MySQL and in quite a
few SQL/database books.

AFAIK, it's a legal table creation statement.

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Swan (#4)
Re: Primary Key

Thomas Swan <tswan-lst@ics.olemiss.edu> writes:

AFAIK, it's a legal table creation statement.

The variant I showed is. The original one had an extraneous "ON" in the
PRIMARY KEY clause, and even more to the point, all the column
declarations had the column name and type name reversed. That's why I was
questioning the syntax ...
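Putting those fixes together, standard-conforming versions of all three original statements would look like the sketch below (each shown separately; only one would be run at a time). The key changes are column name before type, and no "ON":

```sql
-- 1) column constraint
CREATE TABLE message (
    msgid   int4 PRIMARY KEY NOT NULL,
    msgtext text
);

-- 2) unnamed table constraint
CREATE TABLE message (
    msgid   int4 NOT NULL,
    msgtext text,
    PRIMARY KEY (msgid)
);

-- 3) named table constraint (note: the implicit index is then
--    named 'cons_001_pk' rather than 'message_pkey')
CREATE TABLE message (
    msgid   int4 NOT NULL,
    msgtext text,
    CONSTRAINT cons_001_pk PRIMARY KEY (msgid)
);
```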

regards, tom lane

#6 Naomi Walker
nwalker@eldocomp.com
In reply to: P. Dwayne Miller (#3)
Backup and Recovery

I'm new to this list, and PostgreSQL, and could use some advice from you
experienced users. We are very interested in making PostgreSQL work for
our project, but it's missing one big feature that is absolutely necessary
for a true OLTP shop.

Even more important than uptime to us is to never put ourselves in a
position where we could lose data. I understand I can do a hot backup with
pg_dumpall. What we need on top of that is the ability to replay the
transaction logs against the previous database archive. Without such a
feature, even if I did a full backup a few times a day, we would be
vulnerable to losing hours of data (which would not be acceptable to our
users).

I can tell this has been designed to do exactly that, because it's really
close. What would be needed is a hook to write the logs to disk/tape when
they are full (and not overwrite them until they go elsewhere), and the
ability to actually play back the logs, from exactly the right place, tied to
a specific archive.

I'm sure this is something that would benefit all our lives. Other than
just hiring a consultant to do so, is there some way to make this
happen? Other than eliminating all the single points of failure in my
hardware, is there some other way to solve this problem?
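Until log replay exists, the closest available approach is the frequent-full-dump scheme mentioned above. A minimal sketch of one such backup cycle (directory layout and retention count are illustrative assumptions, and this still leaves a loss window equal to the dump interval):

```shell
#!/bin/sh
# Sketch: one hot-backup cycle using pg_dumpall, to be run from cron.
# backup_once DIR DUMP_CMD  -- writes a timestamped dump into DIR and
# prunes old dumps. DUMP_CMD is a parameter so the dump tool can be
# stubbed out; in real use it would be pg_dumpall.
backup_once() {
    dir=$1
    dump_cmd=$2
    stamp=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$dir"
    # pg_dumpall takes a consistent snapshot without stopping the server
    $dump_cmd > "$dir/dumpall_$stamp.sql" || return 1
    # keep only the seven most recent dumps (retention is an assumption)
    ls -1t "$dir"/dumpall_*.sql | tail -n +8 | xargs rm -f
    echo "$dir/dumpall_$stamp.sql"
}

# Real use (hypothetical path):
#   backup_once /var/backups/pgsql pg_dumpall
```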

Thanks,
Naomi
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100 ext 242

#7 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Tom Lane (#2)
Re: Re: Primary Key

"P. Dwayne Miller" wrote:

My bad on the syntax of all three. I used your syntax (which is what I had originally used) and
got the same results with the \d command that you show.

I'm only using Cold Fusion to read data from the resulting table, not create the table... and I
still get an error when I have created the primary key using the table constraint syntax. Cold
Fusion is reporting that the primary key has been defined for the column oid. Using the correct
syntax with the first two CREATE TABLE statements, Cold Fusion reports the primary key field as
msgid.

SQLPrimaryKeys() in the current psqlodbc driver doesn't report
a primary key unless its index is named tablename_pkey.
That seems to be the cause.
I'll change the implementation of SQLPrimaryKeys().
Dwayne, could you try the modified driver?

regards,
Hiroshi Inoue

#8 P. Dwayne Miller
dmiller@espgroup.net
In reply to: Hiroshi Inoue (#7)
Re: Primary Key

I can try it. Where do I get it?

My question would be: if SQLPrimaryKeys() only reports tablename_pkey, why does my front end
return oid as the primary key?

Thanks,
Dwayne


#9 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#7)
Re: Re: Primary Key

"P. Dwayne Miller" wrote:


I can try it. Where do I get it.

My question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end
return oid as the primary key?

Thanks,
Dwayne


#10 Hiroshi Inoue
Inoue@tpf.co.jp
In reply to: Hiroshi Inoue (#7)
Re: Re: Primary Key

"P. Dwayne Miller" wrote:

I can try it. Where do I get it.

I can send you the DLL, though I haven't tested it myself.
OK?

My question would be why, if SQLPrimaryKey() only reported tablename_pkey, then why does my front end
return oid as the primary key?

Do you have the FAKEOIDINDEX option turned on?

regards,
Hiroshi Inoue

#11 Matthew Kirkwood
matthew@hairy.beasts.org
In reply to: Naomi Walker (#6)
Re: Backup and Recovery

On Tue, 19 Jun 2001, Naomi Walker wrote:

Even more important that uptime to us, is to never put ourselves in a
position where we could lose data. I understand I can do a hot backup
with pg_dumpall. What we need on top of that is the ability to replay
the transaction logs against the previous database archive. Without
such a feature, even if I did a full backup a few times a day, we
would be vulnerable to losing hours of data (which would not be
acceptable to our users).

This is what I'd like too (though I'm not that bothered about
rolling forward from a dump if I can just do it by replaying
logs onto real datafiles).

I mentioned it a while ago:

http://fts.postgresql.org/db/mw/msg.html?mid=114397

but got no response.

You are aware that you can still lose up to (by default) 16Mb
worth of transactions in this scheme, I presume?

Matthew.

#12 P. Dwayne Miller
dmiller@espgroup.net
In reply to: Hiroshi Inoue (#10)
Re: Primary Key

Please send it. And yes, I do have the Fake OID Index turned on. Although I have no idea what it does.

Thanks,
Dwayne


#13 Ross J. Reedstrom
reedstrm@rice.edu
In reply to: P. Dwayne Miller (#3)
Re: Re: Primary Key

Hmm, you're using Cold Fusion, so that goes through the ODBC driver, which
picks up the 'primary key' by looking for an index named 'foo_pkey',
I think. Ah, here it is:

in interfaces/odbc/info.c:

sprintf(tables_query, "select ta.attname, ia.attnum"
" from pg_attribute ta, pg_attribute ia, pg_class c, pg_index i"
" where c.relname = '%s_pkey'"
" AND c.oid = i.indexrelid"
" AND ia.attrelid = i.indexrelid"
" AND ta.attrelid = i.indrelid"
" AND ta.attnum = i.indkey[ia.attnum-1]"
" order by ia.attnum", pktab);

So, don't name the primary key constraint, or name it 'something_pkey',
and you should be fine. Something is falling back to trying to use
oid if it can't find a primary key; I'm not sure whether that's inside the
ODBC driver or in Cold Fusion.
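For reference, a name-independent version of that lookup is possible: instead of matching on '%s_pkey', the query could test pg_index.indisprimary, which the system catalogs already provide. A sketch (not the driver's actual query), using the 'message' table from this thread:

```sql
-- Sketch: find the primary-key columns of 'message' without relying
-- on the implicit index being named 'message_pkey'.
SELECT ta.attname, ia.attnum
FROM pg_attribute ta, pg_attribute ia, pg_class c, pg_index i
WHERE c.relname = 'message'
  AND c.oid = i.indrelid
  AND i.indisprimary
  AND ia.attrelid = i.indexrelid
  AND ta.attrelid = i.indrelid
  AND ta.attnum = i.indkey[ia.attnum - 1]
ORDER BY ia.attnum;
```

This would report the key for all three CREATE TABLE variants, including the named-constraint form.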

Hmm, it seems we have other Access-specific hacks in the ODBC driver:

/*
* I have to hide the table owner from Access, otherwise it
* insists on referring to the table as 'owner.table'. (this
* is valid according to the ODBC SQL grammar, but Postgres
* won't support it.)
*
* set_tuplefield_string(&row->tuple[1], table_owner);
*/

I bet PgAdmin would like to have that info.

Ross


#14 Naomi Walker
nwalker@eldocomp.com
In reply to: Matthew Kirkwood (#11)
Re: Backup and Recovery

At 12:26 PM 6/20/01 +0100, Matthew Kirkwood wrote:

On Tue, 19 Jun 2001, Naomi Walker wrote:

Even more important that uptime to us, is to never put ourselves in a
position where we could lose data. I understand I can do a hot backup
with pg_dumpall. What we need on top of that is the ability to replay
the transaction logs against the previous database archive. Without
such a feature, even if I did a full backup a few times a day, we
would be vulnerable to losing hours of data (which would not be
acceptable to our users).

This is what I'd like too (though I'm not that bothered about
rolling forward from a dump if I can just do it by replaying
logs onto real datafiles).

I mentioned it a while ago:

http://fts.postgresql.org/db/mw/msg.html?mid=114397

but got no response.

Well, so now there are at least TWO of us....

We should start the thread again.

You are aware that you can still lose up to (by default) 16Mb
worth of transactions in this scheme, I presume?

I'm just starting with PostgreSQL, but I thought with fsync on this was
not the case. Is that not true, or what else did I miss?

Matthew.

--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100 ext 242

#15 Philip Warner
pjw@rhyme.com.au
In reply to: Naomi Walker (#14)
Re: Backup and Recovery

At 13:41 20/06/01 -0700, Naomi Walker wrote:

Well, so now there is at least TWO of us....

We should start the thread again.

WAL-based backup & recovery is something I have been trying to do in the
background, but unfortunately I have no time at the moment. I do plan to
get back to it as soon as I can.

----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/

#16 Matthew Kirkwood
matthew@hairy.beasts.org
In reply to: Naomi Walker (#14)
Re: Backup and Recovery

On Wed, 20 Jun 2001, Naomi Walker wrote:

You are aware that you can still lose up to (by default) 16Mb
worth of transactions in this scheme, I presume?

I'm just starting with Postgresql, but, I thought with fsync on this
was not the case. Is that not true or what else did I miss?

I suppose that it rather depends on how you expected to
move the logs over. My approach was to archive the redo
when PG is done with them and only then to roll them
forward.

If a catastrophe occurs, then I wouldn't be able to do
anything with a half-full log.

Our Oracle setups use redo logs of only 1Mb for this
reason, and it doesn't seem to hurt too much (though
Oracle's datafile formats seem a fair bit denser than
Postgres's).

Matthew.

#17 Nathan Myers
ncm@zembu.com
In reply to: Matthew Kirkwood (#16)
Re: Backup and Recovery

On Thu, Jun 21, 2001 at 11:01:29AM +0100, Matthew Kirkwood wrote:

On Wed, 20 Jun 2001, Naomi Walker wrote:

You are aware that you can still lose up to (by default) 16Mb
worth of transactions in this scheme, I presume?

I'm just starting with Postgresql, but, I thought with fsync on this
was not the case. Is that not true or what else did I miss?

I suppose that it rather depends on how you expected to
move the logs over. My approach was to archive the redo
when PG is done with them and only then to roll them
forward.

If a catastrophe occurs, then I wouldn't be able to do
anything with a half-full log.

Our Oracle setups use redo logs of only 1Mb for this
reason, and it doesn't seem to hurt too much (though
Oracle's datafile formats seem a fair bit denser than
Postgres's).

The above makes no sense to me. A hot recovery that discards some
random number of committed transactions is a poor sort of recovery.

Ms. Walker might be able to adapt one of the several replication
tools available for PG to do replayable logging, instead.

It seems to me that for any replication regime (symmetric or not,
synchronous or not, global or not), and also any hot-backup/recovery
approach, an update-log mechanism that produces a high-level
description of changes is essential. Using triggers to produce
such a log seems to me to be too slow and too dependent on finicky
administrative procedures.

IIUC, the regular WAL records are optimized for a different purpose:
speeding up normal operation. Also IIUC, the WAL cannot be applied
to a database reconstructed from a dump. If augmented to enable such
reconstruction, the WAL might be too bulky to serve well in that role;
it currently only needs to keep enough data to construct the current
database from a recent checkpoint, so compactness has not been
crucial. But there's much to be said for having just a single
synchronous log mechanism. A high-level log mixed into the WAL, to
be extracted asynchronously into a much more compact replay log, might
be the ideal compromise.

The same built-in high-level logging mechanism could make all the
various kinds of disaster prevention, disaster recovery, and load
sharing much easier to implement, because they all need much the
same thing.

Nathan Myers
ncm@zembu.com

#18 Matthew Kirkwood
matthew@hairy.beasts.org
In reply to: Nathan Myers (#17)
Re: Backup and Recovery

On Thu, 21 Jun 2001, Nathan Myers wrote:

I suppose that it rather depends on how you expected to
move the logs over. My approach was to archive the redo
when PG is done with them and only then to roll them
forward.

The above makes no sense to me. A hot recovery that discards some
random number of committed transactions is a poor sort of recovery.

Agreed. Nevertheless it's at least db_size - 1Mb better
than the current options.

Recovering the rest manually from log files is good enough
for us (indeed, much better than the potential loss of
performance or reliability from "real" replication).

If it horrifies you that much, think of it as 15-minutely
incremental backups.

Matthew.

#19 John Moore
nj7e@yahoo.com
In reply to: Matthew Kirkwood (#11)
Re: Backup and Recovery

matthew@hairy.beasts.org (Matthew Kirkwood) wrote:

On Tue, 19 Jun 2001, Naomi Walker wrote:

Even more important that uptime to us, is to never put ourselves in a
position where we could lose data. I understand I can do a hot backup
with pg_dumpall. What we need on top of that is the ability to replay
the transaction logs against the previous database archive. Without
such a feature, even if I did a full backup a few times a day, we
would be vulnerable to losing hours of data (which would not be
acceptable to our users).

This is what I'd like too (though I'm not that bothered about
rolling forward from a dump if I can just do it by replaying
logs onto real datafiles).

With stock PostgreSQL... how many committed transactions can one lose
on a simple system crash/reboot? With Oracle or Informix, the answer
is zero. Is that true with PostgreSQL in fsync mode? If not, does it
lose all in the log, or just those not yet written to the DB?

Thanks

John

#20 Rod Taylor
rbt@rbt.ca
In reply to: Matthew Kirkwood (#11)
Re: Re: Backup and Recovery

With stock PostgreSQL... how many committed transactions can one lose
on a simple system crash/reboot? With Oracle or Informix, the answer
is zero. Is that true with PostgreSQL in fsync mode? If not, does it
lose all in the log, or just those not yet written to the DB?

With WAL, the theory is that it will not lose a committed transaction.
Bugs have plagued previous versions (7.1.2 looks clean), and none of
them (Oracle, Informix, Postgres) can protect against coding errors in
certain cases, but against a general power failure it's fine.

This assumes adequate hardware too. Some hard drives claim to have
written when they haven't, among other things; Postgres itself
won't lose the information, but your hardware might do that silently.

#21 Mikheev, Vadim
vmikheev@SECTORBASE.COM
In reply to: Rod Taylor (#20)
#22 Nathan Myers
ncm@zembu.com
In reply to: John Moore (#19)
#23 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Nathan Myers (#22)
#24 Nathan Myers
ncm@zembu.com
In reply to: Zeugswetter Andreas SB (#23)
#25 Bruce Momjian
bruce@momjian.us
In reply to: Nathan Myers (#24)
#26 Bruce Momjian
bruce@momjian.us
In reply to: Nathan Myers (#24)
#27 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Bruce Momjian (#26)
#28 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Zeugswetter Andreas SB (#27)
#29 Bruce Momjian
bruce@momjian.us
In reply to: Zeugswetter Andreas SB (#28)
#30 Naomi Walker
nwalker@eldocomp.com
In reply to: Bruce Momjian (#25)
#31 Nathan Myers
ncm@zembu.com
In reply to: Zeugswetter Andreas SB (#28)
#32 Bruce Momjian
bruce@momjian.us
In reply to: Nathan Myers (#31)
#33 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#32)
#34 Nathan Myers
ncm@zembu.com
In reply to: Tom Lane (#33)
#35 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Nathan Myers (#34)
#36 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Zeugswetter Andreas SB (#35)
#37 Bruce Momjian
bruce@momjian.us
In reply to: Nathan Myers (#34)
#38 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Zeugswetter Andreas SB (#35)
#39 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Tom Lane (#38)
#40 Zeugswetter Andreas SB
ZeugswetterA@wien.spardat.at
In reply to: Zeugswetter Andreas SB (#39)
#41 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Zeugswetter Andreas SB (#39)
#42 Mikheev, Vadim
vmikheev@SECTORBASE.COM
In reply to: Tom Lane (#41)
#43 Nathan Myers
ncm@zembu.com
In reply to: Bruce Momjian (#37)