help please

Started by Dorward Villaruz · over 23 years ago · 5 messages · general
#1Dorward Villaruz
dorwardv@ntsp.nec.co.jp

I need to create a table with this property:

the fields are f1 integer, f2 integer, f3 integer defaulting to f1 + f2

Is this possible? How?

I have created something like this:

create table table1(a integer not null unique primary key, b integer not null, c integer not null default a + b)

thanks

#2frbn
frbn@efbs-seafrigo.fr
In reply to: Dorward Villaruz (#1)
Re: help please

Dorward Villaruz wrote:

i need to create a table with this property

the fields are f1 integer , f2 integer , f3 integer defaults to f1 + f2

You shouldn't store f1 + f2 at all, since you already store f1 and f2; compute it at query time instead.

I have created something like this

create table table1(a integer not null unique primary key, b integer not null, c not null default a + b)

You can't reference other columns (attributes) in a DEFAULT clause.

If you *really* want to do that, use a trigger on insert that fills f3 with f1 + f2.
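A sketch of that suggestion, with illustrative table and function names (a BEFORE INSERT variant that fills the column in place, so no follow-up UPDATE is needed; PL/pgSQL must be installed in the database, e.g. with createlang):

```sql
-- Illustrative sketch: fill c from a + b via a trigger.
CREATE TABLE table1 (
    a integer NOT NULL PRIMARY KEY,
    b integer NOT NULL,
    c integer NOT NULL
);

-- On 7.2 a trigger function returns OPAQUE;
-- newer releases spell this RETURNS trigger.
CREATE FUNCTION table1_fill_c() RETURNS OPAQUE AS '
BEGIN
    NEW.c := NEW.a + NEW.b;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER table1_fill_c_trig
    BEFORE INSERT ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_fill_c();
```

After this, INSERT INTO table1 (a, b) VALUES (1, 2); stores 3 in c without the client supplying it.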

#3Bob Parkinson
rwp@biome.ac.uk
In reply to: frbn (#2)
clog problem

I've got this:

FATAL 2: open of /usr/local/pgsql/data/pg_clog/02B6 failed: No such file
or directory
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

This was when the nightly vacuum was running.

I had this problem a few weeks ago and had to restore from a backup. What
is causing this? I've seen some references to clog problems relating to
a 7.2 beta, but haven't found any resolution.

Cheers,

Bob

Bob Parkinson
rwp@biome.ac.uk
------------------------------------------------------------------
Technical Manager: Biome http://biome.ac.uk/

Greenfield Medical Library,
Queens Medical Centre,
Nottingham. 0115 9249924 x 42059
------------------------------------------------------------------
We are stardust

#4Bob Parkinson
rwp@biome.ac.uk
In reply to: Bob Parkinson (#3)
Re: clog problem + version info

Doh, forgot the version info: PostgreSQL 7.2.1 on FreeBSD 4.3.

On Wed, 4 Sep 2002, Bob Parkinson wrote:

I've got this:

FATAL 2: open of /usr/local/pgsql/data/pg_clog/02B6 failed: No such file
or directory

[snip]



#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bob Parkinson (#3)
Re: clog problem

Bob Parkinson <rwp@biome.ac.uk> writes:

FATAL 2: open of /usr/local/pgsql/data/pg_clog/02B6 failed: No such file
or directory

The direct cause of this problem is a tuple containing a bogus
transaction ID number (evidently 0x2B6xxxxx for some xxxxx, which I
assume is not close to your really active transaction numbers --- what
filenames do exist in $PGDATA/pg_clog?).

The next question of course is how did it get that way? It's possible
that this is a symptom of hardware problems, or there could be a
software bug we need to identify and fix. But that would take a lot
more info than we have.

If you want to dig into it, the next step would be to identify where the
bad tuple is and then use pg_filedump or something similar to have a
look at the raw data.

If you just want to get rid of the bad data as expeditiously as
possible, I'd suggest (a) make a file 256K long containing all zeroes,
(b) temporarily install it as $PGDATA/pg_clog/02B6, (c) run VACUUM;
(d) remove the bogus 02B6 file again. However this will probably ruin
any chance of deducing what went wrong afterwards...
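In shell terms, the four steps might look like the sketch below. The paths are assumptions (Bob's data directory was /usr/local/pgsql/data); the sketch defaults PGDATA to a scratch directory so it can be dry-run without touching a live cluster.

```shell
#!/bin/sh
# Sketch of the four-step workaround above. Set PGDATA to your real
# data directory before running this against an actual installation.
PGDATA=${PGDATA:-$(mktemp -d)}      # scratch stand-in for this sketch
mkdir -p "$PGDATA/pg_clog"          # no-op on a real data directory

# (a) make a file 256K long containing all zeroes
dd if=/dev/zero of=/tmp/02B6 bs=1024 count=256 2>/dev/null

# (b) temporarily install it as the missing clog segment
cp /tmp/02B6 "$PGDATA/pg_clog/02B6"

# (c) run VACUUM against the affected database, e.g.
#     psql -d yourdb -c 'VACUUM;'

# (d) remove the bogus 02B6 file again
rm "$PGDATA/pg_clog/02B6" /tmp/02B6
```

The zero-filled file is exactly one clog segment's worth of transaction-status bytes, which is why the size must be 256K.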

regards, tom lane