more corruption
Now I know that you all believe that postgres only has problems due to
bad programming, but I'm getting another problem that I can't figure out
in 6.5.3
[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]
type \? for help on slash commands
type \q to quit
type \g or terminate with semicolon to execute query
You are currently connected to the database: db_geocrawler
db_geocrawler=> vacuum analyze;
ERROR: cannot find attribute 1 of relation pg_attrdef
This is causing geocrawler.com to be totally fubar at this point.
Any ideas? Do I have to recover from the last backup?
Tim
--
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems
408-542-5723
On Sun, 9 Jul 2000, Tim Perdue wrote:
Now I know that you all believe that postgres only has problems due to
bad programming, but I'm getting another problem that I can't figure out
in 6.5.3
[PostgreSQL 6.5.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66]
type \? for help on slash commands
type \q to quit
type \g or terminate with semicolon to execute query
You are currently connected to the database: db_geocrawler
db_geocrawler=> vacuum analyze;
ERROR: cannot find attribute 1 of relation pg_attrdef
This is causing geocrawler.com to be totally fubar at this point.
Any ideas? Do I have to recover from the last backup?
just a quick thought ... have you tried shutting down and restarting the
postmaster? basically, "reset" the shared memory? v7.x handles
corruptions like that a lot cleaner, but previous versions caused odd
results if shared memory got corrupted ...
The Hermit Hacker wrote:
just a quick thought ... have you tried shutting down and restarting the
postmaster? basically, "reset" the shared memory? v7.x handles
corruptions like that a lot cleaner, but previous versions caused odd
results if shared memory got corrupted ...
Well, I've rebooted twice. In fact, it was a hard lock that caused the
problems. When the machine was brought back up, the db was foobar.
I'm doing something really really evil to avoid losing the last days'
data:
-I created a new db
-used the old db schema to create all new blank tables
-copied the physical table files from the old data directory into the
new database directory
-currently vacuuming the new db - nothing is barfing yet
-now hopefully I can create my indexes and be back in business
Tim
--
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems
408-542-5723
You have recreated what pg_upgrade does. It is for upgrading system
tables. If only your system tables were hosed, you are fine now.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce Momjian <pgman@candle.pha.pa.us> writes:
You have recreated what pg_upgrade does. It is for upgrading system
tables. If only your system tables were hosed, you are fine now.
Er, not unless he did exactly the right fancy footwork with pg_log and
vacuum. Or have you forgotten how tricky it was to get pg_upgrade to
work reliably?
regards, tom lane
You have recreated what pg_upgrade does. It is for upgrading system
tables. If only your system tables were hosed, you are fine now.
Only if your previous system has been vacuumed and no DML run afterwards.
Otherwise you also need to copy your old pg_log.
The Hermit Hacker wrote:
just a quick thought ... have you tried shutting down and restarting the
postmaster? basically, "reset" the shared memory? v7.x handles
corruptions like that a lot cleaner, but previous versions caused odd
results if shared memory got corrupted ...
Well, I've rebooted twice. In fact, it was a hard lock that caused the
problems. When the machine was brought back up, the db was foobar.
I'm doing something really really evil to avoid losing the last days'
data:
-I created a new db
-used the old db schema to create all new blank tables
vacuum new db
(I would do a tar backup of the whole old db)
vacuum old db, if that is possible
-copied the physical table files from the old data directory into the
new database directory
if above vacuum old db was not possible copy old pg_log
-currently vacuuming the new db - nothing is barfing yet
-now hopefully I can create my indexes and be back in business
Tim
Andreas
Zeugswetter Andreas SB wrote:
-I created a new db
-used the old db schema to create all new blank tables
vacuum new db
(I would do a tar backup of the whole old db)
vacuum old db, if that is possible
Was not possible.
-copied the physical table files from the old data
directory into the
new database directory
if above vacuum old db was not possible copy old pg_log
Oops - I didn't do that.
-currently vacuuming the new db - nothing is barfing yet
Actually, the vacuum seemed to be running forever making no progress so
I killed it.
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Tim
--
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems
408-542-5723
Bruce Momjian <pgman@candle.pha.pa.us> writes:
You have recreated what pg_upgrade does. It is for upgrading system
tables. If only your system tables were hosed, you are fine now.
Er, not unless he did exactly the right fancy footwork with pg_log and
vacuum. Or have you forgotten how tricky it was to get pg_upgrade to
work reliably?
Yes, I had forgotten. The new table file names will make pg_upgrade
useless in the future.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Zeugswetter Andreas SB wrote:
-I created a new db
-used the old db schema to create all new blank tables
vacuum new db
(I would do a tar backup of the whole old db)
vacuum old db, if that is possible
Was not possible.
-copied the physical table files from the old data
directory into the
new database directory
if above vacuum old db was not possible copy old pg_log
Oops - I didn't do that.
-currently vacuuming the new db - nothing is barfing yet
Actually, the vacuum seemed to be running forever making no progress so
I killed it.
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Tuples that have been inserted/updated/deleted after the last vacuum
in the old db assume that the corresponding txn has to be rolled back.
Since your vacuum on the old db only did half the db, that half will be current,
but the rest will be old, thus you lose consistency.
One of the core team, please confirm.
Andreas
-currently vacuuming the new db - nothing is barfing yet
Actually, the vacuum seemed to be running forever making no progress so
I killed it.
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Tuples that have been inserted/updated/deleted after the last vacuum
in the old db assume that the corresponding txn has to be rolled back.
Correction: Tuples that have been inserted/updated/deleted but have not
been accessed afterwards (the magic first access that updates the tuple
transaction status in place).
Since your vacuum on the old db only did half the db, that half will be current,
but the rest will be old, thus you lose consistency.
One of the core team, please confirm.
Andreas
Bruce Momjian <pgman@candle.pha.pa.us> writes:
Yes, I had forgotten. The new table file names will make pg_upgrade
useless in the future.
Hmm ... that's an implication I hadn't thought about. I wonder how much
work it would be to get pg_upgrade to rename table files. Be a shame to
throw pg_upgrade away after all the sweat we put into making it work ;-)
regards, tom lane
Tim Perdue <tperdue@valinux.com> writes:
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Hard to tell. Any tuples that weren't already marked on disk as "known
committed" have probably gone missing, because their originating
transaction IDs likely won't be shown as committed in the new pg_log.
So I'd look for missing tuples from recent transactions in the old DB.
regards, tom lane
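[Editorial note: Tom's point about the lost pg_log can be illustrated with a toy model. This is an invented sketch, not PostgreSQL's actual on-disk format or code; all names here are made up. The idea: each tuple carries the XID that created it plus an optional "known committed" hint bit, and a tuple whose hint bit was never set must consult pg_log to learn its fate, so starting over with a fresh pg_log makes those tuples look uncommitted.]

```python
# Toy model of tuple visibility vs. pg_log (illustrative only).
COMMITTED = "committed"

def visible(tup, pg_log):
    """A tuple is visible if its creating txn is known committed:
    either the on-tuple hint bit was already set on disk, or pg_log
    records the transaction as committed."""
    xmin, hinted = tup
    return hinted or pg_log.get(xmin) == COMMITTED

old_pg_log = {100: COMMITTED, 101: COMMITTED}
tuples = [
    (100, True),   # hint bit set on disk: survives without pg_log
    (101, False),  # committed, but never hinted: needs pg_log
]

# With the old pg_log, both tuples are visible.
assert all(visible(t, old_pg_log) for t in tuples)

# After rebuilding the cluster with a fresh (empty) pg_log,
# the unhinted tuple has "gone missing".
new_pg_log = {}
assert [visible(t, new_pg_log) for t in tuples] == [True, False]
```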
Bruce Momjian <pgman@candle.pha.pa.us> writes:
Yes, I had forgotten. The new table file names will make pg_upgrade
useless in the future.
Hmm ... that's an implication I hadn't thought about. I wonder how much
work it would be to get pg_upgrade to rename table files. Be a shame to
throw pg_upgrade away after all the sweat we put into making it work ;-)
Seems impossible. The physical file names are not dumped by pg_dump, so
there is really no way to re-associate the files with the table names.
Looks like a lost cause.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce Momjian <pgman@candle.pha.pa.us> writes:
Yes, I had forgotten. The new table file names will make pg_upgrade
useless in the future.
Hmm ... that's an implication I hadn't thought about. I wonder how much
work it would be to get pg_upgrade to rename table files. Be a shame to
throw pg_upgrade away after all the sweat we put into making it work ;-)
I guess we could throw the physical file into a comment, and somehow
read that in pg_upgrade, but it seems too error-prone. I am sure Vadim
will come up with something to break pg_upgrade soon anyway. It is a
nifty feature while we have it.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Bruce Momjian <pgman@candle.pha.pa.us> writes:
Yes, I had forgotten. The new table file names will make pg_upgrade
useless in the future.
Hmm ... that's an implication I hadn't thought about. I wonder how much
work it would be to get pg_upgrade to rename table files. Be a shame to
throw pg_upgrade away after all the sweat we put into making it work ;-)
Seems impossible. The physical file names are not dumped by pg_dump, so
there is really no way to re-associate the files with the table names.
Looks like a lost cause.
Well, we'd need to modify the pg_dump format so that the OIDs of the
tables are recorded, but given that, it doesn't seem impossible.
I suppose tablespaces might complicate the situation to the point where
it wasn't worth the trouble, though.
Given Vadim's plans for WAL and smgr changes, at least the next two
version updates likely won't be updatable with pg_upgrade anyway.
However, we've seen a couple of times recently when pg_upgrade was
useful as a recovery tool for system-table corruption, and that's why
I'm unhappy about the prospect of just discarding it...
regards, tom lane
At 18:08 10/07/00 -0400, Tom Lane wrote:
Well, we'd need to modify the pg_dump format so that the OIDs of the
tables are recorded, but given that, it doesn't seem impossible.
Already are (in text it's in the comments, and in the other formats it's
part of the data in the TOC).
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.C.N. 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/
Given Vadim's plans for WAL and smgr changes, at least the next two
version updates likely won't be updatable with pg_upgrade anyway.
However, we've seen a couple of times recently when pg_upgrade was
useful as a recovery tool for system-table corruption, and that's why
I'm unhappy about the prospect of just discarding it...
Agreed. I can see some cases where several types of recovery will be
harder in the new system.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
-----Original Message-----
From: pgsql-hackers-owner@hub.org [mailto:pgsql-hackers-owner@hub.org]On
Behalf Of Tom Lane
Tim Perdue <tperdue@valinux.com> writes:
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Hard to tell. Any tuples that weren't already marked on disk as "known
committed" have probably gone missing, because their originating
transaction IDs likely won't be shown as committed in the new pg_log.
So I'd look for missing tuples from recent transactions in the old DB.
Hmm, this may be more serious.
MVCC doesn't see committed (marked HEAP_XMIN_COMMITTED) but
not yet committed (t_xmin > CurrentTransactionId) tuples.
He will see them in the future.
Regards.
Hiroshi Inoue
Inoue@tpf.co.jp
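[Editorial note: Hiroshi's scenario - hinted-committed tuples whose t_xmin lies beyond the new cluster's transaction counter - can be sketched with a toy visibility check. Invented names, not PostgreSQL internals: the point is that such tuples are invisible now but reappear once the XID counter catches up.]

```python
# Toy snapshot check (illustrative only, not PostgreSQL's real code):
# a tuple hinted as committed is still invisible while its creating
# XID lies in the "future" of the current transaction counter.
def visible_to_snapshot(xmin, hinted_committed, current_xid):
    # XIDs at or beyond the counter cannot be treated as committed yet.
    return hinted_committed and xmin < current_xid

# The old cluster had consumed XIDs up to ~5000; the rebuilt one
# restarts its counter low.
tuple_xmin = 4000
assert not visible_to_snapshot(tuple_xmin, True, current_xid=500)
# ...but once the new counter passes the old XID, the tuple "comes
# back" - Hiroshi's "he will see them in the future".
assert visible_to_snapshot(tuple_xmin, True, current_xid=6000)
```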
-----Original Message-----
From: pgsql-hackers-owner@hub.org [mailto:pgsql-hackers-owner@hub.org]On
Behalf Of Tom Lane
Tim Perdue <tperdue@valinux.com> writes:
-now hopefully I can create my indexes and be back in business
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Hard to tell. Any tuples that weren't already marked on disk as "known
committed" have probably gone missing, because their originating
transaction IDs likely won't be shown as committed in the new pg_log.
So I'd look for missing tuples from recent transactions in the old DB.
Hmm, this may be more serious.
MVCC doesn't see committed (marked HEAP_XMIN_COMMITTED) but
not yet committed (t_xmin > CurrentTransactionId) tuples.
He will see them in the future.
P.S.
This is the main reason that I once proposed to call
'pg_ctl stop' to stop the postmaster in pg_upgrade before/after
moving pg_log and pg_variable.
There was a discussion about recycling OIDs.
It's impossible to recycle XIDs.
Regards.
Hiroshi Inoue
"Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
I vacuumed here and it worked. I did not use my "old" pg_log file - what
did I lose?
Hard to tell. Any tuples that weren't already marked on disk as "known
committed" have probably gone missing, because their originating
transaction IDs likely won't be shown as committed in the new pg_log.
So I'd look for missing tuples from recent transactions in the old DB.
Hmm, this may be more serious.
MVCC doesn't see committed (marked HEAP_XMIN_COMMITTED) but
not yet committed (t_xmin > CurrentTransactionId) tuples.
He will see them in the future.
But he did a vacuum --- won't that get rid of any tuples that aren't
currently considered committed?
regards, tom lane
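[Editorial note: Tom's closing question can be sketched in a toy model (invented, not the real vacuum code): a vacuum consulting a fresh, empty pg_log treats unhinted-but-really-committed tuples as aborted and reclaims them, which is why the vacuum against the new pg_log makes the loss permanent.]

```python
# Toy vacuum (illustrative only): keep a tuple only if its creating
# txn is known committed - via the on-tuple hint bit or a pg_log lookup.
def toy_vacuum(tuples, pg_log):
    return [(xmin, hinted) for (xmin, hinted) in tuples
            if hinted or pg_log.get(xmin) == "committed"]

tuples = [(100, True),   # hint bit already set: kept regardless of pg_log
          (101, False)]  # really committed, but unhinted

# Against a fresh, empty pg_log the unhinted tuple is reclaimed for good.
assert toy_vacuum(tuples, {}) == [(100, True)]
# With the old pg_log it would have survived.
assert toy_vacuum(tuples, {101: "committed"}) == tuples
```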