restoring a file system backup of the data dir
My test server's software RAID array, where I kept my PostgreSQL data directory, recently died. I have
both a full dump of the database and a file system backup of the data directory.
I tried to restore my file system backup first since it is a "fresher" copy. However, I am
unable to start postgres. The exit status I get from pg_ctl is "127", but I haven't yet found any
documentation relating to its exit codes. There are no log files created when I try to start up
postgresql, so I am a little unsure as to what direction I should take.
Should I just make do with the older pg_dump file and re-initdb, or is there a way to salvage the
restored file system backup?
Thanks for any help.
Regards,
Richard Broersma Jr.
Oops, my mistake: the exit code is 0, not 127.
I found the correct log file.
2006-10-10 04:57:45 PDT% LOG: could not open file "pg_xlog/000000010000000000000055"
(log file 0, segment 85): No such file or directory
2006-10-10 04:57:45 PDT% LOG: could not open file "pg_xlog/000000010000000000000055"
(log file 0, segment 85): No such file or directory
2006-10-10 04:57:45 PDT% LOG: database system was shut down at 2006-09-26 17:11:35 PDT
2006-10-10 04:57:45 PDT% LOG: invalid primary checkpoint record
2006-10-10 04:57:45 PDT% LOG: invalid secondary checkpoint record
2006-10-10 04:57:45 PDT% LOG: logger shutting down
2006-10-10 04:57:45 PDT% LOG: startup process (PID 5953) was terminated by signal 6
2006-10-10 04:57:45 PDT% PANIC: could not locate a valid checkpoint record
On Tue, Oct 10, 2006 at 05:31:08PM -0700, Richard Broersma Jr wrote:
You can't simply take a filesystem backup and hope it will work; it
won't. You have to either:
- shut down the database during the backup,
- take a filesystem snapshot, or
- use PITR.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
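A minimal sketch of the first option Jim lists, a "cold" backup taken while the server is stopped so that pg_control, pg_xlog, and the data files are all consistent with one another. The data directory path and archive name here are assumptions, not anything from the thread:

```shell
# Stop the server cleanly; "-m fast" disconnects clients and checkpoints.
pg_ctl -D /var/lib/pgsql/data stop -m fast

# Archive the ENTIRE data directory, including pg_xlog and pg_control.
tar czf pgdata-backup.tar.gz -C /var/lib/pgsql data

# Restart the server.
pg_ctl -D /var/lib/pgsql/data start -l serverlog
```

Restoring is the reverse: stop the server, unpack the tarball over an empty data directory, and start it again. Because everything was archived at a single consistent point, the checkpoint-record problem seen in this thread cannot arise.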
Richard Broersma Jr <rabroersma@yahoo.com> writes:
I found the correct log file.
That log output says that pg_control is out of sync with the pg_xlog files, which
is not really surprising for a filesystem-level backup. You could try
forcing the issue with pg_resetxlog, but you'll very likely end up with
a non-self-consistent database. The pg_dump backup is a better bet.
If you are really desperate to recover the latest changes, try
pg_resetxlog then pg_dump, and diff the dump file against your good
pg_dump to see which changes you want to believe and apply. But I'd
still say you want to initdb and restore from the pg_dump backup before
going forward.
regards, tom lane
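The salvage procedure Tom outlines could be sketched roughly as follows. The paths, database name, and dump filenames are hypothetical, and pg_resetxlog deliberately discards WAL, so this should only ever be run against a copy of the restored data directory:

```shell
# Work on a copy of the restored data directory, never the only copy.
cp -a /var/lib/pgsql/data /var/lib/pgsql/data.salvage

# Force a fresh, minimal WAL so the server can start.
# The resulting database may not be self-consistent.
pg_resetxlog /var/lib/pgsql/data.salvage

# Start the cluster from the copy and dump whatever is recoverable.
pg_ctl -D /var/lib/pgsql/data.salvage -l salvage.log start
pg_dump -f salvage.sql mydb

# Compare against the known-good dump to decide which changes to trust.
diff good_backup.sql salvage.sql
```

Any changes you decide to keep from the diff would then be applied by hand to a cluster rebuilt with initdb and restored from the trusted pg_dump file, as Tom recommends.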
... so what if the database size is above 20 GB? Do we have to run
pg_dump periodically to get a reliable backup?
Thanks for all of the advice. Fortunately, this is a lesson I am learning on my practice server.
I would hate to think of the predicament I would be in if this happened to my server at work.
Here, none of the data is critical; I just didn't want to re-create all of the schemas that I am
loading test data into.
But lesson learned: I am going to take a closer look at my backup strategy.
Regards,
Richard Broersma Jr.
On Wed, Oct 11, 2006 at 11:13:21AM +0700, Luki Rustianto wrote:
... so what if the database size is above 20 GB? Do we have to run
pg_dump periodically to get a reliable backup?
No, you can also use Point In Time Recovery (PITR).
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
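For reference, PITR as available in 8.x works by continuously archiving completed WAL segments plus an online base backup; no server shutdown is required, which is why it suits large databases. A minimal sketch, with the archive location and backup label being assumptions:

```shell
# In postgresql.conf, archive each completed WAL segment (PostgreSQL 8.0+):
#   archive_command = 'cp "%p" /mnt/archive/"%f"'

# Then take a base backup while the server keeps running:
psql -c "SELECT pg_start_backup('nightly');"
tar czf base-backup.tar.gz -C /var/lib/pgsql data
psql -c "SELECT pg_stop_backup();"
```

Recovery consists of unpacking the base backup and supplying a recovery.conf with a restore_command that fetches segments back from the archive; the server then replays WAL up to the end of the archive (or a chosen point in time).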