HOLD THE PRESSES!! ... pg_dump from v7.0.3 can't import to v7.1?
v7.0.3 database:
trends_acctng=# \d
List of relations
Name | Type | Owner
-------------+-------+-------
accounts | table | pgsql
admin | table | pgsql
calls | table | pgsql
comments | table | pgsql
cookies | table | pgsql
credit_card | table | pgsql
credits | table | pgsql
logs | table | pgsql
personal | table | pgsql
radhist | table | pgsql
radlog | table | pgsql
remote_host | table | pgsql
static_ip | table | pgsql
users | table | pgsql
(14 rows)
v7.1 database:
trends_acctng=# \d
List of relations
Name | Type | Owner
--------------------------+----------+---------
buy | table | jeff
buy_bid_seq | sequence | jeff
clients_c_id_seq | sequence | jeff
cppvad_clients | table | jeff
cppvad_clients_cc_id_seq | sequence | jeff
cppvad_info | table | jeff
cppvad_info_cid_seq | sequence | jeff
download | table | jeff
download_dlid_seq | sequence | jeff
exchange | table | jeff
exchange_exid_seq | sequence | jeff
gallery | table | scrappy
listing | table | area902
listing_lid_seq | sequence | area902
ndict10 | table | pgsql
ndict11 | table | pgsql
ndict12 | table | pgsql
ndict16 | table | pgsql
ndict2 | table | pgsql
ndict3 | table | pgsql
ndict32 | table | pgsql
ndict4 | table | pgsql
ndict5 | table | pgsql
ndict6 | table | pgsql
ndict7 | table | pgsql
ndict8 | table | pgsql
ndict9 | table | pgsql
projects | table | scrappy
thepress | table | jeff
thepress_id_seq | sequence | jeff
ticket | table | pgsql
ticket_comments | table | pgsql
ticket_ticket_id_seq | sequence | pgsql
ticket_times | table | pgsql
(34 rows)
all I did was use pg_dumpall from v7.0.3 to dump to a text file, and
"psql template1 < dumpfile" to load it back in again ...
obviously this doesn't work like it has in the past?
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
On Tue, 10 Apr 2001, The Hermit Hacker wrote:
all I did was use pg_dumpall from v7.0.3 to dump to a text file, and
"psql template1 < dumpfile" to load it back in again ...obviously this doesn't work like it has in the past?
Marc --
Was there an error message during restore?
I've been dumping/restoring w/7.1 since long before beta, w/o real
problems, but haven't been doing this w/7.0.3 stuff. But still, psql
should give you some error messages.
(I'm sure you know this, but for the benefit of others on the list)
In Linux, I usually use the command:
psql dbname < dumpfile 2>&1 | grep ERROR
so that I don't miss any errors among all the NOTICEs
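A slightly fuller variant of the same idea (a sketch only; the restore.log
name is an arbitrary choice) keeps the whole transcript around while still
surfacing the errors:

```shell
# Keep a complete transcript of the restore, but show only
# ERROR/FATAL lines on the terminal.  restore.log is an
# arbitrary filename.
psql dbname < dumpfile 2>&1 | tee restore.log | grep -E 'ERROR|FATAL'

# The full NOTICE/ERROR stream can be reviewed later:
less restore.log
```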
--
Joel Burton <jburton@scw.org>
Director of Information Systems, Support Center of Washington
No errors, nothing ... here is the backend:
%bin/postmaster -D /usr/local/pgsql/data
DEBUG: database system was shut down at 2001-04-10 15:04:08 ADT
DEBUG: CheckPoint record at (0, 1522068)
DEBUG: Redo record at (0, 1522068); Undo record at (0, 0); Shutdown TRUE
DEBUG: NextTransactionId: 615; NextOid: 18720
DEBUG: database system is in production state
DEBUG: copy: line 445, XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG: XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG: MoveOfflineLogs: remove 0000000000000000
and I ran the restore in 'script' to save everything, and as:
psql -q template1 < pg_dumpall.out
and there are no errors in the resultant file ...
For all intents and purposes, the restore *looked* clean ... but, going back
and looking at the dump file, the dump wasn't clean *puzzled look*
I'm going to have to look at this some more, but it's pg_dumpall in v7.0.3
that is dumping the wrong data, not the restore :(
all 77 databases got dump'd as the same database:
You are now connected to database wind.
wind=# \d
List of relations
Name | Type | Owner
--------------------------+----------+---------
buy | table | jeff
buy_bid_seq | sequence | jeff
clients_c_id_seq | sequence | jeff
cppvad_clients | table | jeff
cppvad_clients_cc_id_seq | sequence | jeff
cppvad_info | table | jeff
cppvad_info_cid_seq | sequence | jeff
download | table | jeff
download_dlid_seq | sequence | jeff
exchange | table | jeff
exchange_exid_seq | sequence | jeff
gallery | table | scrappy
listing | table | area902
listing_lid_seq | sequence | area902
ndict10 | table | pgsql
ndict11 | table | pgsql
ndict12 | table | pgsql
ndict16 | table | pgsql
ndict2 | table | pgsql
ndict3 | table | pgsql
ndict32 | table | pgsql
ndict4 | table | pgsql
ndict5 | table | pgsql
ndict6 | table | pgsql
ndict7 | table | pgsql
ndict8 | table | pgsql
ndict9 | table | pgsql
projects | table | scrappy
thepress | table | jeff
thepress_id_seq | sequence | jeff
ticket | table | pgsql
ticket_comments | table | pgsql
ticket_ticket_id_seq | sequence | pgsql
ticket_times | table | pgsql
(34 rows)
wind=# \connect viper
You are now connected to database viper.
viper=# \d
List of relations
Name | Type | Owner
--------------------------+----------+---------
buy | table | jeff
buy_bid_seq | sequence | jeff
clients_c_id_seq | sequence | jeff
cppvad_clients | table | jeff
cppvad_clients_cc_id_seq | sequence | jeff
cppvad_info | table | jeff
cppvad_info_cid_seq | sequence | jeff
download | table | jeff
download_dlid_seq | sequence | jeff
exchange | table | jeff
exchange_exid_seq | sequence | jeff
gallery | table | scrappy
listing | table | area902
listing_lid_seq | sequence | area902
ndict10 | table | pgsql
ndict11 | table | pgsql
ndict12 | table | pgsql
ndict16 | table | pgsql
ndict2 | table | pgsql
ndict3 | table | pgsql
ndict32 | table | pgsql
ndict4 | table | pgsql
ndict5 | table | pgsql
ndict6 | table | pgsql
ndict7 | table | pgsql
ndict8 | table | pgsql
ndict9 | table | pgsql
projects | table | scrappy
thepress | table | jeff
thepress_id_seq | sequence | jeff
ticket | table | pgsql
ticket_comments | table | pgsql
ticket_ticket_id_seq | sequence | pgsql
ticket_times | table | pgsql
(34 rows)
neat ...
On Tue, 10 Apr 2001, Joel Burton wrote:
On Tue, 10 Apr 2001, The Hermit Hacker wrote:
all I did was use pg_dumpall from v7.0.3 to dump to a text file, and
"psql template1 < dumpfile" to load it back in again ...obviously this doesn't work like it has in the past?
Marc --
Was there an error message during restore?
I've been dumping/restoring w/7.1 since long before beta, w/o real
problems, but haven't been doing this w/7.0.3 stuff. But still, psql
should give you some error messages.
(I'm sure you know this, but for the benefit of others on the list)
In Linux, I usually use the command:
psql dbname < dumpfile 2>&1 | grep ERROR
so that I don't miss any errors among all the NOTICEs
--
Joel Burton <jburton@scw.org>
Director of Information Systems, Support Center of Washington
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
okay, not sure how we should document this, but apparently pg_dumpall
doesn't work as the man page at:
http://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm
appears to suggest:
==========================
%pg_dumpall -h pgsql
psql: No pg_hba.conf entry for host localhost, user pgsql, database
template1
\connect template1
select datdba into table tmp_pg_shadow from pg_database where
datname = 'template1';
delete from pg_shadow where usesysid <> tmp_pg_shadow.datdba;
drop table tmp_pg_shadow;
copy pg_shadow from stdin;
psql: No pg_hba.conf entry for host localhost, user pgsql, database
template1
\.
delete from pg_group;
copy pg_group from stdin;
psql: No pg_hba.conf entry for host localhost, user pgsql, database
template1
\.
psql: No pg_hba.conf entry for host localhost, user pgsql, database
template1
========================
Now, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it
still failed, but now it's working ... most confusing :(
But, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3 ...
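Spelled out, the workaround being described looks something like this (a
sketch; db.hub.org is the host from this message, and pg_dumpall's child
pg_dump and psql processes all read the standard libpq environment
variables):

```shell
# pg_dumpall ignores -h in 7.0, but every pg_dump and psql it
# spawns honours the libpq environment variables, so set the
# connection parameters there instead of on the command line.
PGHOST=db.hub.org
PGPORT=5432
export PGHOST PGPORT

pg_dumpall > pg_dumpall.out
```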
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker writes:
okay, not sure how we should document this, but apparently pg_dumpall
doesn't work as the man page at:
http://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm
appears to suggest:
Now, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it
still failed, but now it's working ... most confusing :(
But, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3
Exactly right. Options to pg_dumpall are only "pg_dump options", which
means things like -o or -d. But pg_dumpall also runs a few psqls, which
don't see any of this.
Btw., it would really seem like a neat feature if a given pg_dump suite
would also handle the respective previous version. Otherwise we're in a
situation like now where we've got a shiny new pg_dump but people that
want to upgrade are still stuck with the broken 7.0 incarnation.
--
Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/
On Wed, 11 Apr 2001, Peter Eisentraut wrote:
The Hermit Hacker writes:
okay, not sure how we should document this, but apparently pg_dumpall
doesn't work as the man page at:
http://www.postgresql.org/users-lounge/docs/7.0/user/app-pgdumpall.htm
appears to suggest:
Now, I swore I did a 'setenv PGHOST db.hub.org' to get around it, and it
still failed, but now it's working ... most confusing :(
But, still, pg_dumpall doesn't appear to accept the -h option in v7.0.3
Exactly right. Options to pg_dumpall are only "pg_dump options", which
means things like -o or -d. But pg_dumpall also runs a few psqls, which
don't see any of this.
Okay, but, according to the man page, -h <host> *is* a pg_dump option ...
pg_dump [ dbname ]
pg_dump [ -h host ] [ -p port ]
[ -t table ]
[ -a ] [ -c ] [ -d ] [ -D ] [ -i ] [ -n ] [ -N ]
[ -o ] [ -s ] [ -u ] [ -v ] [ -x ]
[ dbname ]
Thus spake The Hermit Hacker
all 77 databases got dump'd as the same database:
Personally I never use pg_dumpall. It is easy to write a script to get
the list of databases and use pg_dump to dump them individually. In fact
I like dumping individual tables if I can. Mostly I like the ability to
fix one table if there is a problem. Finding and fixing one 7 row table
in a multi-gigabyte file really sucks. :-)
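The sort of script being described might look like this (a sketch only; the
query and the file naming are illustrative, and you may want to skip
template databases):

```shell
# Dump every database to its own file instead of using pg_dumpall.
# -t/-A make psql emit bare, unaligned rows suitable for a loop.
for db in $(psql -t -A -c "select datname from pg_database" template1)
do
    pg_dump "$db" > "$db.dump"
done
```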
--
D'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
At 01:09 11/04/01 +0200, Peter Eisentraut wrote:
Btw., it would really seem like a neat feature if a given pg_dump suite
would also handle the respective previous version.
This has been in the back of my mind for some time, and is why I initially
backported my pg_dump changes to 7.0. Unfortunately, I did not continue,
and the backport was targeted at 7.0, not 7.1.
I would be willing to try to get a 7.0 compatibility mode going, perhaps as
a patch/contrib after 7.1 (or before, depending on release). There is
probably not that much effort involved; the main changes to the DB
interface are in the use of formatType and the function manager definitions
- at least I think that's the case...
Peter: what options are there for getting formatType working in 7.0?
Also, just in case people can think of other dump-related changes from 7.0
to 7.1, I have included a list below:
- LOs stored differently (one or two line change)
- formatType (a few places, but it's significant)
- function manager (SQL substitution should work here, I hope, since the
fmgr can detect the right protocol to use)
- detection of relations that are views - old isViewRule would need to be
resurrected
- last builtin OID derivation changed
- Handling of ZPBITOID & VARBITOID types? Not sure if this is OK for 7.0
Unfortunately, I have not paid much attention to internal changes from 7.0
to 7.1, so I have no idea what else was changed.
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/
At 06:27 11/04/01 -0400, D'Arcy J.M. Cain wrote:
Finding and fixing one 7 row table
in a multi-gigabyte files really sucks. :-)
At least in 7.1 you can dump the whole DB to a file/tape, then extract one
table from the dump file easily...
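If I'm reading the new archiver right, that goes something like this (a
sketch; mydb and mytable are placeholders, and it relies on the 7.1 custom
dump format rather than a plain SQL script):

```shell
# Dump in the 7.1 custom archive format, which pg_restore can
# read selectively (a plain SQL script cannot be filtered this way).
pg_dump -Fc mydb > mydb.dump

# Later, extract just one table's schema and data from the archive.
pg_restore -t mytable mydb.dump > mytable.sql
```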
On Tue, 10 Apr 2001, Joel Burton wrote:
On Tue, 10 Apr 2001, The Hermit Hacker wrote:
all I did was use pg_dumpall from v7.0.3 to dump to a text file, and
"psql template1 < dumpfile" to load it back in again ...obviously this doesn't work like it has in the past?
Marc --
Was there an error message during restore?
I've been dumping/restoring w/7.1 since long before beta, w/o real
problems, but haven't been doing this w/7.0.3 stuff. But still, psql
should give you some error messages.
(I'm sure you know this, but for the benefit of others on the list)
In Linux, I usually use the command:
psql dbname < dumpfile 2>&1 | grep ERROR
so that I don't miss any errors among all the NOTICEs
I recall having a problem when Marc moved the server away from
hub.org and to db.hub.org. I couldn't import the database I
exported from 7.0.x into it without first creating the sequences.
Could this be something related - although I thought that had gotten
fixed.
Vince.
--
==========================================================================
Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net
56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory http://www.camping-usa.com
Online Giftshop Superstore http://www.cloudninegifts.com
==========================================================================
At 17:18 11/04/01 +0200, Peter Eisentraut wrote:
What I meant was that whenever the backend changes in a way that mandates
pg_dump changes we would leave the old way in place and only add a new
case to handle the new backend.
That's what I had in mind as well; I gave up on the backport because it
seemed pointless (as you suggest).
This would invariably introduce code bloat, but that could probably be
managed by a modular design within pg_dump, plus perhaps removing support
for really old versions once in a while.
I was thinking that with time these version-specific cases will reduce (eg.
definition schemas will help), and that we could put all the DB interface
into separate modules.
Philip Warner writes:
At 01:09 11/04/01 +0200, Peter Eisentraut wrote:
Btw., it would really seem like a neat feature if a given pg_dump suite
would also handle the respective previous version.
This has been in the back of my mind for some time, and is why I initially
backported my pg_dump changes to 7.0. Unfortunately, I did not continue,
and the backport was targeted at 7.0, not 7.1.
This is not really what I had in mind. Backporting enhancements would
only serve the users that manually installed the enhancements. Actually,
it's quite idiosyncratic, because the point of a new release is to publish
enhancements.
What I meant was that whenever the backend changes in a way that mandates
pg_dump changes we would leave the old way in place and only add a new
case to handle the new backend. Stupid example:
switch (backend_version)
{
    case 71:
        result = PQexec("select * from pg_class;");
        break;
    case 72:
        result = PQexec("select * from pg_newnameforpgclass;");
        break;
}
This would invariably introduce code bloat, but that could probably be
managed by a modular design within pg_dump, plus perhaps removing support
for really old versions once in a while.
--
Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/
Peter Eisentraut <peter_e@gmx.net> writes:
Btw., it would really seem like a neat feature if a given pg_dump suite
would also handle the respective previous version. Otherwise we're in a
situation like now where we've got a shiny new pg_dump but people that
want to upgrade are still stuck with the broken 7.0 incarnation.
No more stuck than they were if they had needed to reload from their
dump files into 7.0.
I really doubt that it's worth going out of our way to try to keep
pg_dump compatible with obsolete backends. If we had infinite manpower,
then sure, but I think the time is better spent elsewhere.
regards, tom lane