(A) native Windows port
Hackers,
as some of you have figured out already, Katie Ward and I are working full-time on
PostgreSQL and are actually doing a native Win32 port. This port is not
based on CygWIN, Apache or any other compatibility library but uses 100%
native Windows functionality only.
We already have it far enough to create and drop databases, tables and
of course do the usual stuff (like INSERT, UPDATE, DELETE and SELECT).
But there is still plenty of work, so don't worry, all of you will have
a chance to leave your finger- and/or footprints.
What I want to start today is a discussion about project coordination and
code management. Our proposal is to provide a diff first. I have no clue
when exactly this will happen, but assuming the usual PostgreSQL
schedule behaviour I would say it's measured in weeks :-). A given is
that we will contribute this work under the BSD license.
We will upload the diff to developer.postgresql.org and post a link
together with build instructions to hackers. After some discussion we
can create a CVS branch and apply that patch to there. Everyone who
wants to contribute to the Win32 port can then work in that branch.
Katie and I will make sure that changes in trunk periodically get
merged into the Win32 branch.
This model guarantees that we don't change mainstream PostgreSQL
until the developer community decides to follow this road and chooses
this implementation as the PostgreSQL Win32 port. At that point we can
merge the Win32 port into the trunk and ship it with the next release.
As for project coordination, I am willing to set up and maintain a page
similar to the (horribly outdated) ones that I did for Toast and RI.
Summarizing project status, pointing to resources, instructions, maybe a
roadmap, TODO, you name it.
Comments? Suggestions?
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
-----Original Message-----
From: Jan Wieck [mailto:JanWieck@Yahoo.com]
Sent: 26 June 2002 15:45
To: HACKERS
Subject: [HACKERS] (A) native Windows port

As for project coordination, I am willing to set up and
maintain a page similar to the (horribly outdated) ones that
I did for Toast and RI. Summarizing project status, pointing
to resources, instructions, maybe a roadmap, TODO, you name it.

Comments? Suggestions?
Great, can't wait to see your work.
I can probably sort out an installer shortly after you have the first
code available - that way we can work out kinks in a binary
distribution, as well as hopefully get some more testers who may not
have compilers etc. on their Windows boxes. Let me know if you'd like me
to work on that...
Regards, Dave.
Jan Wieck wrote:
As for project coordination, I am willing to setup and maintain a page
similar to the (horribly outdated) ones that I did for Toast and RI.
Summarizing project status, pointing to resources, instructions, maybe a
roadmap, TODO, you name it.
Great. Please see roadmap in TODO.detail/win32 for a list of items and
possible approaches.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
As for project coordination, I am willing to setup and maintain a page
similar to the (horribly outdated) ones that I did for Toast and RI.
Summarizing project status, pointing to resources, instructions, maybe a
roadmap, TODO, you name it.
I am willing to supply a complete, friendly, powerful and pretty installer
program, based on NSIS.
http://www.winamp.com/nsdn/nsis/index.jhtml
I suggest that pgAdmin is included in the install process. Imagine it - a
win32 person downloads a single .exe, with contents bzip2'd. They run the
installer, it asks them to agree to license, shows splash screen, asks them
where to install it, gets them to supply an installation password and
installs pgadmin. It could set up a folder in their start menu with
start/stop, edit configs, uninstall and run pgadmin.
It would all work out of the box and would do wonderful things for the
Postgres community.
Chris
On Wednesday 26 June 2002 11:48 pm, Christopher Kings-Lynne wrote:
I suggest that pgAdmin is included in the install process. Imagine it - a
win32 person downloads a single .exe, with contents bzip2'd. They run the
installer, it asks them to agree to license, shows splash screen, asks them
where to install it, gets them to supply an installation password and
installs pgadmin. It could set up a folder in their start menu with
start/stop, edit configs, uninstall and run pgadmin.
It would all work out of the box and would do wonderful things for the
Postgres community.
I like this idea, but let me just bring one little issue to note: are you
going to handle upgrades, and if so, how? How are you going to do a major
version upgrade?
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
It would all work out of the box and would do wonderful things for the
Postgres community.I like this idea, but let me just bring one little issue to note: are you
going to handle upgrades, and if so, how? How are you going to
do a major
version upgrade?
Well, the easiest way would be to get them to uninstall the old version
first, but I'm sure it can be worked out. Perhaps we shouldn't even
overwrite the old version anyway?
Chris
How does the upgrade work on UNIX? Is there anything available apart from
reading the release notes?
----- Original Message -----
From: "Christopher Kings-Lynne" <chriskl@familyhealth.com.au>
To: "Lamar Owen" <lamar.owen@wgcr.org>; "Jan Wieck" <JanWieck@Yahoo.com>;
"HACKERS" <pgsql-hackers@postgresql.org>
Sent: Tuesday, July 02, 2002 12:48 PM
Subject: Re: [HACKERS] (A) native Windows port
It would all work out of the box and would do wonderful things for the
Postgres community.

I like this idea, but let me just bring one little issue to note: are you
going to handle upgrades, and if so, how? How are you going to do a major
version upgrade?

Well, the easiest way would be to get them to uninstall the old version
first, but I'm sure it can be worked out. Perhaps we shouldn't even
overwrite the old version anyway?

Chris
Christopher Kings-Lynne wrote:
It would all work out of the box and would do wonderful things for the
Postgres community.

I like this idea, but let me just bring one little issue to note: are you
going to handle upgrades, and if so, how? How are you going to do a major
version upgrade?

Well, the easiest way would be to get them to uninstall the old version
first, but I'm sure it can be worked out. Perhaps we shouldn't even
overwrite the old version anyway?
The question is not how to replace some .EXE and .DLL files or modify
something in the registry. The question is what to do with the existing
databases in the case of a catalog version change. You have to dump and
restore.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
On Tuesday 02 July 2002 09:52 am, Jan Wieck wrote:
Christopher Kings-Lynne wrote:
It would all work out of the box and would do wonderful things for
the Postgres community.
I like this idea, but let me just bring one little issue to note: are
you going to handle upgrades, and if so, how? How are you going to do
a major
version upgrade?
Well, the easiest way would be to get them to uninstall the old version
first, but I'm sure it can be worked out. Perhaps even we shouldn't
overwrite the old version anyway?
The question is not how to replace some .EXE and .DLL files or modify
something in the registry. The question is what to do with the existing
databases in the case of a catalog version change. You have to dump and
restore.
Now, riddle me this: we're going to explain the vagaries of
dump/initdb/restore to a typical Windows user, and further explain why the
dump won't necessarily restore because of a bug in the older version's
dump....
The typical Windows user is going to barf when confronted with our extant
'upgrade' process. While I really could not care less if PostgreSQL goes to
Windows or not, I am of a mind to support the Win32 effort if it gets an
upgrade path done so that everyone can upgrade sanely. At least the Windows
installer can check for existing database structures and ask what to do --
the RPM install cannot do this. In fact, the Windows installer *must* check
for an existing database installation, or we're going to get fried by typical
Windows users.
And if having a working, usable, Win32 native port gets the subject of good
upgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32
NATIVELY! :-) (and I despise Win32....)
But it shouldn't be an installer issue -- this is an issue which causes pain
for all of our users, not just Windows or RPM (or Debian) users. Upgrading
(pg_upgrade is a start -- but it's not going to work as written on Windows)
needs to be core functionality. If I can't easily upgrade my database, what
good are new features going to do for me?
Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal
of promise for seamless binary 'in place' upgrading. He has been able to
write code to read multiple versions' database structures -- proving that it
CAN be done.
Windows programs such as Lotus Organizer, Microsoft Access, Lotus Approach,
and others, allow you to convert the old to the new as part of initial
startup. This will be a prerequisite for wide acceptance in the Windows
world, methinks.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
The question is not how to replace some .EXE and .DLL files or modify
something in the registry. The question is what to do with the existing
databases in the case of a catalog version change. You have to dump and
restore.
pg_upgrade?
Otherwise: no upgrades per se, but you can install the new version into a new
directory and then have an automated pg_dump / restore between the old and
the new. This would require a lot of disk space, but I don't see any other
clean way to automate it.
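A side-by-side setup like the one described might look like the sketch below. The paths, port numbers, and the `upgrade_dump` helper are all illustrative assumptions, not existing PostgreSQL tooling.

```shell
#!/bin/sh
# Sketch of the "install the new version into a new directory" idea.
# All paths and port numbers below are assumptions for illustration.
OLD_BIN=${OLD_BIN:-/usr/local/pgsql-7.2/bin}   # old installation
NEW_BIN=${NEW_BIN:-/usr/local/pgsql-7.3/bin}   # newly installed version
OLD_PORT=${OLD_PORT:-5432}
NEW_PORT=${NEW_PORT:-5433}                     # new server on a scratch port

# Pipe a full dump of the old cluster straight into the new one.  No
# intermediate dump file is written, but the new cluster still needs
# roughly as much free space as the old data occupies.
upgrade_dump() {
    "$OLD_BIN"/pg_dumpall -p "$OLD_PORT" | "$NEW_BIN"/psql -p "$NEW_PORT" template1
}
```

Both servers have to be running at once, which is exactly the disk-space and coordination cost the message above points out.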
Lamar Owen wrote:
[...]
And if having a working, usable, Win32 native port gets the subject of good
upgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32
NATIVELY! :-) (and I despise Win32....)
Hehehe :-)
[...]
Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal
of promise for seamless binary 'in place' upgrading. He has been able to
write code to read multiple versions' database structures -- proving that it
CAN be done.
Unfortunately it's not the on-disk binary format of files that causes
the big problems. Our dump/initdb/restore sequence is also the solution
for system catalog changes. If we add/remove internal functions, there
will be changes to pg_proc. When the representation of parsetrees
changes, there will be changes to pg_rewrite (dunno how to convert
that). Consider adding another attribute to pg_class. You'd have to add
a row in pg_attribute, possibly (because it likely isn't added at the
end) increment the attno for 50% of all pg_attribute entries, and of
course insert an attribute in the middle of all existing pg_class rows
... ewe.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
Lamar Owen wrote:
[...]
Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
deal of promise for seamless binary 'in place' upgrading. He has been
able to write code to read multiple versions' database structures --
proving that it CAN be done.
Unfortunately it's not the on-disk binary format of files that causes
the big problems. Our dump/initdb/restore sequence is also the solution
for system catalog changes.
Hmmm. They get in there via the bki interface, right? Is there an OID issue
with these? Could differential BKI files be possible, with known system
catalog changes that can be applied via a 'patchdb' utility? I know pretty
much how pg_upgrade is doing things now -- and, frankly, it's a little bit of
a kludge.
Yes, I do understand the things a dump restore does on somewhat of a detailed
level. I know the restore repopulates the entries in the system catalogs for
the restored data, etc, etc.
Currently dump/restore handles the catalog changes. But by what other means
could we upgrade the system catalog in place?
Our very extensibility is our weakness for upgrades. Can it be worked around?
Anyone have any ideas?
Improving pg_upgrade may be the ticket -- but if the on-disk binary format
changes (like it has before), then something will have to do the binary
format translation -- something like pg_fsck.
Incidentally, pg_fsck, or a program like it, should be in the core
distribution. Maybe not named pg_fsck, as our database isn't a filesystem,
but pg_dbck, or pg_dbcheck, or pg_dbfix, or similar. Although pg_fsck is
more of a pg_dbdump.
I've seen too many people bitten by upgrades gone awry. The more we can do in
that regard, the better.
And the Windows user will likely demand it. I never thought I'd be grateful
for a Win32 native PostgreSQL port... :-)
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
On Thursday 27 June 2002 05:48, Christopher Kings-Lynne wrote:
I am willing to supply a complete, friendly, powerful and pretty installer
program, based on NSIS.
Maybe you should contact Dave Page, who wrote pgAdmin2 and the ODBC
installers. Maybe you can both work on the installer.
By the way, when will Dave be added to the main developer list? He wrote 99%
of pgAdmin on his own.
Cheers, Jean-Michel POURE
On Tue, 2002-07-02 at 21:50, Lamar Owen wrote:
On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
Lamar Owen wrote:
[...]
Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
deal of promise for seamless binary 'in place' upgrading. He has been
able to write code to read multiple versions' database structures --
proving that it CAN be done.

Unfortunately it's not the on-disk binary format of files that causes
the big problems. Our dump/initdb/restore sequence is also the solution
for system catalog changes.

Hmmm. They get in there via the bki interface, right? Is there an OID issue
with these? Could differential BKI files be possible, with known system
catalog changes that can be applied via a 'patchdb' utility? I know pretty
much how pg_upgrade is doing things now -- and, frankly, it's a little bit of
a kludge.

Yes, I do understand the things a dump restore does on somewhat of a detailed
level. I know the restore repopulates the entries in the system catalogs for
the restored data, etc, etc.

Currently dump/restore handles the catalog changes. But by what other means
could we upgrade the system catalog in place?

Our very extensibility is our weakness for upgrades. Can it be worked around?
Anyone have any ideas?
Perhaps we can keep an old postgres binary + old backend around and then
use it in single-user mode to do a pg_dump into our running backend.
IIRC Access does its upgrade of a database by copying the old database to the new.
Our approach could be like
$OLD/postgres -D $OLD_DATA < pg_dump_cmds | $NEW/postgres -D $NEW_BACKEND
or perhaps, while old backend is still running:
pg_dumpall | path_to_new_backend/bin/postgres
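The first variant could be fleshed out as the sketch below. The directory layout, the `pg_dump_cmds` file, and the `single_user_upgrade` helper are all hypothetical -- no such canned tooling exists today.

```shell
#!/bin/sh
# Sketch of driving the OLD postgres binary in single-user mode and
# piping its output into the NEW one.  Paths are illustrative only.
OLD=${OLD:-/usr/local/pgsql-7.2}
NEW=${NEW:-/usr/local/pgsql-7.3}

single_user_upgrade() {
    db=$1
    # pg_dump_cmds stands for whatever input would make the old
    # standalone backend emit a restorable dump; it is a hypothetical
    # file here, not something the old release actually ships.
    "$OLD"/bin/postgres -D "$OLD"/data "$db" < pg_dump_cmds \
        | "$NEW"/bin/postgres -D "$NEW"/data "$db"
}
```

The appeal of this shape is that neither server needs a postmaster or a free port; the cost is keeping the old binaries around after the upgrade.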
I don't think we should assume that we will be able to do an upgrade
while we have less free space than is currently used by the databases (or at
least by the data -- indexes can be added later)
Trying to do an in-place upgrade is an interesting CS project, but any
serious DBA will have backups, so they can do
$ psql < dumpfile
Speeding up COPY FROM could be a good thing (perhaps enabling it to run
without any checks and outside transactions when used for loading dumps).
And home users will have databases small enough that they should have
enough free space to have both old and new version for some time.
What we do need is a more-or-less solid upgrade path using pg_dump.
BTW, how hard would it be to move pg_dump inside the backend (perhaps
using a dynamically loaded function to save space when not used) so that
it could be used like COPY ?
pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;
pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;
----------------
Hannu
Lamar Owen wrote:
On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
Lamar Owen wrote:
[...]
Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
deal of promise for seamless binary 'in place' upgrading. He has been
able to write code to read multiple versions' database structures --
proving that it CAN be done.

Unfortunately it's not the on-disk binary format of files that causes
the big problems. Our dump/initdb/restore sequence is also the solution
for system catalog changes.

Hmmm. They get in there via the bki interface, right? Is there an OID issue
with these? Could differential BKI files be possible, with known system
catalog changes that can be applied via a 'patchdb' utility? I know pretty
much how pg_upgrade is doing things now -- and, frankly, it's a little bit of
a kludge.
Sure, if it wasn't a kludge, I wouldn't have written it. ;-)
Does everyone remember my LIKE indexing kludge in gram.y? Until people
found a way to get it into the optimizer, it did its job. I guess
that's where pg_upgrade is at this point.
Actually, how can pg_upgrade be improved?
Also, we have committed to making file format changes for 7.3, so it
seems pg_upgrade will not be useful for that release unless we get some
binary conversion tool working.
Yes, I do understand the things a dump restore does on somewhat of a detailed
level. I know the restore repopulates the entries in the system catalogs for
the restored data, etc, etc.

Currently dump/restore handles the catalog changes. But by what other means
could we upgrade the system catalog in place?

Our very extensibility is our weakness for upgrades. Can it be worked around?
Anyone have any ideas?

Improving pg_upgrade may be the ticket -- but if the on-disk binary format
changes (like it has before), then something will have to do the binary
format translation -- something like pg_fsck.
Yep.
Incidentally, pg_fsck, or a program like it, should be in the core
distribution. Maybe not named pg_fsck, as our database isn't a filesystem,
but pg_dbck, or pg_dbcheck, or pg_dbfix, or similar. Although pg_fsck is
more of a pg_dbdump.

I've seen too many people bitten by upgrades gone awry. The more we can do in
that regard, the better.
I should mention that 7.3 will have pg_depend, which should make our
post-7.3 reload process much cleaner because we will not have dangling
objects as often.
And the Windows user will likely demand it. I never thought I'd be grateful
for a Win32 native PostgreSQL port... :-)
Yea, the trick is to get something working that will require minimal
change from release to release.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Hannu Krosing wrote:
Our very extensibility is our weakness for upgrades. Can it be worked around?
Anyone have any ideas?

Perhaps we can keep an old postgres binary + old backend around and then
use it in single-user mode to do a pg_dump into our running backend.
That brings up an interesting idea. Right now we dump the entire
database out to a file, delete the old database, and load in the file.
What if we could move over one table at a time? Copy out the table,
load it into the new database, then delete the old table and move on to
the next. That would allow us to upgrade having free space for just
the largest table. Another idea would be to record and remove all
indexes in the old database. That certainly would save disk space
during the upgrade.
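The table-at-a-time idea above could be sketched roughly as follows. The port numbers, database name, and the `move_table` helper are illustrative assumptions; both the old and the new server would have to be running side by side.

```shell
#!/bin/sh
# Sketch of table-at-a-time migration: copy one table into the new
# cluster, then drop it in the old one before moving to the next, so
# peak free-space demand stays near the size of the largest table.
OLD_PORT=${OLD_PORT:-5432}   # old server
NEW_PORT=${NEW_PORT:-5433}   # new server, running side by side
DB=${DB:-mydb}

move_table() {
    tbl=$1
    # schema + data for just this one table into the new cluster ...
    pg_dump -p "$OLD_PORT" -t "$tbl" "$DB" | psql -p "$NEW_PORT" "$DB" || return 1
    # ... then reclaim its space in the old cluster.
    echo "DROP TABLE $tbl;" | psql -p "$OLD_PORT" "$DB"
}
```

Dropping each table as soon as it is copied is what keeps the disk footprint down, but it also makes the process destructive partway through, so it would need careful error handling in practice.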
However, the limiting factor is that we don't have a mechanism to have
both databases running at the same time currently. Seems this may be
the direction to head in.
BTW, how hard would it be to move pg_dump inside the backend (perhaps
using a dynamically loaded function to save space when not used) so that
it could be used like COPY ?

pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;
pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;
Interesting idea, but I am not sure what that buys us. Having pg_dump
separate makes maintenance easier.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Hannu Krosing wrote:
However, the limiting factor is that we don't have a mechanism to have
both databases running at the same time currently.

How so?

AFAIK I can run as many backends as I like (up to some practical limit)
on the same computer at the same time, as long as they use different
ports and different data directories.
We don't have an automated system for doing this. Certainly it is done
all the time.
Interesting idea, but I am not sure what that buys us. Having pg_dump
separate makes maintenance easier.

Can pg_dump connect to a single-user-mode backend?
Uh, no, I don't think so.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
On Wed, 2002-07-03 at 17:28, Bruce Momjian wrote:
Hannu Krosing wrote:
Our very extensibility is our weakness for upgrades. Can it be worked around?
Anyone have any ideas?

Perhaps we can keep an old postgres binary + old backend around and then
use it in single-user mode to do a pg_dump into our running backend.

That brings up an interesting idea. Right now we dump the entire
database out to a file, delete the old database, and load in the file.

What if we could move over one table at a time? Copy out the table,
load it into the new database, then delete the old table and move on to
the next. That would allow us to upgrade having free space for just
the largest table. Another idea would be to record and remove all
indexes in the old database. That certainly would save disk space
during the upgrade.

However, the limiting factor is that we don't have a mechanism to have
both databases running at the same time currently.
How so ?
AFAIK I can run as many backends as I like (up to some practical limit)
on the same computer at the same time, as long as they use different
ports and different data directories.
Seems this may be
the direction to head in.

BTW, how hard would it be to move pg_dump inside the backend (perhaps
using a dynamically loaded function to save space when not used) so that
it could be used like COPY ?

pg> DUMP table [ WITH 'other cmdline options' ] TO stdout ;
pg> DUMP * [ WITH 'other cmdline options' ] TO stdout ;
Interesting idea, but I am not sure what that buys us. Having pg_dump
separate makes maintenance easier.
can pg_dump connect to single-user-mode backend ?
--------------------
Hannu
On Wednesday 03 July 2002 12:09 pm, Bruce Momjian wrote:
Hannu Krosing wrote:
AFAIK I can run as many backends as I like (up to some practical limit)
on the same computer at the same time, as long as they use different
ports and different data directories.
We don't have an automated system for doing this. Certainly it is done
all the time.
Good. Dialog. This is better than what I am used to when I bring up
upgrading. :-)
Bruce, pg_upgrade isn't as kludgey as what I have been doing with the RPMset
for these nearly three years.
No, what I envisioned was a standalone dumper that can produce dump output
without having a backend at all. If this dumper knows about the various
binary formats, and knows how to get my data into a form I can then restore
reliably, I will be satisfied. If it can be easily automated so much the
better. Doing it table by table would be ok as well.
I'm looking for a sequence such as:
----
PGDATA=location/of/data/base
TEMPDATA=location/of/temp/space/on/same/file/system
mv $PGDATA/* $TEMPDATA
initdb -D $PGDATA
pg_dbdump $TEMPDATA | pg_restore {with its associated options, etc}
With an rm -rf of $TEMPDATA much further down the pike.....
Keys to this working:
1.) Must not require the old version executable backend. There are a number
of reasons why this might be, but the biggest is due to the way much
upgrading works in practice -- the old executables are typically gone by the
time the new package is installed.
2.) Uses pg_dbdump of the new version. This dumper can be tailored to provide
the input pg_restore wants to see. The dump-restore sequence has always had
dumped-data version mismatch as its biggest problem -- there have been issues
before where you would have to install the new version of pg_dump to run
against the old backend. This is unacceptable in the real world of binary
packages.
One other usability note: why can't postmaster perform the steps of an initdb
if -D points to an empty directory? It's not that much code, is it? (I know
that one extra step isn't backbreaking, but I'm looking at this from a rank
newbie's point of view -- or at least I'm trying to look at it in that way,
as it's been a while since I was a rank newbie at PostgreSQL) Oh well, just
a random thought.
But I believe a backend-independent data dumper would be very useful in many
contexts, particularly those where a backend cannot be run for whatever
reason, but you need your data (corrupted system catalogs, high system load,
whatever). Upgrading is just one of those contexts.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
Lamar Owen wrote:
On Wednesday 03 July 2002 12:09 pm, Bruce Momjian wrote:
Hannu Krosing wrote:
AFAIK I can run as many backends as I like (up to some practical limit)
on the same computer at the same time, as long as they use different
ports and different data directories.

We don't have an automated system for doing this. Certainly it is done
all the time.

Good. Dialog. This is better than what I am used to when I bring up
upgrading. :-)

Bruce, pg_upgrade isn't as kludgey as what I have been doing with the RPMset
for these nearly three years.

No, what I envisioned was a standalone dumper that can produce dump output
without having a backend at all. If this dumper knows about the various
binary formats, and knows how to get my data into a form I can then restore
reliably, I will be satisfied. If it can be easily automated so much the
better. Doing it table by table would be ok as well.
The problem with a standalone dumper is that you would have to recode
this for every release, with little testing possible. Having the old
backend active saves us that step. If we get it working, we can use it
over and over again for each release with little work on our part.
Keys to this working:
1.) Must not require the old version executable backend. There are a number
of reasons why this might be, but the biggest is due to the way much
upgrading works in practice -- the old executables are typically gone by the
time the new package is installed.
Oh, that is a problem. We would have to require the old executables.
2.) Uses pg_dbdump of the new version. This dumper can be tailored to provide
the input pg_restore wants to see. The dump-restore sequence has always had
dumped-data version mismatch as its biggest problem -- there have been issues
before where you would have to install the new version of pg_dump to run
against the old backend. This is unacceptable in the real world of binary
packages.

One other usability note: why can't postmaster perform the steps of an initdb
if -D points to an empty directory? It's not that much code, is it? (I know
that one extra step isn't backbreaking, but I'm looking at this from a rank
newbie's point of view -- or at least I'm trying to look at it in that way,
as it's been a while since I was a rank newbie at PostgreSQL) Oh well, just
a random thought.
The issue is that if you have PGDATA pointed to the wrong place, it
creates a new instance automatically. Could be strange for people, but
we could prompt them to run initdb I guess.
But I believe a backend-independent data dumper would be very useful in many
contexts, particularly those where a backend cannot be run for whatever
reason, but you need your data (corrupted system catalogs, high system load,
whatever). Upgrading is just one of those contexts.
Yes, but who wants to write one of those for every release? That is
where we get stuck, and with our limited resources, is it desirable to
encourage people to work on it?
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026