7.1 Release Date

Started by Miguel Omar Carvajal over 25 years ago in "general" (36 messages)
#1Miguel Omar Carvajal
omar@carvajal.com

Hi there,
When will Postgresql 7.1 be released?

Miguel

#2The Hermit Hacker
scrappy@hub.org
In reply to: Miguel Omar Carvajal (#1)
Re: 7.1 Release Date

On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:

Hi there,
When will Postgresql 7.1 be released?

right now, we're looking at October-ish for going beta, so most likely
November-ish for a release ...

#3Trond Eivind Glomsrød
teg@redhat.com
In reply to: The Hermit Hacker (#2)
Re: 7.1 Release Date

The Hermit Hacker <scrappy@hub.org> writes:

On Mon, 28 Aug 2000, Miguel Omar Carvajal wrote:

Hi there,
When will Postgresql 7.1 be released?

right now, we're looking at October-ish for going beta, so most likely
November-ish for a release ...

Will there be a clean upgrade path this time, or
yet another dump-initdb-restore procedure?

Unclean upgrades are one of the major disadvantages of PostgreSQL FTTB,
IMHO.
--
Trond Eivind Glomsrød
Red Hat, Inc.

#4The Hermit Hacker
scrappy@hub.org
In reply to: Trond Eivind Glomsrød (#3)
Re: 7.1 Release Date

On 29 Aug 2000, Trond Eivind Glomsrød wrote:

Will there be a clean upgrade path this time, or
yet another dump-initdb-restore procedure?

IMHO, upgrading a database server is like upgrading an operating system
... you schedule downtime, back it all up and upgrade ...

there is the pg_upgrade script available, which some people have had
varying degrees of success with, but which I've personally never used ...

#5Trond Eivind Glomsrød
teg@redhat.com
In reply to: The Hermit Hacker (#4)
Re: 7.1 Release Date

The Hermit Hacker <scrappy@hub.org> writes:

IMHO, upgrading a database server is like upgrading an operating system
... you schedule downtime, back it all up and upgrade ...

The problem is, this doesn't play that well with upgrading the
database when upgrading the OS, like in most Linux distributions.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#6The Hermit Hacker
scrappy@hub.org
In reply to: Trond Eivind Glomsrød (#5)
Re: 7.1 Release Date

On 29 Aug 2000, Trond Eivind Glomsrød wrote:

The problem is, this doesn't play that well with upgrading the
database when upgrading the OS, like in most Linux distributions.

why not? pg_dump;pkrm old;pkadd new;load ... no?

I use both Solaris and FreeBSD, and it's pretty much "that simple" for both
of those ...
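
To spell out the dump/remove/install/reload cycle being described (a
rough sketch only, with hypothetical package names and paths -- the
Solaris-style pkgrm/pkgadd commands stand in for whatever your platform
uses):

    # 1. Dump everything while the old server is still running
    pg_dumpall > /backup/pgdump-6.5.sql
    # 2. Stop the old postmaster and remove the old package
    pg_ctl -D /usr/local/pgsql/data stop
    pkgrm postgresql
    # 3. Install the new package and initialize a fresh cluster
    pkgadd -d /tmp/postgresql-7.0.2.pkg
    mv /usr/local/pgsql/data /usr/local/pgsql/data.old
    initdb -D /usr/local/pgsql/data
    # 4. Start the new postmaster and reload the dump
    pg_ctl -D /usr/local/pgsql/data start
    psql template1 < /backup/pgdump-6.5.sql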

#7Trond Eivind Glomsrød
teg@redhat.com
In reply to: The Hermit Hacker (#6)
Re: 7.1 Release Date

The Hermit Hacker <scrappy@hub.org> writes:

why not? pg_dump;pkrm old;pkadd new;load ... no?

Because the system is down during this upgrade - the database isn't
running. Also, an automated dump might lead to data loss if disk space
becomes an issue.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#8The Hermit Hacker
scrappy@hub.org
In reply to: Trond Eivind Glomsrød (#7)
Re: 7.1 Release Date

On 29 Aug 2000, Trond Eivind Glomsrød wrote:

Because the system is down during this upgrade - the database isn't
running. Also, an automated dump might lead to data loss if disk space
becomes an issue.

woah, I'm confused here ... are you saying that you want to upgrade the
database server at the same time, and in conjunction with, upgrading the
Operating System?

#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Trond Eivind Glomsrød (#3)
Re: 7.1 Release Date

teg@redhat.com (Trond Eivind Glomsrød) writes:

Will there be a clean upgrade path this time, or
yet another dump-initdb-restore procedure?

Still TBD, I think --- right now pg_upgrade would still work, but if
Vadim finishes WAL there's going to have to be a dump/reload for that.

Another certain dump/reload in the foreseeable future will come from
adding tablespace support/changing file naming conventions.

Unclean upgrades are one of the major disadvantages of PostgreSQL FTTB,
IMHO.

You can always stick to Postgres 6.5 :-). There are certain features
that just cannot be added without redoing the on-disk table format.
I don't think we will ever want to promise "no more dump/reload";
if we do, it will mean that Postgres has stopped improving.

regards, tom lane

#10Trond Eivind Glomsrød
teg@redhat.com
In reply to: Tom Lane (#9)
Re: 7.1 Release Date

Tom Lane <tgl@sss.pgh.pa.us> writes:

You can always stick to Postgres 6.5 :-). There are certain features
that just cannot be added without redoing the on-disk table format.
I don't think we will ever want to promise "no more dump/reload";
if we do, it will mean that Postgres has stopped improving.

Not necessarily - one could either design an on-disk format with room
for expansion or create migration tools to add new fields.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#11Tom Lane
tgl@sss.pgh.pa.us
In reply to: Trond Eivind Glomsrød (#10)
Re: 7.1 Release Date

teg@redhat.com (Trond Eivind Glomsrød) writes:

Tom Lane <tgl@sss.pgh.pa.us> writes:

You can always stick to Postgres 6.5 :-). There are certain features
that just cannot be added without redoing the on-disk table format.
I don't think we will ever want to promise "no more dump/reload";
if we do, it will mean that Postgres has stopped improving.

Not necessarily - one could either design an on-disk format with room
for expansion or create migration tools to add new fields.

"Room for expansion" isn't necessarily the issue --- sometimes you
just have to fix wrong decisions. The table-file-naming business is
a perfect example.

Migration tools might ease the pain, sure (though I'd still recommend
doing a full backup before a major version upgrade, just on safety
grounds; so the savings afforded by a tool might not be all that much).

Up to now, the attitude of the developer community has mostly been
that our TODO list is a mile long and we'd rather spend our limited
time on bug fixes and new features than on migration tools --- both
because it seemed like the right set of priorities for the project,
and because fixes/features are fun while tools are just work ;-).
But perhaps that is an area where Great Bridge and PostgreSQL Inc can
make some contributions using support-contract funding.

regards, tom lane

#12Lamar Owen
lamar.owen@wgcr.org
In reply to: Tom Lane (#11)
Re: 7.1 Release Date

Tom Lane wrote:

Migration tools might ease the pain, sure (though I'd still recommend
doing a full backup before a major version upgrade, just on safety
grounds; so the savings afforded by a tool might not be all that much).

What is needed, IMHO, is a replacement to the pg_upgrade script that can
do the following:
1.) Read _any_ previous version's format data files;
2.) Write the current version's data files (without a running
postmaster).

This replacement (call it pg_upgrade to confuse everybody) would be
called as: pg_upgrade OLDPGDATA NEWPGDATA and would simply Do The Right
Thing for that directory -- including making an ASCII dump (command line
switch, perhaps), checking disk space, robust error detection, and
_seamless_ upgrading of system catalogs and indices (all it needs to do
is call initdb on the NEWPGDATA tree, right?). The key is seamless.
The second key is _without_ a running postmaster. Much of pg_dump's
code would be needed as well, to generate an ASCII dump.
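
As a usage sketch of that proposal (entirely hypothetical -- no such
tool exists; the --ascii-dump switch and the paths are invented here to
illustrate the "command line switch, perhaps" above):

    # Old postmaster stopped; run as the postgres user (hypothetical)
    pg_upgrade --ascii-dump /var/lib/pgsql/data.old /var/lib/pgsql/data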

Now, this new pg_upgrade would have to know a great deal about data file
formats (but, of course, since we're on CVS, getting the old code to do
the old formats is as simple as checking out the old version, right?).

HOWEVER, I see no two ways around the fact that a core developer needs
to be the one to do this utility. In particular, the developer to write
this utility needs to know the backend code as well or better than any
other developer -- and, Tom, that person sounds like you.

Now, it _may_ be possible for another developer to do this -- and, if I
thought my grasp of the backend was good enough I would go ahead and
volunteer -- in fact, if I can get the help I need to do it, and the
time to do it in, I _will_ volunteer. Of course, it will take me much
longer to make a working tool, as I'm going to have to learn what Tom
(and others) already know -- but I am willing to put in the time to make
this work _right_. This upgrade issue has been a thorn in my side far
too long.

And, to answer the questions: currently, the RPM's have to upgrade the
way they do because they are called during an OS upgrade cycle -- if
you are running RedHat 6.2 with the 6.5.3-6 PostgreSQL RPM's installed,
and you upgrade to Pinstripe (the RH 7 public beta), which gives you
7.0.2 RPM's, the binaries necessary to extract the data from
PGDATA are going to be wiped away by the upgrade -- currently, they are
being backed up by the RPM's pre-install script so that an upgrade
script can then call them into service after the hapless user has
figured out that PostgreSQL doesn't upgrade smoothly. This is fine and
good as long as Pinstripe can run the old binaries -- which might not be
true for the next dot-oh RedHat upgrade!

Actually, that is true _now_ if a RedHat 4.x user attempts to upgrade to
Pinstripe -- correct me if I'm wrong, Trond.

We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
Thing. Furthermore, with a little work, this program could be used to
salvage broken databases. But imagine upgrading from Postgres95 1.01 to
PostgreSQL 7.1.0 with a single pg_upgrade command AFTER loading 7.1.0
(besides, there are many bugs in pre-6.3 pg_dump, right? A dump/restore
won't work there anyway). Imagine a simple upgrade for those folks who
use large objects. It should be doable.

Note that ANY RPM-based distribution is going to have this same
problem. Yes, Tom, the RPM-based OS's upgrade procedures are
brain-dead. But, it can also be argued that our dump/restore upgrade
procedure is also brain-dead.

I think it's high time that the dump/initdb/restore cycle was retired
as a normal upgrading step.

Or, to put it into 'fighting words', 'mysql doesn't have this problem.'

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

#13Brook Milligan
brook@biology.nmsu.edu
In reply to: Lamar Owen (#12)
Re: 7.1 Release Date

We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
Thing.

I think it's high time that the dump/initdb/restore cycle was retired
as a normal upgrading step.

YOU (i.e., people relying on the RH stuff to do everything at once)
may need such a thing, but it seems like you are overstating the case
just a bit. If this project gets adopted by core developers, it would
seem to conflict drastically with the goal of developing the core
functionality. Thus, it's not quite "high time" for this.

There is nothing inherently different (other than implementation
details) about the basic procedure for upgrading the database as
compared to upgrading user data of any sort. In each case, you need
to go through the steps of 1) dump data to a secure place, 2) destroy
the old stuff, 3) add new stuff, and 4) restore the old data. In the
case of "normal" user data (home directories and such) the
dump/restore sequence can be performed using exactly those commands or
tar or dd or whatever. In the case of the database we have the
pg_dump/psql commands. In either case, the person doing the upgrade
must have enough of a clue to have made an appropriate dump in the
first place before trashing their system. If the person lacks such a
clue, the solution is education (e.g., make the analogy explicit, show
the tools required, make pg_dump more robust, ...) not redirecting the
precious resources of core developers to duplicate the database system
in a standalone program for upgrades.

Cheers,
Brook

#14Alfred Perlstein
bright@wintelcom.net
In reply to: Brook Milligan (#13)
Re: 7.1 Release Date

* Brook Milligan <brook@biology.nmsu.edu> [000829 12:07] wrote:

There is nothing inherently different (other than implementation
details) about the basic procedure for upgrading the database as
compared to upgrading user data of any sort. In each case, you need
to go through the steps of 1) dump data to a secure place, 2) destroy
the old stuff, 3) add new stuff, and 4) restore the old data.

Actually, you make the process sound way too evil; a slightly more
complex procedure can leave you fully operational if anything goes
wrong:

install new postgresql
start new version on alternate port
suspend updating data (but not queries)
do a direct pg_dump into the new version
(I think you need to export PGPORT to use the alternate port)
suspend all queries
shutdown old version
restart new version on default port
resume queries

if (problems == 0) {
        resume updates;
} else {
        stop updates and queries;
        shutdown new;
        restart old;
        resume normal operations;
}

Ok, it's a LOT more complex, but with careful planning pain may be
kept to an acceptable minimum.
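
A concrete sketch of that sequence, assuming both versions are
installed side by side (ports, paths, and version numbers are
illustrative; PGPORT is the standard libpq environment variable):

    # Initialize and start the new version on an alternate port
    /usr/local/pgsql-7.1/bin/initdb -D /usr/local/pgsql-7.1/data
    /usr/local/pgsql-7.1/bin/pg_ctl -D /usr/local/pgsql-7.1/data \
        -o "-p 5433" start
    # ... suspend application updates here (queries may continue) ...
    # Pipe the old cluster straight into the new one -- no dump file on disk
    PGPORT=5432 pg_dumpall | PGPORT=5433 psql template1
    # ... suspend all queries ...
    pg_ctl -D /usr/local/pgsql/data stop          # shut down old version
    /usr/local/pgsql-7.1/bin/pg_ctl -D /usr/local/pgsql-7.1/data stop
    /usr/local/pgsql-7.1/bin/pg_ctl -D /usr/local/pgsql-7.1/data \
        -o "-p 5432" start                        # restart on default port
    # If problems turn up: stop the new server and restart the old one.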

--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."

#15Trond Eivind Glomsrød
teg@redhat.com
In reply to: Lamar Owen (#12)
Re: 7.1 Release Date

Lamar Owen <lamar.owen@wgcr.org> writes:

Actually, that is true _now_ if a RedHat 4.x user attempts to upgrade to
Pinstripe -- correct me if I'm wrong, Trond.

For Red Hat 4.x, that would be true - we don't support the ancient
libc5 anymore (OTOH, we didn't include Postgres95 at the time either).

Note that ANY RPM-based distribution is going to have this same
problem.

Not just RPM-based - any distribution that upgrades while the system is
offline.

Yes, Tom, the RPM-based OS's upgrade procedures are brain-dead.

No, it's not - it's just not making assumptions like "enough space is
present to dump everything somewhere" (if you have a multiGB database,
dumping it to upgrade sounds like a bad idea), "the database server is
running, so I can just dump the data" etc.

But, it can also be argued that our dump/restore upgrade procedure
is also brain-dead.

This is basically "no upgrade path. But you can dump your old data
before upgrading. And you can insert data in the new database".

--
Trond Eivind Glomsrød
Red Hat, Inc.

#16Trond Eivind Glomsrød
teg@redhat.com
In reply to: Brook Milligan (#13)
Re: 7.1 Release Date

Brook Milligan <brook@biology.nmsu.edu> writes:

YOU (i.e., people relying on the RH stuff to do everything at once)
may need such a thing, but it seems like you are overstating the case
just a bit. If this project gets adopted by core developers, it would
seem to conflict drastically with the goal of developing the core
functionality.

Upgradability is also functionality.

In the case of "normal" user data (home directories and such) the
dump/restore sequence can be performed using exactly those commands or
tar or dd or whatever.

You usually don't do that at all - the home directories and the users'
data stay just the way they are.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#17Lamar Owen
lamar.owen@wgcr.org
In reply to: Brook Milligan (#13)
Re: 7.1 Release Date

Brook Milligan wrote:

We NEED this 'pg_upgrade'-on-steroids program that simply Does The Right
Thing.

I think it's high time that the dump/initdb/restore cycle was retired
as a normal upgrading step.

YOU (i.e., people relying on the RH stuff to do everything at once)
may need such a thing, but it seems like you are overstating the case
just a bit. If this project gets adopted by core developers, it would
seem to conflict drastically with the goal of developing the core
functionality. Thus, it's not quite "high time" for this.

Does a dump/restore from 6.5.3 to 7.1.0 properly handle large objects
yet? (I know Philip Warner is working on it -- but that is NOT going to
help the person running an old version wanting to upgrade). I would
dare say that there are more users of PostgreSQL running on RedHat than
all other platforms combined.

That's fine -- if PostgreSQL doesn't want to cater to newbies who simply
want it to work, then someone else will cater to them. Personally, I
believe the 'newbie niche' is one of many niches that PostgreSQL fills
very effectively -- until the hapless newbie upgrades his OS and trashes
his database in the process. Then he goes and gets someone else's
database and badmouths PostgreSQL. (as to those other niches, I benched
my OpenACS installation yesterday at 10.5 pages per second -- where each
page involved 7-10 SQL queries -- with a concurrent load of 50
connections. PostgreSQL's speed and scalability are major benefits --
its relative ease of installation and administration is another major
benefit).

Education is nice -- but, tell me, first of all, how is the newbie to
find it? Release notes that don't get put on the disk until it's
already too late to do a proper dump/restore? Sure, old hands at
PostgreSQL know the drill -- I know to uncheck PostgreSQL during OS
upgrades. But, even that doesn't help if the new version of the OS
can't run the old version's binaries.

This is not the first time I've mentioned this -- nor is it the first
time it has been called into question. This upgrading issue is already
wearing thin at RedHat (or didn't you notice Trond's message) -- it
would not surprise me in the least to see PostgreSQL dropped from the
RedHat distribution in favor of InterBase or MySQL if this issue isn't
fixed for 7.1. Sure, it's their loss -- unless you actually want
PostgreSQL to be more popular, which I would like. Even if RedHat drops
PostgreSQL, I'm likely to remain with it -- at least until InterBase's
AOLserver driver is up to par, and OpenACS is fully ported over to
InterBase. Well, even then I'll likely remain with PostgreSQL, as it
works, I know it (relatively well), and the development community is
great to work with.

first place before trashing their system. If the person lacks such a
clue, the solution is education (e.g., make the analogy explicit, show
the tools required, make pg_dump more robust, ...) not redirecting the
precious resources of core developers to duplicate the database system
in a standalone program for upgrades.

No one outside the PostgreSQL developer community understands why it is
such an issue to require dump/restore at _every_single_ minor update --
ooops, sorry, major update where their minor is our major. Or, to put
it differently -- mysql doesn't have this problem. Sure, mysql has
plenty of problems, but this isn't one of them.

Did you also miss where I'm willing to do the legwork myself? I'm at
that point of aggravation over this -- but, then again, I get the 100+
emails a week about the RPM set, and I get the ire of newbies who are
dumbfounded that they have to be _that_careful_ during updates. Maybe I
_am_ a little too vehement over this -- but, I am not alone. I know
Trond shares my frustration -- amongst others.

Just how long would such a program take to write, anyway? Probably not
nearly as long as you might suspect, as such a program is just a
translator, taking input in one format and rewriting it to another
format. You just have to know what to translate and how to translate --
there are details of course (such as pg_log handling), but the basics
are already coded in the existing backends of the many versions. There's
no SQL parsing or executing to deal with -- just reading in one format
and writing in another.

In fact, you would only need to support upgrades from 9 versions (1.01,
1.09, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 7.0) to make this work -- and some of
those versions have the same binary format (am I right on that, Tom,
Bruce, Thomas, or Vadim?). IIRC, the binary format changed at 6.5 -- so
you basically have pre-6.5 and post-6.5 data to worry with, as the other
changes that require the dump/initdb/restore are system catalog issues,
right? Since the new pg_upgrade would do an initdb as part of its
operation (in the new directory), the old system catalogs will only have
to be read for certain things, I would think.

Comments?

If we don't do it, someone else will. Yes, maybe I overstated the issue
-- unless you agree that RedHat's continued distribution of PostgreSQL
is a good thing.

If such a program were already written, wouldn't you use it, Brook?

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

#18Lamar Owen
lamar.owen@wgcr.org
In reply to: Trond Eivind Glomsrød (#15)
Re: 7.1 Release Date

Trond Eivind Glomsrød wrote:

For Red Hat 4.x, that would be true - we don't support the ancient
libc5 anymore (OTOH, we didn't include Postgres95 at the time either).

There were RPM's at the time, though -- I ran 6.1.1 for nearly a year on
RedHat 4.1 until I upgraded to RH 5 (which shipped 6.2.1) after being
cracked. Good thing it was a reinstall from scratch -- those 6.1.1 RPMs
were _very_ different from what RedHat shipped in 5.0.

Note that ANY RPM-based distribution is going to have this same
problem.

Not just RPM-based - any distribution that upgrades while the system is
offline.

Like Debian. Of course, the RPM postgresql-dump script came from the
Debian packages -- so Oliver knows where I'm coming from. However,
Debian upgrading is more intelligent in many areas than RPM upgrading
is.

Yes, Tom, the RPM-based OS's upgrade procedures are brain-dead.

No, it's not - it's just not making assumptions like "enough space is
present to dump everything somewhere" (if you have a multiGB database,
dumping it to upgrade sounds like a bad idea), "the database server is
running, so I can just dump the data" etc.

'Brain-dead' meaning WRT upgrading RPMs...:
1.) I can't start a backend to dump data if the RPM is installing under
anaconda;
2.) I can't check to see if a backend is running (as an RPM pre or post
script can't use ps or cat /proc reliably (according to Jeff Johnson) if
that pre or post script is running under anaconda);
3.) I can't even check to see if the RPM is installing under anaconda!
(ie, to have a more interactive upgrade if the RPM -U is from the
command line, a check for the dump, or a confirmation from the user that
he/she knows what they're getting ready to do) -- in fact, I would
prefer to abort the upgrade of postgresql RPM's in anaconda as it
currently stands -- but that might easily abort the whole install!
4.) I'm not guaranteed of package upgrade order with split packages;
5.) I'm not even guaranteed to have basic system commands available,
unless I Prereq: them in the RPM (which is the fix for that);
6.) The installation chroot system is flaky (again, according to Jeff
Johnson) -- the fewer things you do, the better. My current backing up
of the old executables was really more than Jeff wanted to see. Maybe
this is fixed in Pinstripe.
7.) The requirements and script orders are not as well documented as one
might want.
8.) If I need to do complex operations to upgrade a package, it
shouldn't be a problem to do so in a pre install script -- but it is a
big problem. There _are_ other packages that require some _interesting_
steps to upgrade....

But, it can also be argued that our dump/restore upgrade procedure
is also brain-dead.

This is basically "no upgrade path. But you can dump your old data
before upgrading. And you can insert data in the new database".

Vegetable upgrades. You have really trimmed it to essentials --
PostgreSQL has no upgrade path in actuality. I seem to remember several
messages to this list in the past about problems with restoring data
dumped under older versions....

Upgrades should just be this simple:
Install new version.
Start new version's postmaster, which issues a 'pg_upgrade' in safest
mode.
If pg_upgrade fails for any reason, get DBA intervention, otherwise,
just start the postmaster already!

This could just as easily be:
Install new version.
Run pg_upgrade if required.
Start postmaster, and it just runs.

It SHOULD be that simple. It CAN be that simple. Effort HAS been
expended already on this issue -- there is a pg_upgrade script already
written that tries to do some of this, but without actually translating
the contents of the relation files. Maybe we should file this as a bug
against pg_upgrade :-).
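
A sketch of what that flow might look like in a package's post-install
or init script (hypothetical throughout -- the --safest-mode switch is
Lamar's proposal, not an option of today's pg_upgrade script):

    # Hypothetical post-install upgrade hook
    if ! pg_upgrade --safest-mode "$PGDATA"; then
        echo "pg_upgrade failed; DBA intervention required" >&2
        exit 1
    fi
    pg_ctl -D "$PGDATA" start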
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

#19Andrew Sullivan
sullivana@bpl.on.ca
In reply to: Lamar Owen (#17)
Re: 7.1 Release Date

On Tue, Aug 29, 2000 at 03:33:48PM -0400, Lamar Owen wrote:

This upgrading issue is already wearing thin at RedHat (or didn't
you notice Trond's message) -- it would not surprise me in the
least to see PostgreSQL dropped from the RedHat distribution in
favor of InterBase or MySQL if this issue isn't fixed for 7.1.

Why don't they just do a test, and then echo an explanation of why
the old Postgres can't be updated? That's the way it works in
Debian, and I don't see anything wrong with it. I can't believe that
Red Hat figures its package management is so good that it will
handle all cases, and then blames everyone else when the package
management breaks the packages.
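
A sketch of the kind of pre-upgrade test Andrew means (illustrative
only -- the PG_VERSION file really does live in the data directory, but
the paths and messages here are invented):

    # Refuse to upgrade silently over an incompatible data directory
    if [ -f /var/lib/pgsql/data/PG_VERSION ] &&
       [ "$(cat /var/lib/pgsql/data/PG_VERSION)" != "7.0" ]; then
        echo "PostgreSQL data directory is from an older release;" >&2
        echo "dump it with the old binaries before upgrading." >&2
        exit 1
    fi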

A

-- 
Andrew Sullivan                                      Computer Services
<sullivana@bpl.on.ca>                        Burlington Public Library
+1 905 639 3611 x158                                   2331 New Street
                                   Burlington, Ontario, Canada L7R 1J4
#20Trond Eivind Glomsrød
teg@redhat.com
In reply to: Lamar Owen (#18)
Re: 7.1 Release Date

Lamar Owen <lamar.owen@wgcr.org> writes:

'Brain-dead' meaning WRT upgrading RPMs...:
1.) I can't start a backend to dump data if the RPM is installing under
anaconda;

You can try, but I don't see it as a good idea.

2.) I can't check to see if a backend is running (as an RPM pre or post
script can't use ps or cat /proc reliably (according to Jeff Johnson) if
that pre or post script is running under anaconda);

This should work, I think.

3.) I can't even check to see if the RPM is installing under anaconda!

That should be irrelevant, actually - RPM is designed to be
non-interactive. The best place to do this would probably be in the
condrestart, which is usually run when upgrading and restarts the
server if it is already running.

(ie, to have a more interactive upgrade if the RPM -U is from the
command line, a check for the dump, or a confirmation from the user that
he/she knows what they're getting ready to do)

rpm is non-interactive by design.

4.) I'm not guaranteed of package upgrade order with split packages;

Prereq versions of the other components.

5.) I'm not even guaranteed to have basic system commands available,
unless I Prereq: them in the RPM (which is the fix for that);

Yup.

6.) The installation chroot system is flaky (again, according to Jeff
Johnson) -- the fewer things you do, the better.

No. Yes.

7.) The requirements and script orders are not as well documented as one
might want.

More documentation is being worked on.

Upgrades should just be this simple:
Install new version.
Start new version's postmaster, which issues a 'pg_upgrade' in safest
mode.
If pg_upgrade fails for any reason, get DBA intervention, otherwise,
just start the postmaster already!

I would love that.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#21Karl DeBisschop
kdebisschop@range.infoplease.com
In reply to: The Hermit Hacker (#2)
#22Lamar Owen
lamar.owen@wgcr.org
In reply to: The Hermit Hacker (#2)
#23Tom Lane
tgl@sss.pgh.pa.us
In reply to: Lamar Owen (#17)
#24Lamar Owen
lamar.owen@wgcr.org
In reply to: The Hermit Hacker (#2)
#25Tom Lane
tgl@sss.pgh.pa.us
In reply to: Lamar Owen (#24)
#26Bill Barnes
bbarnes@operamail.com
In reply to: Tom Lane (#25)
#27Lamar Owen
lamar.owen@wgcr.org
In reply to: Bill Barnes (#26)
#28Elmar Haneke
elmar@haneke.de
In reply to: The Hermit Hacker (#2)
#29Sander Steffann
steffann@nederland.net
In reply to: The Hermit Hacker (#2)
#30g
brian@wuwei.govshops.com
In reply to: Tom Lane (#11)
#31Tille, Andreas
TilleA@rki.de
In reply to: Karl DeBisschop (#21)
#32Bruce Momjian
bruce@momjian.us
In reply to: Lamar Owen (#12)
#33Jim Mercer
jim@reptiles.org
In reply to: Bruce Momjian (#32)
#34Bruce Momjian
bruce@momjian.us
In reply to: Jim Mercer (#33)
#35Jim Mercer
jim@reptiles.org
In reply to: Bruce Momjian (#34)
In reply to: Bruce Momjian (#32)