pg_dump versus ancient server versions
While doing some desultory testing, I realized that the commit
I just pushed (92316a458) broke pg_dump against 8.0 servers:
$ pg_dump -p5480 -s regression
pg_dump: error: schema with OID 11 does not exist
The reason turns out to be something I'd long forgotten about: except
for the few "bootstrap" catalogs, our system catalogs didn't use to
have fixed OIDs. That changed at 7c13781ee, but 8.0 predates that.
So when pg_dump reads a catalog on 8.0, it gets some weird number for
"tableoid", and the logic I just put into common.c's findNamespaceByOid
et al fails to find the resulting DumpableObjects.
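For context, the lookup helpers added by 92316a458 index pg_dump's DumpableObjects by the OIDs read from the server. A much-simplified sketch (illustrative only; the function body, table contents, and struct here are hypothetical stand-ins, not the real pg_dump code) of why a server reporting unexpected OIDs makes such a lookup come up empty:

```c
/* Simplified sketch of an OID-keyed catalog lookup in the style of
 * common.c's findNamespaceByOid().  Objects are kept in an array
 * sorted by OID and located by binary search; if the server reports
 * an OID we never stored -- as happens when "tableoid" isn't the
 * fixed value we expected -- the search fails and the caller errors
 * out, e.g. "schema with OID 11 does not exist". */
#include <stddef.h>

typedef unsigned int Oid;

typedef struct
{
    Oid         oid;
    const char *name;
} NamespaceInfo;

/* hypothetical table, sorted by OID */
static NamespaceInfo namespaces[] = {
    {11, "pg_catalog"},
    {99, "pg_toast"},
    {2200, "public"},
};

static NamespaceInfo *
findNamespaceByOid(Oid oid)
{
    int         low = 0;
    int         high = (int) (sizeof(namespaces) / sizeof(namespaces[0])) - 1;

    while (low <= high)
    {
        int         mid = (low + high) / 2;

        if (namespaces[mid].oid == oid)
            return &namespaces[mid];
        if (namespaces[mid].oid < oid)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return NULL;                /* not found: pg_dump reports an error */
}
```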
So my first thought was just to revert 92316a458 and give up on it as
a bad idea. However ... does anyone actually still care about being
able to dump from such ancient servers? In addition to this issue,
I'm thinking of the discussion at [1] about wanting to use unnest()
in pg_dump, and of what we would need to do instead in pre-8.4 servers
that lack that. Maybe it'd be better to move up pg_dump's minimum
supported server version to 8.4 or 9.0, and along the way whack a
few more lines of its backward-compatibility hacks. If there is
anyone out there still using an 8.x server, they could use its
own pg_dump whenever they get around to migration.
Another idea would be to ignore "tableoid" and instead use the OIDs
we're expecting, but that's way too ugly for my taste, especially
given the rather thin argument for committing 92316a458 at all.
Anyway, I think the default answer is "revert 92316a458 and keep the
compatibility goalposts where they are". But I wanted to open up a
discussion to see if anyone likes the other approach better.
regards, tom lane
[1]: /messages/by-id/20211022055939.z6fihsm7hdzbjttf@alap3.anarazel.de
On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Anyway, I think the default answer is "revert 92316a458 and keep the
compatibility goalposts where they are". But I wanted to open up a
discussion to see if anyone likes the other approach better.
I'd rather drop legacy support than revert. Even if the benefit of
92316a458 is limited to refactoring, the fact that it was committed is
enough for me to feel it is a worthwhile improvement. It's still yet another five
years before there won't be a supported release that can dump/restore this
- so 20 years for someone to have upgraded without having to go to the (not
that big a) hassle of installing an out-of-support version as a stop-over.
In short, IMO, the bar for this kind of situation should be 10 releases at
most - 5 of which would be in support at the time the patch goes in. We
don't have to actively drop support of older stuff but anything older
shouldn't be preventing new commits.
David J.
On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
So my first thought was just to revert 92316a458 and give up on it as
a bad idea. However ... does anyone actually still care about being
able to dump from such ancient servers?
I think I recently heard about an 8.4 server still out there in the
wild, but AFAICR it's been a long time since I've heard about anything
older.
It seems to me that if you're upgrading by a dozen server versions in
one shot, it's not a totally crazy idea that you might want to do it
in steps, or use the pg_dump for the version you have and then hack
the dump. I kind of wonder if there's really any hope of a pain-free
upgrade across that many versions anyway. There are things that can
bite you despite all the work we've put into pg_dump, like having
objects that depend on system objects whose definition has changed
over the years, plus implicit casting differences, operator precedence
changes, => getting deprecated, lots of GUC changes, etc. You are
going to be able to upgrade in the end, but it's probably going to
take some work. So I'm not really sure that giving up pg_dump
compatibility for versions that old is losing as much as it may seem.
Another thing to think about in that regard: how likely is it that
PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same
operating system? I suspect the answer is "not very." I seem to recall
Greg Stark trying to compile really old versions of PostgreSQL for a
conference talk some years ago, and he got back to a point where it
just became impossible to make work on modern toolchains even with a
decent amount of hackery. One tends to think of C as about as static a
thing as can be, but that's kind of missing the point. On my laptop
for example, my usual configure invocation fails on 7.4 with:
checking for SSL_library_init in -lssl... no
configure: error: library 'ssl' is required for OpenSSL
In fact, I get that same failure on every branch older than 9.2. I
expect I could work around that by disabling SSL or finding an older
version of OpenSSL that works the way those branches expect, but that
might not be the only problem, either. Now I understand you could
have PostgreSQL 15 on a new box and PostgreSQL 7.x on an ancient one
and connect via the network, and it would in all fairness be cool if
that Just Worked. But I suspect that even if that did happen in the
lab, reality wouldn't often be so kind.
--
Robert Haas
EDB: http://www.enterprisedb.com
"David G. Johnston" <david.g.johnston@gmail.com> writes:
On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Anyway, I think the default answer is "revert 92316a458 and keep the
compatibility goalposts where they are". But I wanted to open up a
discussion to see if anyone likes the other approach better.
... IMO, the bar for this kind of situation should be 10 releases at
most - 5 of which would be in support at the time the patch goes in. We
don't have to actively drop support of older stuff but anything older
shouldn't be preventing new commits.
Yeah. I checked into when it was that we dropped pre-8.0 support
from pg_dump, and the answer is just about five years ago (64f3524e2).
So moving the bar forward by five releases isn't at all out of line.
8.4 would be eight years past EOL by the time v15 comes out.
One of the arguments for the previous change was that it was getting
very hard to build old releases on modern platforms, thus making it
hard to do any compatibility testing. I believe the same is starting
to become true of the 8.x releases, though I've not tried personally
to build any of them in some time. (The executables I'm using for
them date from 2014 or earlier, and have not been recompiled in
subsequent platform upgrades ...) Anyway it's definitely not free
to continue to support old source server versions.
regards, tom lane
Robert Haas <robertmhaas@gmail.com> writes:
Another thing to think about in that regard: how likely is it that
PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same
operating system? I suspect the answer is "not very." I seem to recall
Greg Stark trying to compile really old versions of PostgreSQL for a
conference talk some years ago, and he got back to a point where it
just became impossible to make work on modern toolchains even with a
decent amount of hackery.
Right. The toolchains keep moving, even if the official language
definition doesn't. For grins, I just checked out REL8_4_STABLE
on my M1 Mac, and found that it only gets this far:
checking test program... ok
checking whether long int is 64 bits... no
checking whether long long int is 64 bits... no
configure: error: Cannot find a working 64-bit integer type.
which turns out to be down to a configure-script issue we fixed
some years ago, ie using exit() without a prototype:
conftest.c:158:3: error: implicitly declaring library function 'exit' with type 'void (int) __attribute__((noreturn))' [-Werror,-Wimplicit-function-declaration]
exit(! does_int64_work());
^
I notice that the configure script is also selecting some warning
switches that this compiler doesn't much like, plus it doesn't
believe 2.6.x flex is usable. So that's *at least* three things
that'd have to be hacked even to get to a successful configure run.
Individually such issues are (usually) not very painful, but when
you have to recreate all of them at once it's a daunting project.
So if I had to rebuild 8.4 from scratch right now, I would not be
a happy camper. That seems like a good argument for not deeming
it to be something we still have to support.
regards, tom lane
On 2021-Oct-22, Robert Haas wrote:
In fact, I get that same failure on every branch older than 9.2. I
expect I could work around that by disabling SSL or finding an older
version of OpenSSL that works the way those branches expect, but that
might not be the only problem, either.
I just tried to build 9.1. My config line there doesn't have ssl, but I
do get this in the compile stage:
gram.c:69:25: error: conflicting types for ‘base_yylex’
69 | #define yylex base_yylex
| ^~~~~~~~~~
scan.c:15241:12: note: in expansion of macro ‘yylex’
15241 | extern int yylex \
| ^~~~~
In file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:
/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here
66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,
| ^~~~~~~~~~
gram.c:69:25: error: conflicting types for ‘base_yylex’
69 | #define yylex base_yylex
| ^~~~~~~~~~
scan.c:15244:21: note: in expansion of macro ‘yylex’
15244 | #define YY_DECL int yylex \
| ^~~~~
scan.c:15265:1: note: in expansion of macro ‘YY_DECL’
15265 | YY_DECL
| ^~~~~~~
In file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:
/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here
66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,
| ^~~~~~~~~~
make[3]: *** [../../../src/Makefile.global:655: gram.o] Error 1
--
Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/
"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte"
(Ijon Tichy en Viajes, Stanislaw Lem)
On Fri, Oct 22, 2021 at 7:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
I just tried to build 9.1. My config line there doesn't have ssl, but I
do get this in the compile stage:
Hmm.
You know, one thing we could think about doing is patching some of the
older branches to make them compile on modern machines. That would not
only be potentially useful for people who are upgrading from ancient
versions, but also for hackers trying to do research on the origin of
bugs or performance problems, and also for people who are trying to
maintain some kind of backward compatibility or other and want to test
against old versions.
I don't know whether that's really worth the effort and I expect Tom
will say that it's not. If he does say that, he may be right. But I
think if I were trying to extract my data from an old 7.4 database, I
think I'd find it a lot more useful if I could make 9.0 or 9.2 or
something compile and talk to it than if I had to use v15 and hope
that held together somehow. It doesn't really make sense to try to
keep compatibility of any sort with versions we can no longer test
against.
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
You know, one thing we could think about doing is patching some of the
older branches to make them compile on modern machines. That would not
only be potentially useful for people who are upgrading from ancient
versions, but also for hackers trying to do research on the origin of
bugs or performance problems, and also for people who are trying to
maintain some kind of backward compatibility or other and want to test
against old versions.
Yeah. We have done that in the past; I thought more than once,
but right now the only case I can find is d13f41d21/105f3ef49.
There are some other post-EOL commits in git, but I think the
others were mistakes from over-enthusiastic back-patching, while
that one was definitely an intentional portability fix for EOL'd
versions.
I don't know whether that's really worth the effort and I expect Tom
will say that it's not. If he does say that, he may be right.
Hmm ... I guess the question is how much work we feel like putting
into that, and how we'd track whether old branches still work,
and on what platforms. It could easily turn into a time sink
that's not justified by the value. I do see your point that there's
some value in it; I'm just not sure about the cost/benefit ratio.
One thing we could do that would help circumscribe the costs is to say
"we are not going to consider issues involving new compiler warnings
or bugs caused by more-aggressive optimization". We could mechanize
that pretty effectively by changing configure shortly after a branch's
EOL to select -O0 and no extra warning flags, so that anyone building
from branch tip would get those switch choices.
(I have no idea what this might look like on the Windows side, but
I'm concerned by the fact that we seem to need fixes every time a
new Visual Studio major version comes out.)
regards, tom lane
On 2021-Oct-24, Robert Haas wrote:
You know, one thing we could think about doing is patching some of the
older branches to make them compile on modern machines. That would not
only be potentially useful for people who are upgrading from ancient
versions, but also for hackers trying to do research on the origin of
bugs or performance problems, and also for people who are trying to
maintain some kind of backward compatibility or other and want to test
against old versions.
I think it is worth *some* effort, at least as far back as we want to
claim that we maintain pg_dump and/or psql compatibility, assuming it is
not too onerous. For instance, I wouldn't want to clutter buildfarm or
CI dashboards with testing these branches, unless it is well isolated
from regular ones; we shouldn't commit anything that's too invasive; and
we shouldn't make any claims about supportability of these abandoned
branches.
As an example, I did backpatch one such fix to 8.3 (just over a year)
and 8.2 (four years) after they had closed -- see d13f41d21538 and
105f3ef492ab.
--
Álvaro Herrera 39°49'30"S 73°17'W — https://www.EnterpriseDB.com/
"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente"
On Fri, 2021-10-22 at 19:26 -0400, Robert Haas wrote:
On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
So my first thought was just to revert 92316a458 and give up on it as
a bad idea. However ... does anyone actually still care about being
able to dump from such ancient servers?

I think I recently heard about an 8.4 server still out there in the
wild, but AFAICR it's been a long time since I've heard about anything
older.
I had a customer with 8.3 in the not too distant past, but that need not
stop the show. If necessary, they can dump with 8.3 and restore that.
Yours,
Laurenz Albe
On 10/22/21 19:30, Tom Lane wrote:
"David G. Johnston" <david.g.johnston@gmail.com> writes:
On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Anyway, I think the default answer is "revert 92316a458 and keep the
compatibility goalposts where they are". But I wanted to open up a
discussion to see if anyone likes the other approach better.

... IMO, the bar for this kind of situation should be 10 releases at
most - 5 of which would be in support at the time the patch goes in. We
don't have to actively drop support of older stuff but anything older
shouldn't be preventing new commits.

Yeah. I checked into when it was that we dropped pre-8.0 support
from pg_dump, and the answer is just about five years ago (64f3524e2).
So moving the bar forward by five releases isn't at all out of line.
8.4 would be eight years past EOL by the time v15 comes out.

One of the arguments for the previous change was that it was getting
very hard to build old releases on modern platforms, thus making it
hard to do any compatibility testing. I believe the same is starting
to become true of the 8.x releases, though I've not tried personally
to build any of them in some time. (The executables I'm using for
them date from 2014 or earlier, and have not been recompiled in
subsequent platform upgrades ...) Anyway it's definitely not free
to continue to support old source server versions.
But we don't need to build them on modern platforms, just run them on
modern platforms, ISTM.
Some months ago I built binaries all the way back to 7.2 that with a
little help run on modern Fedora and Ubuntu systems. I just upgraded my
Fedora system from 31 to 34 and they still run. See
<https://gitlab.com/adunstan/pg-old-bin> One of the intended use cases
was to test pg_dump against old versions.
I'm not opposed to us cutting off support for very old versions,
although I think we should only do that very occasionally (no more than
once every five years, say) unless there's a very good reason. I'm also
not opposed to us making small adjustments to allow us to build old
versions on modern platforms, but if we do that then we should probably
have some buildfarm support for it.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Sun, Oct 24, 2021 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Hmm ... I guess the question is how much work we feel like putting
into that, and how we'd track whether old branches still work,
and on what platforms. It could easily turn into a time sink
that's not justified by the value. I do see your point that there's
some value in it; I'm just not sure about the cost/benefit ratio.
Right. Well, we could leave it up to people who care to decide how
much work they want to do, perhaps. But I do find it annoying that
pg_dump is supposed to maintain compatibility with server releases
that I can't easily build. Fortunately I don't patch pg_dump very
often, but if I did, it'd be very difficult for me to verify that
things work against really old versions. I know that you (Tom) do a
lot of work of this type though. In my opinion, if you find yourself
working on a project of this type and as part of that you do some
fixes to an older branch to make it compile, maybe you ought to commit
those so that the next person doesn't have the same problem. And maybe
when we add support for newer versions of OpenSSL or Windows, we ought
to consider back-patching those even to unsupported releases if
someone's willing to do the work. If they're not, they're not, but I
think we tend to strongly discourage commits to EOL branches, and I
think maybe we should stop doing that. Not that people should
routinely back-patch bug fixes, but stuff that makes it easier to
build seems fair game.
I don't think we need to worry too much about users getting the wrong
impression. People who want to know what is supported are going to
look at our web site for that information, and they are going to look
for releases. I can't rule out the possibility that someone is going
to build an updated version of 7.4 or 8.2 with whatever patches we
might choose to commit there, but they're unlikely to think that means
those are fully supported branches. And if they somehow do think that
despite all evidence to the contrary, we can just tell them that they
are mistaken.
One thing we could do that would help circumscribe the costs is to say
"we are not going to consider issues involving new compiler warnings
or bugs caused by more-aggressive optimization". We could mechanize
that pretty effectively by changing configure shortly after a branch's
EOL to select -O0 and no extra warning flags, so that anyone building
from branch tip would get those switch choices.
I don't much like the idea of including -O0 because it seems like it
could be confusing. People might not realize that the build
settings have been changed. I don't think that's really the problem
anyway: anybody who hits compiler warnings in older branches could
decide to fix them (and as long as it's a committer who will be
responsible for their own work, I think that's totally fine) or enable
-O0 locally. I routinely do that when I hit problems on older
branches, and it helps a lot, but the way I see it, that's such an
easy change that there's little reason to make it in the source code.
What's a lot more annoying is if the compile fails altogether, or you
can't even get past the configure step.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Mon, Oct 25, 2021 at 8:29 AM Andrew Dunstan <andrew@dunslane.net> wrote:
But we don't need to build them on modern platforms, just run them on
modern platforms, ISTM.
I don't really agree with this.
Some months ago I built binaries all the way back to 7.2 that with a
little help run on modern Fedora and Ubuntu systems. I just upgraded my
Fedora system from 31 to 34 and they still run. See
<https://gitlab.com/adunstan/pg-old-bin> One of the intended use cases
was to test pg_dump against old versions.
That's cool, but I don't have a Fedora or Ubuntu VM handy, and it does
seem like if people are working on testing against old versions, they
might even want to be able to recompile with debugging statements
added or something. So I think actually compiling is a lot better than
being able to get working binaries from someplace, even though the
latter is better than nothing.
I'm not opposed to us cutting off support for very old versions,
although I think we should only do that very occasionally (no more than
once every five years, say) unless there's a very good reason. I'm also
not opposed to us making small adjustments to allow us to build old
versions on modern platforms, but if we do that then we should probably
have some buildfarm support for it.
Yeah, I think having a small number of buildfarm animals testing very
old versions would be nice. Perhaps we can call them tyrannosaurus,
brontosaurus, triceratops, etc. :-)
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
Right. Well, we could leave it up to people who care to decide how
much work they want to do, perhaps. But I do find it annoying that
pg_dump is supposed to maintain compatibility with server releases
that I can't easily build. Fortunately I don't patch pg_dump very
often, but if I did, it'd be very difficult for me to verify that
things work against really old versions. I know that you (Tom) do a
lot of work of this type though. In my opinion, if you find yourself
working on a project of this type and as part of that you do some
fixes to an older branch to make it compile, maybe you ought to commit
those so that the next person doesn't have the same problem.
Well, the answer to that so far is that I've never done such fixes.
I have the last released versions of old branches laying around,
and that's what I test against. It's been sufficient so far, although
if I suddenly needed to do (say) SSL-enabled testing, that would be
a problem because I don't think I built with SSL for any of those
branches.
Because of that angle, I concur with your position that it'd really
be desirable to be able to build old versions on modern platforms.
Even if you've got an old executable, it might be misconfigured for
the purpose you have in mind.
And maybe
when we add support for newer versions of OpenSSL or Windows, we ought
to consider back-patching those even to unsupported releases if
someone's willing to do the work. If they're not, they're not, but I
think we tend to strongly discourage commits to EOL branches, and I
think maybe we should stop doing that. Not that people should
routinely back-patch bug fixes, but stuff that makes it easier to
build seems fair game.
What concerns me here is that we not get into a position where we're
effectively still maintaining EOL'd versions. Looking at the git
history yesterday reminded me that we had such a situation back in
the early 7.x days. I can see that I still occasionally made commits
into 7.1 and 7.2 years after the last releases of those branches,
which ended up being a complete waste of effort. There was no policy
guiding what to back-patch into what branches, partly because we
didn't have a defined EOL policy then. So I want to have a policy
(and a pretty tight one) before I'll go back to doing that.
Roughly speaking, I think the policy should be "no feature bug fixes,
not even security fixes, for EOL'd branches; only fixes that are
minimally necessary to make it build on newer platforms". And
I want to have a sunset provision even for that. Fixing every branch
forevermore doesn't scale.
There's also the question of how we get to a working state in the
first place -- as we found upthread, there's a fair-sized amount
of work to do just to restore buildability right now, for anything
that was EOL'd more than a year or two back. I'm not volunteering
for that, but somebody would have to, to get things off the ground.
Also, I concur with Andrew's point that we'd really have to have
buildfarm support. However, this might not be as bad as it seems.
In principle we might just need to add resurrected branches back to
the branches_to_build list. Given my view of what the back-patching
policy ought to be, a new build in an old branch might only be
required a couple of times a year, which would not be an undue
investment of buildfarm resources. (Hmmm ... but disk space could
become a problem, particularly on older machines with not so much
disk. Do we really need to maintain a separate checkout for each
branch? It seems like a fresh checkout from the repo would be
little more expensive than the current copy-a-checkout process.)
regards, tom lane
On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
What concerns me here is that we not get into a position where we're
effectively still maintaining EOL'd versions. Looking at the git
history yesterday reminded me that we had such a situation back in
the early 7.x days. I can see that I still occasionally made commits
into 7.1 and 7.2 years after the last releases of those branches,
which ended up being a complete waste of effort. There was no policy
guiding what to back-patch into what branches, partly because we
didn't have a defined EOL policy then. So I want to have a policy
(and a pretty tight one) before I'll go back to doing that.

Roughly speaking, I think the policy should be "no feature bug fixes,
not even security fixes, for EOL'd branches; only fixes that are
minimally necessary to make it build on newer platforms". And
I want to have a sunset provision even for that. Fixing every branch
forevermore doesn't scale.
Sure, but you can ameliorate that a lot by just saying it's something
people have the *option* to do, not something anybody is *expected* to
do. I agree it's best if we continue to discourage back-patching bug
fixes into supported branches, but I also think we don't need to be
too stringent about this. What I think we don't want is, for example,
somebody working at company X deciding to back-patch all the bug fixes
that customers of company X cares about into our back-branches, but
not the other ones. But on the other hand if somebody is trying to
benchmark an old branch or test compatibility with it and it keeps crashing
because of some bug, telling them that they're not allowed to fix that
bug because it's not a sufficiently-minimal change to a dead branch is
kind of ridiculous. In other words, if you try to police every change
anyone wants to make, e.g. "well I know that would help YOU build on a
newer platform but it doesn't seem like it meets the criteria of the
minimum necessary change to make it build on a newer platform," then
you might as well just give up now. Nobody cares about the older
branches enough to put work into fixing whatever's wrong and then
having to argue about whether that work ought to be thrown away
anyway.
There's also the question of how we get to a working state in the
first place -- as we found upthread, there's a fair-sized amount
of work to do just to restore buildability right now, for anything
that was EOL'd more than a year or two back. I'm not volunteering
for that, but somebody would have to, to get things off the ground.
Right.
Also, I concur with Andrew's point that we'd really have to have
buildfarm support. However, this might not be as bad as it seems.
In principle we might just need to add resurrected branches back to
the branches_to_build list. Given my view of what the back-patching
policy ought to be, a new build in an old branch might only be
required a couple of times a year, which would not be an undue
investment of buildfarm resources. (Hmmm ... but disk space could
become a problem, particularly on older machines with not so much
disk. Do we really need to maintain a separate checkout for each
branch? It seems like a fresh checkout from the repo would be
little more expensive than the current copy-a-checkout process.)
I suppose it would be useful if we had the ability to do new runs only
when the source code has changed...
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Roughly speaking, I think the policy should be "no feature bug fixes,
not even security fixes, for EOL'd branches; only fixes that are
minimally necessary to make it build on newer platforms". And
I want to have a sunset provision even for that. Fixing every branch
forevermore doesn't scale.
Sure, but you can ameliorate that a lot by just saying it's something
people have the *option* to do, not something anybody is *expected* to
do. I agree it's best if we continue to discourage back-patching bug
fixes into supported branches, but I also think we don't need to be
too stringent about this.
Actually, I think we do. If I want to test against 7.4, ISTM I want
to test against the last released 7.4 version, not something with
arbitrary later changes. Otherwise, what exactly is the point?
In principle we might just need to add resurrected branches back to
the branches_to_build list. Given my view of what the back-patching
policy ought to be, a new build in an old branch might only be
required a couple of times a year, which would not be an undue
investment of buildfarm resources.
I suppose it would be useful if we had the ability to do new runs only
when the source code has changed...
Uh, don't we have that already? I know you can configure a buildfarm
animal to force a run at least every-so-often, but it's not required,
and I don't think it's even the default.
regards, tom lane
On 2021-Oct-25, Tom Lane wrote:
Roughly speaking, I think the policy should be "no feature bug fixes,
not even security fixes, for EOL'd branches; only fixes that are
minimally necessary to make it build on newer platforms". And
I want to have a sunset provision even for that. Fixing every branch
forevermore doesn't scale.
Agreed. I think dropping such support at the same time we drop
psql/pg_dump support is a decent answer to that. That meets the stated
purpose of being able to test such support, and also it moves forward
according to subjective choice per development needs.
Also, I concur with Andrew's point that we'd really have to have
buildfarm support. However, this might not be as bad as it seems.
In principle we might just need to add resurrected branches back to
the branches_to_build list.
Well, we would add them to *some* list, but not to the one used by stock
BF members -- not only because of the diskspace issue but also because
of the time to build. I suggest that we should have a separate
list-of-branches file that would only be used by BF members especially
configured to do so; and hopefully we won't allow more than a handful of
animals to do that, but rather a well-chosen subset, and also maybe allow
only GCC rather than try to support other compilers. (There's no need
to ensure compilability on any Windows platform, for example.)
--
Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/
"Ed is the standard text editor."
http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3
On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Actually, I think we do. If I want to test against 7.4, ISTM I want
to test against the last released 7.4 version, not something with
arbitrary later changes. Otherwise, what exactly is the point?
1. You're free to check out any commit you like.
2. Nothing I said can reasonably be confused with "let's allow
arbitrary later changes."
Uh, don't we have that already? I know you can configure a buildfarm
animal to force a run at least every-so-often, but it's not required,
and I don't think it's even the default.
Oh, OK. I wonder how that plays with the buildfarm status page's
desire to drop old results that are more than 30 days old. I guess
you'd just need to force a run at least every 28 days or something.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 10/25/21 11:09, Robert Haas wrote:
On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Actually, I think we do. If I want to test against 7.4, ISTM I want
to test against the last released 7.4 version, not something with
arbitrary later changes. Otherwise, what exactly is the point?

1. You're free to check out any commit you like.
2. Nothing I said can reasonably be confused with "let's allow
arbitrary later changes."

Uh, don't we have that already? I know you can configure a buildfarm
animal to force a run at least every-so-often, but it's not required,
and I don't think it's even the default.
Yes, in fact it's rather discouraged. The default is just to build when
there's a code change detected.
Oh, OK. I wonder how that plays with the buildfarm status page's
desire to drop old results that are more than 30 days old. I guess
you'd just need to force a run at least every 28 days or something.
Well, we could do that, or we could modify the way the server does the
status. The table it's based on has the last 500 records for each branch
for each animal, so the data is there.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Actually, I think we do. If I want to test against 7.4, ISTM I want
to test against the last released 7.4 version, not something with
arbitrary later changes. Otherwise, what exactly is the point?
1. You're free to check out any commit you like.
Yeah, and get something that won't build. If there's any point
to this work at all, it has to be that we'll maintain the closest
possible buildable approximation to the last released version.
Oh, OK. I wonder how that plays with the buildfarm status page's
desire to drop old results that are more than 30 days old. I guess
you'd just need to force a run at least every 28 days or something.
I don't think it's a problem. If we haven't committed anything to
branch X in a month, it's likely not interesting. It might be worth
having a way to get the website to show results further back than
a month, but that doesn't need to be in the default view.
regards, tom lane