Bison 3.0 updates
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old. The failures look like
cubeparse.y:42.1-13: warning: deprecated directive, use '%name-prefix' [-Wdeprecated]
%name-prefix="cube_yy"
^^^^^^^^^^^^^
(which looks like 3.0 isn't actually ready for prime time, but at least
it's only a warning)
cubeparse.c:163:5: error: conflicting types for 'cube_yyparse'
int cube_yyparse (void);
^
cubeparse.y:32:5: note: previous declaration of 'cube_yyparse' was here
int cube_yyparse(void *result);
^
A look in the Bison release notes explains this one: they stopped
supporting YYPARSE_PARAM, which contrib/cube and contrib/seg both use.
The recommended replacement is %parse-param, which is certainly a whole
lot cleaner: it lets you specify the datatype of the extra parser
parameter, instead of having it default to "void *". This option also
changes the signature of yyerror(), but that's not a problem.
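For illustration, the change looks roughly like this (a simplified sketch, not the actual contrib/cube grammar; the NDBOX type and the prototype below are assumptions):

```c
/* Old style, dropped in bison 3.0: the extra parser argument arrives
   as an untyped "void *". */
%{
#define YYPARSE_PARAM result
%}

/* New style, supported (if undocumented) since bison 1.875: the
   parameter is declared with its real type, and yyparse() is
   generated as
       int cube_yyparse(NDBOX **result);                            */
%parse-param {NDBOX **result}

/* %parse-param also changes yyerror()'s signature -- it now receives
   the parse parameter too: */
void cube_yyerror(NDBOX **result, const char *message);
```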
At first I thought this was going to make us go through a tool upgrade
exercise, because I couldn't find %parse-param in the documentation for
bison 1.875, which is our oldest supported version. But further
research shows that %parse-param actually was introduced in 1.875,
they just forgot to document it :-(.
So I propose the attached patch, which I've verified still works with
1.875. I don't plan to install 3.0 just to test this, but I assume it's
OK there.
I'm thinking we should apply this to all supported branches, in case
somebody gets the idea to build an older branch with bleeding-edge
tools. Any objections?
regards, tom lane
Attachments:
bison-parse-params.patch (text/x-patch; charset=us-ascii, +102 -101)
Tom,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Buildfarm member anchovy has been failing for the last couple of days,
[...]
I'm thinking we should apply this to all supported branches, in case
somebody gets the idea to build an older branch with bleeding-edge
tools. Any objections?
Certainly looks cleaner to me, and if it works for older bison then it
seems reasonable to back-patch it.
However, I comment on this mainly because anchovy has had issues with
9.1 and older for some time, which looks like an issue with GCC 4.8.0.
Did you happen to resolve or identify what is happening there..?
Thanks!
Stephen
Stephen Frost <sfrost@snowman.net> writes:
However, I comment on this mainly because anchovy has had issues with
9.1 and older for some time, which looks like an issue with GCC 4.8.0.
Did you happen to resolve or identify what is happening there..?
Yeah, we know about that:
/messages/by-id/14242.1365200084@sss.pgh.pa.us
The bottom line was:
It looks like our choices are (1) teach configure to enable
-fno-aggressive-loop-optimizations if the compiler recognizes it,
or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.
I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions. Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.
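Option (1) boils down to a compile-time probe, roughly like the sketch below (an assumption-laden mimic of what a configure-time compiler-flag test does, not PostgreSQL's actual configure code):

```shell
#!/bin/sh
# Probe whether the compiler accepts -fno-aggressive-loop-optimizations.
# -Werror makes unknown-option warnings fatal, so the probe fails
# cleanly on compilers that don't recognize the flag.
CC=${CC:-cc}
echo 'int main(void) { return 0; }' > conftest.c
if $CC -fno-aggressive-loop-optimizations -Werror -c conftest.c -o conftest.o 2>/dev/null
then
    CFLAGS="$CFLAGS -fno-aggressive-loop-optimizations"
fi
rm -f conftest.c conftest.o
echo "CFLAGS:$CFLAGS"
```

On a compiler that predates the flag (or rejects it), CFLAGS is simply left alone.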
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
However, I comment on this mainly because anchovy has had issues with
9.1 and older for some time, which looks like an issue with GCC 4.8.0.
Did you happen to resolve or identify what is happening there..?
Yeah, we know about that:
/messages/by-id/14242.1365200084@sss.pgh.pa.us
Ah, right, read the thread but didn't attach it to anchovy.
The bottom line was:
It looks like our choices are (1) teach configure to enable
-fno-aggressive-loop-optimizations if the compiler recognizes it,
or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.
I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions. Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.
Yea, just passing -fno-aggressive-loop-optimizations seems like the
safest and best option to me also..
Thanks,
Stephen
On 2013-07-29 07:11:13 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
Stephen Frost <sfrost@snowman.net> writes:
However, I comment on this mainly because anchovy has had issues with
9.1 and older for some time, which looks like an issue with GCC 4.8.0.
Did you happen to resolve or identify what is happening there..?
Yeah, we know about that:
/messages/by-id/14242.1365200084@sss.pgh.pa.us
Ah, right, read the thread but didn't attach it to anchovy.
The bottom line was:
It looks like our choices are (1) teach configure to enable
-fno-aggressive-loop-optimizations if the compiler recognizes it,
or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.
I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions. Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.
Yea, just passing -fno-aggressive-loop-optimizations seems like the
safest and best option to me also..
I think we need to do both. There very well might be other optimizations
made based on the unreachability information. Like concluding some
conditional block isn't reachable. (1) would be back branches only,
right?
If others agree I'll happily (ok, that's a blatant lie) provide
patches if that actually helps $committer.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 07:11:13 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
The bottom line was:
It looks like our choices are (1) teach configure to enable
-fno-aggressive-loop-optimizations if the compiler recognizes it,
or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.
I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions. Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.
Yea, just passing -fno-aggressive-loop-optimizations seems like the
safest and best option to me also..
I think we need to do both. There very well might be other optimizations
made based on the unreachability information.
If we turn off the optimization, that will fix any other cases as well,
no? So why would we risk breaking third-party code by back-porting the
struct declaration changes?
regards, tom lane
On 2013-07-29 08:02:49 -0400, Tom Lane wrote:
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 07:11:13 -0400, Stephen Frost wrote:
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
The bottom line was:
It looks like our choices are (1) teach configure to enable
-fno-aggressive-loop-optimizations if the compiler recognizes it,
or (2) back-port commit 8137f2c32322c624e0431fac1621e8e9315202f9.
I am in favor of fixing the back branches via (1), because it's less
work and much less likely to break third-party extensions. Some other
people argued for (2), but I've not seen any patch emerge from them,
and you can bet I'm not going to do it.
Yea, just passing -fno-aggressive-loop-optimizations seems like the
safest and best option to me also..
I think we need to do both. There very well might be other optimizations
made based on the unreachability information.
If we turn off the optimization, that will fix any other cases as well,
no? So why would we risk breaking third-party code by back-porting the
struct declaration changes?
The -fno-aggressive-loop thingie afaics only controls the optimization
with regard to loopy constructs, not in general. I *think* there are
independent hazards with general unreachability detection. Not sure
whether they trigger at -O2 or only at -O3 though.
Greetings,
Andres Freund
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 08:02:49 -0400, Tom Lane wrote:
If we turn off the optimization, that will fix any other cases as well,
no? So why would we risk breaking third-party code by back-porting the
struct declaration changes?
The -fno-aggressive-loop thingie afaics only controls the optimization
with regard to loopy constructs, not in general. I *think* there are
independent hazards with general unreachability detection. Not sure
whether they trigger at -O2 or only at -O3 though.
I'm not excited about breaking code in order to fix optimization bugs
that are purely hypothetical (and for which there's no particular reason
to believe that the proposed change would fix them anyway). If we were
seeing such things in the field it would be a different story.
regards, tom lane
On 2013-07-29 08:17:46 -0400, Tom Lane wrote:
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 08:02:49 -0400, Tom Lane wrote:
If we turn off the optimization, that will fix any other cases as well,
no? So why would we risk breaking third-party code by back-porting the
struct declaration changes?
The -fno-aggressive-loop thingie afaics only controls the optimization
with regard to loopy constructs, not in general. I *think* there are
independent hazards with general unreachability detection. Not sure
whether they trigger at -O2 or only at -O3 though.
I'm not excited about breaking code in order to fix optimization bugs
that are purely hypothetical (and for which there's no particular reason
to believe that the proposed change would fix them anyway). If we were
seeing such things in the field it would be a different story.
Well, given the problems we're discussing here, it's not all that
hypothetical. Obviously the compiler *does* have information it believes
proves some code to be unreachable. And we know that information comes
from the way the catalog structs are declared.
I don't really fancy finding more and more optimizations we didn't think
about and disabling them one by one after annoying debugging sessions.
Also, just about all code that would get broken by that change would be
code that's already pretty much broken.
Greetings,
Andres Freund
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 08:17:46 -0400, Tom Lane wrote:
I'm not excited about breaking code in order to fix optimization bugs
that are purely hypothetical (and for which there's no particular reason
to believe that the proposed change would fix them anyway). If we were
seeing such things in the field it would be a different story.
Well, given the problems we're discussing here, it's not all that
hypothetical. Obviously the compiler *does* have information it believes
proves some code to be unreachable.
The known case here has nothing to do with unreachability IMO; it has
to do with concluding that a loop can be unrolled into exactly one
execution. But in any case, there is no point in arguing about what
hypothetical versions of gcc might do in hypothetical cases. We have
experimental evidence that -fno-aggressive-loop-optimizations fixes the
observed problem with gcc 4.8.0. So we can apply a one-line patch that
fixes the observed problem and seems 100% safe, or we can apply a very
large patch that might possibly fix problems that we don't know exist,
and might also break third-party code. Given the available evidence
the choice seems pretty clear to me. If you want to provide concrete
proof that there are additional optimization hazards then that would
change the tradeoff.
regards, tom lane
On 07/29/2013 01:05 AM, Tom Lane wrote:
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old. The failures look like
Reminder to buildfarm animal owners: if you upgrade software please make
sure your buildfarm members are still working. And if the upgrade
involves the OS or the compiler, please use the update_personality.pl
script to update the server.
cheers
andrew
On 2013-07-29 08:44:56 -0400, Tom Lane wrote:
Andres Freund <andres@2ndquadrant.com> writes:
On 2013-07-29 08:17:46 -0400, Tom Lane wrote:
I'm not excited about breaking code in order to fix optimization bugs
that are purely hypothetical (and for which there's no particular reason
to believe that the proposed change would fix them anyway). If we were
seeing such things in the field it would be a different story.
Well, given the problems we're discussing here, it's not all that
hypothetical. Obviously the compiler *does* have information it believes
proves some code to be unreachable.
The known case here has nothing to do with unreachability IMO; it has
to do with concluding that a loop can be unrolled into exactly one
execution.
If I understand the optimization correctly, what it does is build a
control-flow model of a loop and conclude which of its iterations can be
reached assuming standard-conforming code. In this case, since it knows
(from looking at the underlying data structures) that the upper bound of
the iteration has to be 1, it decides to stop there. If oidvector had
values[2] it would make two iterations.
But in any case, there is no point in arguing about what
hypothetical versions of gcc might do in hypothetical cases. We have
experimental evidence that -fno-aggressive-loop-optimizations fixes the
observed problem with gcc 4.8.0.
Well, it seems to me like we don't have particularly good experience
with fixing compiler optimization issues by disabling some specific
optimization. They tend to crop up again under the umbrella of another
feature a compiler version or two later.
FWIW cherry-picking bf66bfb4cd726b6ddab3fe2718648f65a7106149 and
e58f3181fdaacc91d4cc1bd98a4a8ad7d286544c fixes the issue for me
(After fixing trivial conflicts in the latter).
32 files changed, 98 insertions(+), 50 deletions(-)
So we can apply a one-line patch that
fixes the observed problem and seems 100% safe, or we can apply a very
large patch that might possibly fix problems that we don't know exist,
and might also break third-party code. Given the available evidence
the choice seems pretty clear to me. If you want to provide concrete
proof that there are additional optimization hazards then that would
change the tradeoff.
No, don't really want to spend more time on this. We'll see whether it
crops up again or not.
Greetings,
Andres Freund
Hi,
On 07/29/2013 01:05 AM, Tom Lane wrote:
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old.
Hmm? Anchovy upgrades automatically to the newest Arch Linux packages daily.
I assumed that's a good thing -- the purpose of the buildfarm is to test
PostgreSQL in various real-life environments?
one such environment that adopts new packages very quickly. If Arch
users are unable to build PostgreSQL then surely it's good to be
notified by the build farm before real users start reporting problems?
I don't mean to sound reluctant, I'm open to suggestions, but please
help me understand why this is bad.
On Mon, Jul 29, 2013 at 4:59 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
Reminder to buildfarm animal owners: if you upgrade software please make
sure your buildfarm members are still working.
If I had checked the machine's status manually after upgrading, the
best course of action would be to report the incompatibility to the
PostgreSQL mailing lists. The end result is the same; in that sense,
it was "working". Manual upgrades would only delay that reporting, not
prevent it?
I realize there have been problems with anchovy that are my own fault
and I'm sorry about that. I check the buildfarm status every week or
two.
And if the upgrade involves
the OS or the compiler, please use the update_personality.pl script to update
the server.
Is it OK to run update_personality.pl automatically every day from crontab?
Regards,
Marti
On 07/29/2013 10:28 AM, Marti Raudsepp wrote:
Hi,
On 07/29/2013 01:05 AM, Tom Lane wrote:
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old.
Hmm? Anchovy is upgrading automatically to newest Arch Linux packages daily.
I assumed that's a good thing -- the purpose of build farm is to test
PostgreSQL in various different real-life environments? Arch Linux is
one such environment that adopts new packages very quickly. If Arch
users are unable to build PostgreSQL then surely it's good to be
notified by the build farm before real users start reporting problems?
I don't mean to sound reluctant, I'm open to suggestions, but please
help me understand why this is bad.
The buildfarm is principally designed to detect when some change in the
Postgres code breaks something. As such, the expectation is that the
animals will provide a fairly stable platform. A totally moving target
will present us with false positives, since the failure could be due to
no action of ours at all.
A machine that can auto-upgrade anything at any time is really not very
fit for the purpose. What that would be testing is not if a change in
Postgres code has broken something, but if the host packaging system has
broken something.
And if the upgrade involves
the OS or the compiler, please use the update_personality.pl script to update
the server.
Is it OK to run update_personality.pl automatically every day from crontab?
No. It is designed to support a fairly small number of changes.
cheers
andrew
Andres Freund <andres@2ndquadrant.com> writes:
FWIW cherry-picking bf66bfb4cd726b6ddab3fe2718648f65a7106149 and
e58f3181fdaacc91d4cc1bd98a4a8ad7d286544c fixes the issue for me
(After fixing trivial conflicts in the latter).
I've already spent more time on this than I wanted to, but just for
the archives' sake: neither of those commits exist in the master
repo.
regards, tom lane
On 2013-07-29 10:46:41 -0400, Andrew Dunstan wrote:
On 07/29/2013 10:28 AM, Marti Raudsepp wrote:
Hi,
On 07/29/2013 01:05 AM, Tom Lane wrote:
Buildfarm member anchovy has been failing for the last couple of days,
evidently because its owner just couldn't wait to adopt bison 3.0,
which is all of 3 days old.
Hmm? Anchovy is upgrading automatically to newest Arch Linux packages daily.
I assumed that's a good thing -- the purpose of build farm is to test
PostgreSQL in various different real-life environments? Arch Linux is
one such environment that adopts new packages very quickly. If Arch
users are unable to build PostgreSQL then surely it's good to be
notified by the build farm before real users start reporting problems?
I don't mean to sound reluctant, I'm open to suggestions, but please
help me understand why this is bad.
The buildfarm is principally designed to detect when some change in the
Postgres code breaks something. As such, the expectation is that the animals
will provide a fairly stable platform. A totally moving target will present
us with false positives, since the failure could be due to no action of ours
at all.
A machine that can auto-upgrade anything at any time is really not very fit
for the purpose. What that would be testing is not if a change in Postgres
code has broken something, but if the host packaging system has broken
something.
FWIW given the evidence here anchovy seems to do a good job. Both the
gcc 4.8 and the bison 3.0 problems are things the project needs to know
about and that need to get fixed. And none of the other animals seem to
run anything really up to date.
Also, afaik Arch has a rolling release model, so there isn't any other
specific release that should be tested instead.
Greetings,
Andres Freund
On 2013-07-29 10:52:10 -0400, Tom Lane wrote:
Andres Freund <andres@2ndquadrant.com> writes:
FWIW cherry-picking bf66bfb4cd726b6ddab3fe2718648f65a7106149 and
e58f3181fdaacc91d4cc1bd98a4a8ad7d286544c fixes the issue for me
(After fixing trivial conflicts in the latter).
I've already spent more time on this than I wanted to, but just for
the archives' sake: neither of those commits exist in the master
repo.
Hrmpf, sorry for that. Posting the hashes *after* cherry-picking them
on top of REL9_1_STABLE obviously isn't very helpful. The correct ones are:
8a3f745f160d8334ad978676828d3926ac949f43
8137f2c32322c624e0431fac1621e8e9315202f9
Those definitely are reachable via gitweb.
Greetings,
Andres Freund
Andrew Dunstan <andrew@dunslane.net> writes:
On 07/29/2013 10:28 AM, Marti Raudsepp wrote:
Hmm? Anchovy is upgrading automatically to newest Arch Linux packages daily.
I assumed that's a good thing -- the purpose of build farm is to test
PostgreSQL in various different real-life environments? Arch Linux is
one such environment that adopts new packages very quickly. If Arch
users are unable to build PostgreSQL then surely it's good to be
notified by the build farm before real users start reporting problems?
The buildfarm is principally designed to detect when some change in the
Postgres code breaks something. As such, the expectation is that the
animals will provide a fairly stable platform. A totally moving target
will present us with false positives, since the failure could be due to
no action of ours at all.
I can see both sides of this. It's definitely nice to get early warning
when toolchain changes create new problems for Postgres, but it's not
clear that the buildfarm is the best way to get such notifications.
The big problem here is that it might take a long time before we have
a suitable fix, and in the meantime that buildfarm member is basically
useless: it can't tell us anything new, and even if it tried, probably
nobody would notice because they'd not realize the failure cause changed.
We've had cases before where a buildfarm member that was failing for
some known reason was unable to detect a later-introduced bug, so this
isn't hypothetical. (And I'm not too thrilled that we've let the
back-branch failures on anchovy persist for months, though I suppose
that's not as bad as a breakage in HEAD.)
Ideally we'd not rotate toolchain updates into the buildfarm until
they'd been checked out in some other way. On the other hand, this
doesn't seem like a big enough problem to justify building some
different infrastructure for it, so I'm not seeing a practical way
to do better.
regards, tom lane
On Mon, Jul 29, 2013 at 5:53 PM, Andres Freund <andres@2ndquadrant.com> wrote:
Both the
gcc 4.8 and the bison 3.0 problems are things the project needs to know
about
Perl 5.18 too: /messages/by-id/2825.1370226552@sss.pgh.pa.us
Marti
On 07/29/2013 11:05 AM, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
On 07/29/2013 10:28 AM, Marti Raudsepp wrote:
Hmm? Anchovy is upgrading automatically to newest Arch Linux packages daily.
I assumed that's a good thing -- the purpose of build farm is to test
PostgreSQL in various different real-life environments? Arch Linux is
one such environment that adopts new packages very quickly. If Arch
users are unable to build PostgreSQL then surely it's good to be
notified by the build farm before real users start reporting problems?
The buildfarm is principally designed to detect when some change in the
Postgres code breaks something. As such, the expectation is that the
animals will provide a fairly stable platform. A totally moving target
will present us with false positives, since the failure could be due to
no action of ours at all.
I can see both sides of this. It's definitely nice to get early warning
when toolchain changes create new problems for Postgres, but it's not
clear that the buildfarm is the best way to get such notifications.
The big problem here is that it might take a long time before we have
a suitable fix, and in the meantime that buildfarm member is basically
useless: it can't tell us anything new, and even if it tried, probably
nobody would notice because they'd not realize the failure cause changed.
We've had cases before where a buildfarm member that was failing for
some known reason was unable to detect a later-introduced bug, so this
isn't hypothetical. (And I'm not too thrilled that we've let the
back-branch failures on anchovy persist for months, though I suppose
that's not as bad as a breakage in HEAD.)
Ideally we'd not rotate toolchain updates into the buildfarm until
they'd been checked out in some other way. On the other hand, this
doesn't seem like a big enough problem to justify building some
different infrastructure for it, so I'm not seeing a practical way
to do better.
There has to be something between "set in stone and never changes" and
"auto-updates everything every 24 hours" that would suit us.
I'm toying with the idea of a check_upgrade mode for the buildfarm
client where it wouldn't do a git pull, but would report changes if the
build result was different from the previous result. You'd run this
immediately after pulling new changes into your OS. Other, brighter
ideas welcome.
cheers
andrew