failing to build preproc.c on solaris with sun studio
Hi,
I tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun
studio I can build the backend etc, but preproc.c fails to compile:
ccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
cc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp
cc: Status 134
the assertion is just a consequence of running out of memory, I believe, acomp
is well over 20GB at that point.
However I see that wrasse doesn't seem to have that problem. Which leaves me a
bit confused, because I think that's the same machine and compiler binary.
Noah, did you encounter this before / do anything to avoid this?
Greetings,
Andres Freund
On Sat, Aug 06, 2022 at 02:07:24PM -0700, Andres Freund wrote:
I tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun
studio I can build the backend etc, but preproc.c fails to compile:
ccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
cc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp
cc: Status 134
the assertion is just a consequence of running out of memory, I believe, acomp
is well over 20GB at that point.
However I see that wrasse doesn't seem to have that problem. Which leaves me a
bit confused, because I think that's the same machine and compiler binary.
Noah, did you encounter this before / do anything to avoid this?
Yes. Drop --enable-debug, and override TMPDIR to some disk-backed location.
From the earliest days of wrasse, the compiler used too much RAM to build
preproc.o with --enable-debug. As of 2021-04, the compiler's "acomp" phase
needed 10G in one process, and later phases needed 11.6G across two processes.
Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding
TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse
intermittently reaches the 14G limit I impose (via "ulimit -v 14680064"). I
had experimented with different optimization levels, but that didn't help.
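That recipe could look roughly like the following hypothetical sketch (the compiler path is taken from the log above; /export/tmp is an assumed disk-backed location, any non-tmpfs directory would do):

```shell
# Sketch of the workaround: no --enable-debug, and a disk-backed TMPDIR
# so the compiler's temp files stop competing with acomp for RAM+swap.
export TMPDIR=/export/tmp
ulimit -v 14680064          # optional 14G virtual-memory cap, as on wrasse
./configure CC=/opt/developerstudio12.6/bin/cc CFLAGS='-m64 -Xa'   # note: no --enable-debug
make
```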
Hi,
On 2022-08-06 16:09:24 -0700, Noah Misch wrote:
On Sat, Aug 06, 2022 at 02:07:24PM -0700, Andres Freund wrote:
I tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun
studio I can build the backend etc, but preproc.c fails to compile:
ccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear
cc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp
cc: Status 134
the assertion is just a consequence of running out of memory, I believe, acomp
is well over 20GB at that point.
However I see that wrasse doesn't seem to have that problem. Which leaves me a
bit confused, because I think that's the same machine and compiler binary.
Noah, did you encounter this before / do anything to avoid this?
Yes. Drop --enable-debug, and override TMPDIR to some disk-backed location.
Thanks - that indeed helped...
From the earliest days of wrasse, the compiler used too much RAM to build
preproc.o with --enable-debug. As of 2021-04, the compiler's "acomp" phase
needed 10G in one process, and later phases needed 11.6G across two processes.
Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding
TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse
intermittently reaches the 14G limit I impose (via "ulimit -v 14680064"). I
had experimented with different optimization levels, but that didn't help.
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
I was trying to build on solaris because I was seeing if we could get rid of
with_gnu_ld, motivated by making the meson build generate a working
Makefile.global for pgxs' benefit.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2022-08-06 16:09:24 -0700, Noah Misch wrote:
From the earliest days of wrasse, the compiler used too much RAM to build
preproc.o with --enable-debug. As of 2021-04, the compiler's "acomp" phase
needed 10G in one process, and later phases needed 11.6G across two processes.
Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding
TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse
intermittently reaches the 14G limit I impose (via "ulimit -v 14680064"). I
had experimented with different optimization levels, but that didn't help.
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Seems like it's only a matter of time before we add enough stuff to
the grammar that the build fails, period.
However, I wonder why exactly it's so large, and why the backend's gram.o
isn't an even bigger problem. Maybe an effort to cut preproc.o's code
size could yield dividends?
FWIW, my late and unlamented animal gaur was also showing unhappiness with
the size of preproc.o, manifested as a boatload of warnings like
/var/tmp//cc0MHZPD.s:11594: Warning: .stabn: description field '109d3' too big, try a different debug format
which did not happen with gram.o.
Even on a modern Linux:
$ size src/backend/parser/gram.o
text data bss dec hex filename
656568 0 0 656568 a04b8 src/backend/parser/gram.o
$ size src/interfaces/ecpg/preproc/preproc.o
text data bss dec hex filename
912005 188 7348 919541 e07f5 src/interfaces/ecpg/preproc/preproc.o
So there's something pretty bloated there. It doesn't seem like
ecpg's additional productions should justify a nigh 50% code
size increase.
regards, tom lane
On Sun, Aug 7, 2022 at 11:52 AM Andres Freund <andres@anarazel.de> wrote:
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Independently of the RAM requirements topic, I totally agree that
doing extra work to support a compiler that hasn't had a release in 5
years doesn't seem like time well spent.
On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2022-08-06 16:09:24 -0700, Noah Misch wrote:
From the earliest days of wrasse, the compiler used too much RAM to build
preproc.o with --enable-debug. As of 2021-04, the compiler's "acomp" phase
needed 10G in one process, and later phases needed 11.6G across two processes.
Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding
TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse
intermittently reaches the 14G limit I impose (via "ulimit -v 14680064"). I
had experimented with different optimization levels, but that didn't help.
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Seems like it's only a matter of time before we add enough stuff to
the grammar that the build fails, period.
I wouldn't worry about that enough to work hard in advance. The RAM usage can
grow by about 55% before that's a problem. (The 14G ulimit can tolerate a
raise.) By then, the machine may be gone or have more RAM. Perhaps even
Bison will have changed its code generation. If none of those happen, I could
switch to gcc, hack things to use gcc for just preproc.o, etc.
So there's something pretty bloated there. It doesn't seem like
ecpg's additional productions should justify a nigh 50% code
size increase.
True.
Hi,
On 2022-08-06 20:05:14 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2022-08-06 16:09:24 -0700, Noah Misch wrote:
From the earliest days of wrasse, the compiler used too much RAM to build
preproc.o with --enable-debug. As of 2021-04, the compiler's "acomp" phase
needed 10G in one process, and later phases needed 11.6G across two processes.
Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding
TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse
intermittently reaches the 14G limit I impose (via "ulimit -v 14680064"). I
had experimented with different optimization levels, but that didn't help.
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Seems like it's only a matter of time before we add enough stuff to
the grammar that the build fails, period.
Yea, it doesn't look too far off.
However, I wonder why exactly it's so large, and why the backend's gram.o
isn't an even bigger problem. Maybe an effort to cut preproc.o's code
size could yield dividends?
gram.c also compiles slowly and uses a lot of memory. Roughly 8GB of memory at
the peak (just watching top) and 1m40s (with debugging disabled, temp files on
disk etc).
I don't entirely know what parse.pl actually tries to achieve. The generated
output looks more different from gram.y than I'd have imagined.
It's certainly interesting that it ends up roughly 30% larger than the .c
output bison generates. Which roughly matches the difference in memory usage.
FWIW, my late and unlamented animal gaur was also showing unhappiness with
the size of preproc.o, manifested as a boatload of warnings like
/var/tmp//cc0MHZPD.s:11594: Warning: .stabn: description field '109d3' too big, try a different debug format
which did not happen with gram.o.
I suspect we're going to have to do something about the gram.c size on its
own. It's already the slowest compilation step by a lot, even on modern
compilers.
Greetings,
Andres Freund
Hi,
On 2022-08-06 17:25:52 -0700, Noah Misch wrote:
On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Seems like it's only a matter of time before we add enough stuff to
the grammar that the build fails, period.
I wouldn't worry about that enough to work hard in advance. The RAM usage can
grow by about 55% before that's a problem. (The 14G ulimit can tolerate a
raise.) By then, the machine may be gone or have more RAM. Perhaps even
Bison will have changed its code generation. If none of those happen, I could
switch to gcc, hack things to use gcc for just preproc.o, etc.
Sure, we can hack around it in some way. But if we need such hackery to
compile postgres with a compiler, what's the point of supporting that
compiler? It's not like sunpro provides awesome static analysis or such.
Greetings,
Andres Freund
On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:
On 2022-08-06 17:25:52 -0700, Noah Misch wrote:
On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
Yikes. And it's not like newer compiler versions are likely to be forthcoming
(12.6 is newest and is from 2017...). Wonder if we should just require gcc on
solaris... There's a decent amount of stuff we could rip out in that case.
Seems like it's only a matter of time before we add enough stuff to
the grammar that the build fails, period.
I wouldn't worry about that enough to work hard in advance. The RAM usage can
grow by about 55% before that's a problem. (The 14G ulimit can tolerate a
raise.) By then, the machine may be gone or have more RAM. Perhaps even
Bison will have changed its code generation. If none of those happen, I could
switch to gcc, hack things to use gcc for just preproc.o, etc.
Sure, we can hack around it in some way. But if we need such hackery to
compile postgres with a compiler, what's the point of supporting that
compiler? It's not like sunpro provides awesome static analysis or such.
To have a need to decide that, PostgreSQL would need to grow preproc.o such
that it causes 55% higher RAM usage, and the sunpro buildfarm members extant
at that time would need to have <= 32 GiB RAM. I give a 15% chance of
reaching such conditions, and we don't gain much by deciding in advance. I'd
prefer to focus on decisions affecting more-probable outcomes.
On 2022-08-06 17:59:54 -0700, Noah Misch wrote:
On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:
Sure, we can hack around it in some way. But if we need such hackery to
compile postgres with a compiler, what's the point of supporting that
compiler? It's not like sunpro provides awesome static analysis or such.
To have a need to decide that, PostgreSQL would need to grow preproc.o such
that it causes 55% higher RAM usage, and the sunpro buildfarm members extant
at that time would need to have <= 32 GiB RAM. I give a 15% chance of
reaching such conditions, and we don't gain much by deciding in advance. I'd
prefer to focus on decisions affecting more-probable outcomes.
My point wasn't about the future - *today* a compile with normal settings
doesn't work, on a machine with a reasonable amount of ram. Who does it help
if one person can get postgres to compile with some applied magic - normal
users won't.
And it's not a cost free thing to support, e.g. I tried to build because
solaris with suncc forces me to generate with_gnu_ld when generating a
compatible Makefile.global for pgxs with meson.
Noah Misch <noah@leadboat.com> writes:
On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:
Sure, we can hack around it in some way. But if we need such hackery to
compile postgres with a compiler, what's the point of supporting that
compiler? It's not like sunpro provides awesome static analysis or such.
To have a need to decide that, PostgreSQL would need to grow preproc.o such
that it causes 55% higher RAM usage, and the sunpro buildfarm members extant
at that time would need to have <= 32 GiB RAM. I give a 15% chance of
reaching such conditions, and we don't gain much by deciding in advance. I'd
prefer to focus on decisions affecting more-probable outcomes.
I think it's the same rationale as with other buildfarm animals
representing niche systems: we make the effort to support them
in order to avoid becoming locked into a software monoculture.
There's not that many compilers in the farm besides gcc/clang/MSVC,
so I feel anyplace we can find one is valuable.
As per previous discussion, it may well be that gcc/clang are
dominating the field so thoroughly that nobody wants to develop
competitors anymore. So in the long run this may be a dead end.
But it's hard to be sure about that. For now, as long as
somebody's willing to do the work to support a compiler that's
not gcc/clang, we should welcome it.
regards, tom lane
On Sat, Aug 06, 2022 at 06:09:27PM -0700, Andres Freund wrote:
On 2022-08-06 17:59:54 -0700, Noah Misch wrote:
On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:
Sure, we can hack around it in some way. But if we need such hackery to
compile postgres with a compiler, what's the point of supporting that
compiler? It's not like sunpro provides awesome static analysis or such.
To have a need to decide that, PostgreSQL would need to grow preproc.o such
that it causes 55% higher RAM usage, and the sunpro buildfarm members extant
at that time would need to have <= 32 GiB RAM. I give a 15% chance of
reaching such conditions, and we don't gain much by deciding in advance. I'd
prefer to focus on decisions affecting more-probable outcomes.
My point wasn't about the future - *today* a compile with normal settings
doesn't work, on a machine with a reasonable amount of ram. Who does it help
if one person can get postgres to compile with some applied magic - normal
users won't.
To me, 32G is on the low side of reasonable, and omitting --enable-debug isn't
that magical. (The TMPDIR hack is optional, but I did it to lessen harm to
other users of that shared machine.)
And it's not a cost free thing to support, e.g. I tried to build because
solaris with suncc forces me to generate with_gnu_ld when generating a
compatible Makefile.global for pgxs with meson.
There may be a strong argument along those lines. Let's suppose you were to
write that revoking sunpro support would save four weeks of Andres Freund time
in the meson adoption project. I bet a critical mass of people would like
that trade. That's orthogonal to preproc.o compilation RAM usage.
Noah Misch <noah@leadboat.com> writes:
On Sat, Aug 06, 2022 at 06:09:27PM -0700, Andres Freund wrote:
And it's not a cost free thing to support, e.g. I tried to build because
solaris with suncc forces me to generate with_gnu_ld when generating a
compatible Makefile.global for pgxs with meson.
There may be a strong argument along those lines. Let's suppose you were to
write that revoking sunpro support would save four weeks of Andres Freund time
in the meson adoption project. I bet a critical mass of people would like
that trade. That's orthogonal to preproc.o compilation RAM usage.
IMO, it'd be entirely reasonable for Andres to say that *he* doesn't
want to fix the meson build scripts for niche platform X. Then
it'd be up to people who care about platform X to make that happen.
Given the current plan of supporting the Makefiles for some years
more, there wouldn't even be any great urgency in that.
regards, tom lane
Hi,
On 2022-08-06 22:55:14 -0400, Tom Lane wrote:
IMO, it'd be entirely reasonable for Andres to say that *he* doesn't
want to fix the meson build scripts for niche platform X. Then
it'd be up to people who care about platform X to make that happen.
Given the current plan of supporting the Makefiles for some years
more, there wouldn't even be any great urgency in that.
The "problem" in this case is that maintaining pgxs compatibility, as we'd
discussed at pgcon, requires emitting stuff for all the @whatever@ things in
Makefile.global.in, including with_gnu_ld. Which led me down the rabbit hole
of trying to build on solaris, with sun studio, to see if we could just remove
with_gnu_ld (and some others).
There's a lot of replacements that really aren't needed for pgxs, including
with_gnu_ld (after the patch I just sent on the "baggage" thread). I tried to
think of a way to have a 'missing' equivalent for variables filled with bogus
contents, to trigger an error when they're used. But I don't think there's
such a thing?
I haven't "really" tried because recent-ish python fails to configure on
solaris without modifications, and patching python's configure was further
than I wanted to go, but I don't foresee many issues supporting building on
solaris with gcc.
Barring minor adjustments (e.g. dragonflybsd vs freebsd), there are two
currently "supported" OSes that require some work:
- AIX, due to the symbol import / export & linking differences
- cygwin, although calling that supported right now is a stretch... I don't
think it'd be too hard, but ...
Greetings,
Andres Freund
On Sat, Aug 06, 2022 at 08:12:54PM -0700, Andres Freund wrote:
The "problem" in this case is that maintaining pgxs compatibility, as we'd
discussed at pgcon, requires emitting stuff for all the @whatever@ things in
Makefile.global.in, including with_gnu_ld. Which led me down the rabbit hole
of trying to build on solaris, with sun studio, to see if we could just remove
with_gnu_ld (and some others).
There's a lot of replacements that really aren't needed for pgxs, including
with_gnu_ld (after the patch I just sent on the "baggage" thread). I tried to
think of a way to have a 'missing' equivalent for variables filled with bogus
contents, to trigger an error when they're used. But I don't think there's
such a thing?
For some patterns of variable use, this works:

badvar = $(error do not use badvar)

ok:
	echo hello

bad:
	echo $(badvar)
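A standalone way to see the behavior (the scratch file name is arbitrary; requires GNU make): with recursive assignment (=), the $(error ...) call only fires when the variable is actually expanded, so targets that never touch badvar still build.

```shell
# Demo of the lazy $(error) pattern above; /tmp/errvar_demo.mk is an
# arbitrary scratch file.  printf writes real tab-indented recipes.
printf 'badvar = $(error do not use badvar)\nok:\n\t@echo hello\nbad:\n\t@echo $(badvar)\n' > /tmp/errvar_demo.mk
make -f /tmp/errvar_demo.mk ok                                    # prints "hello"
make -f /tmp/errvar_demo.mk bad || echo "bad target refused, as intended"
```

Note that with immediate assignment (:=) the $(error ...) would fire while the makefile is being read, breaking even targets that never use the variable.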
Andres Freund <andres@anarazel.de> writes:
On 2022-08-06 22:55:14 -0400, Tom Lane wrote:
IMO, it'd be entirely reasonable for Andres to say that *he* doesn't
want to fix the meson build scripts for niche platform X. Then
it'd be up to people who care about platform X to make that happen.
Given the current plan of supporting the Makefiles for some years
more, there wouldn't even be any great urgency in that.
The "problem" in this case is that maintaining pgxs compatibility, as we'd
discussed at pgcon, requires emitting stuff for all the @whatever@ things in
Makefile.global.in, including with_gnu_ld.
Sure, but why can't you just leave that for later by hard-wiring it
to false in the meson build? As long as you don't break the Makefile
build, no one is worse off.
I think if we want to get this past the finish line, we need to
acknowledge that the initial commit isn't going to be perfect.
The whole point of continuing to maintain the Makefiles is to
give us breathing room to fix remaining issues in a leisurely
fashion.
regards, tom lane
Hi,
On 2022-08-07 01:17:22 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2022-08-06 22:55:14 -0400, Tom Lane wrote:
IMO, it'd be entirely reasonable for Andres to say that *he* doesn't
want to fix the meson build scripts for niche platform X. Then
it'd be up to people who care about platform X to make that happen.
Given the current plan of supporting the Makefiles for some years
more, there wouldn't even be any great urgency in that.
The "problem" in this case is that maintaining pgxs compatibility, as we'd
discussed at pgcon, requires emitting stuff for all the @whatever@ things in
Makefile.global.in, including with_gnu_ld.
Sure, but why can't you just leave that for later by hard-wiring it
to false in the meson build? As long as you don't break the Makefile
build, no one is worse off.
Yea, that's what I am doing now. But it's a fair bit of work figuring out
which variables need at least approximately correct values and which don't.
It'd be nice if we had an automated way of building a lot of the extensions
out there...
I think if we want to get this past the finish line, we need to
acknowledge that the initial commit isn't going to be perfect.
The whole point of continuing to maintain the Makefiles is to
give us breathing room to fix remaining issues in a leisurely
fashion.
Wholeheartedly agreed.
Greetings,
Andres Freund
On Sun, Aug 7, 2022 at 7:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Even on a modern Linux:
$ size src/backend/parser/gram.o
text data bss dec hex filename
656568 0 0 656568 a04b8 src/backend/parser/gram.o
$ size src/interfaces/ecpg/preproc/preproc.o
text data bss dec hex filename
912005 188 7348 919541 e07f5 src/interfaces/ecpg/preproc/preproc.o
So there's something pretty bloated there. It doesn't seem like
ecpg's additional productions should justify a nigh 50% code
size increase.
Comparing gram.o with preproc.o:
$ objdump -t src/backend/parser/gram.o | grep yy | grep -v UND | awk '{print $5, $6}' | sort -r | head -n3
000000000003a24a yytable
000000000003a24a yycheck
0000000000013672 base_yyparse
$ objdump -t src/interfaces/ecpg/preproc/preproc.o | grep yy | grep -v UND | awk '{print $5, $6}' | sort -r | head -n3
000000000004d8e2 yytable
000000000004d8e2 yycheck
000000000002841e base_yyparse
The largest lookup tables are ~25% bigger (other tables are trivial in
comparison), and the function base_yyparse is about double the size,
most of which is a giant switch statement with 2510 / 3912 cases,
respectively. That difference does seem excessive. I've long wondered
if it would be possible / feasible to have stricter separation between
C, ECPG commands, and SQL. That sounds like a huge amount of
work, though.
Playing around with the compiler flags on preproc.c, I get these
compile times, gcc memory usage as reported by /usr/bin/time -v , and
symbol sizes (non-debug build):
-O2:
time 8.0s
Maximum resident set size (kbytes): 255884
-O1:
time 6.3s
Maximum resident set size (kbytes): 170636
000000000004d8e2 yytable
000000000004d8e2 yycheck
00000000000292de base_yyparse
-O0:
time 2.9s
Maximum resident set size (kbytes): 153148
000000000004d8e2 yytable
000000000004d8e2 yycheck
000000000003585e base_yyparse
Note that -O0 bloats the binary, probably because it's not using a jump
table anymore. -O1 might be worth it just to reduce build times for
slower animals, even if Noah reported this didn't help the issue
upthread. I suspect it wouldn't slow down production use much since
the output needs to be compiled anyway.
--
John Naylor
EDB: http://www.enterprisedb.com
On 2022-08-07 Su 02:46, Andres Freund wrote:
I think if we want to get this past the finish line, we need to
acknowledge that the initial commit isn't going to be perfect.
The whole point of continuing to maintain the Makefiles is to
give us breathing room to fix remaining issues in a leisurely
fashion.
Wholeheartedly agreed.
I'm waiting for that first commit so I can start working on the
buildfarm client changes. Ideally (from my POV) this would happen by
early Sept when I will be leaving on a trip for some weeks, and this
would be a good project to take with me. Is that possible?
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Hi,
On 2022-08-08 11:14:58 -0400, Andrew Dunstan wrote:
I'm waiting for that first commit so I can start working on the
buildfarm client changes. Ideally (from my POV) this would happen by
early Sept when I will be leaving on a trip for some weeks, and this
would be a good project to take with me. Is that possible?
Yes, I think that should be possible. I think what's required before then is
1) a minimal docs patch 2) a discussion about where to store test results
etc. It'll clearly not be finished, but we agreed that a project like this can
only be done incrementally after a certain stage...
I've been doing a lot of cleanup over the last few days, and I'll send a new
version soon and then kick off the discussion for 2).
Greetings,
Andres Freund