GNU/Hurd portability patches
Hi,
please find attached the current patches required to get master built
and the testsuites run on Debian's hurd-i386 port. I have not had the
time to test the hurd-amd64 port in the same fashion, but will do so
next.
As mentioned in this thread[1], one needs a fairly recent kernel for the
high-resolution timers in order to avoid regression test failures, and
the buildfarm client run does not like debug_parallel_query.
Michael
[1]: /messages/by-id/685a3ccd.170a0220.2d27e6.2232@mx.google.com
Michael Banck <mbanck@gmx.net> writes:
please find attached the current patches required to get master built
and the testsuites run on Debian's hurd-i386 port. I have not had the
time to test the hurd-amd64 port in the same fashion, but will do so
next.
Pushed, after some fooling with the comments and commit messages.
Please go ahead and set up a Hurd buildfarm member, so that it
stays fixed.
regards, tom lane
Hi,
On Tue, Jul 01, 2025 at 12:41:50PM -0400, Tom Lane wrote:
Michael Banck <mbanck@gmx.net> writes:
please find attached the current patches required to get master built
and the testsuites run on Debian's hurd-i386 port. I have not had the
time to test the hurd-amd64 port in the same fashion, but will do so
next.
Pushed, after some fooling with the comments and commit messages.
Thanks! Also for back-patching them.
Regarding the comment,
| * If <limits.h> didn't define IOV_MAX, define our own. X/Open requires at
| * least 16. (GNU Hurd apparently feel that they're not bound by X/Open,
| * because they don't define this symbol at all.)
I personally don't care much about those missing limits on the Hurd, but
Thomas mentioned in
CA+hUKG+tqFVY7Fi=WBvZ6-UsATjcPNBDtphDm7YLjevm2kxSvw@mail.gmail.com (and
Samuel Thibault cited the same sentence to me now when I discussed the
commit with him) that POSIX said "A definition of one of the symbolic
constants in the following list shall be omitted from <limits.h> on
specific implementations where the corresponding value is equal to or
greater than the stated minimum, but is unspecified". So "requires at
least 16" might be a bit too strong here, AIUI.
Please go ahead and set up a Hurd buildfarm member, so that it
stays fixed.
Right, will look into this next.
Michael
Michael Banck <mbanck@gmx.net> writes:
On Tue, Jul 01, 2025 at 12:41:50PM -0400, Tom Lane wrote:
| * If <limits.h> didn't define IOV_MAX, define our own. X/Open requires at
| * least 16. (GNU Hurd apparently feel that they're not bound by X/Open,
| * because they don't define this symbol at all.)
I personally don't care much about those missing limits on the Hurd, but
Thomas mentioned in
CA+hUKG+tqFVY7Fi=WBvZ6-UsATjcPNBDtphDm7YLjevm2kxSvw@mail.gmail.com (and
Samuel Thibault cited the same sentence to me now when I discussed the
commit with him) that POSIX said "A definition of one of the symbolic
constants in the following list shall be omitted from <limits.h> on
specific implementations where the corresponding value is equal to or
greater than the stated minimum, but is unspecified". So "requires at
least 16" might be a bit too strong here, AIUI.
Oh, I missed that bit of the spec. I think "requires at least 16"
is correct anyway, but the parenthetical remark isn't really right.
Not sure if it's worth changing --- the end result is the same in
any case.
regards, tom lane
On Tue, Jul 01, 2025 at 10:01:37PM +0200, Michael Banck wrote:
On Tue, Jul 01, 2025 at 12:41:50PM -0400, Tom Lane wrote:
Pushed, after some fooling with the comments and commit messages.
Thanks! Also for back-patching them.
Catching up on this thread after-the-fact, specifically looking at
29213636e6cd as I did the original check tweaked here for O_RDONLY.
Agreed that a backpatch should be OK as done. The buildfarm looks OK
currently.
--
Michael
Hello hackers,
02.07.2025 02:53, Michael Paquier wrote:
Catching up on this thread after-the-fact, specifically looking at
29213636e6cd as I did the original check tweaked here for O_RDONLY.
Agreed that a backpatch should be OK as done. The buildfarm looks OK
currently.
Three months later we can see a number of failures produced by that
animal on several branches, e.g. [1]:
timed out after 3600 secs
---
# +++ regress install-check in src/test/modules/test_shm_mq +++
# using postmaster on /home/demo/build-farm-19.1/buildroot/tmp/buildfarm-GBEDDQ, port 5678
---- hang ----
I've spun up Debian Hurd locally, using [2], and reproduced this, just by
running: `make -s check -C src/test/modules/test_shm_mq/` in a loop.
It's not that easy to see a backtrace of the running processes on that OS,
but with some debug logging (attached), I can see that the test backend
process or shm_mq background worker just gets stuck shortly after poll().
One example:
echo "
log_min_messages = DEBUG5
log_line_prefix = '%m [%p:%l] %q%a '
log_connections = 'true'
log_disconnections = 'true'
log_statement = 'all'
autovacuum = off
" > /tmp/extra.config
for i in {1..100}; do echo "ITERATION $i"; TEMP_CONFIG=/tmp/extra.config \
timeout 60 make -s check -C src/test/modules/test_shm_mq/ || break; done; \
ps -A | grep postgres
...
ITERATION 9
ok 1 - test_shm_mq 14498 ms
ITERATION 10
ok 1 - test_shm_mq 34759 ms
ITERATION 11
# +++ regress check in src/test/modules/test_shm_mq +++
# initializing database system by copying initdb template
# using temp instance on port 58928 with PID 28848
make: *** [../../../../src/makefiles/pgxs.mk:451: check] Terminated
demo 28848 p0 So 0:00.03 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28849 - Ssfo 0:00.06 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28850 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28851 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28852 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28853 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28855 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28856 - Ssfo 0:00.00 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28862 - Ssfo 0:01.13 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
demo 28866 - Ssfo 0:00.87 postgres -D /home/demo/postgresql/src/test/modules/test_shm_mq/tmp_check/data -F -c
listen_addresses= -k /tmp/pg_regress-McfSYJ
The process 28866 is stuck; it doesn't respond to SIGTERM.
src/test/modules/test_shm_mq/log/postmaster.log contains:
2025-09-21 05:48:00.749 BST [28862:23] pg_regress/test_shm_mq LOG: statement: SELECT test_shm_mq(100, (select
string_agg(chr(32+(random()*95)::int), '') from generate_series(1,(100+200*random())::int)), 10000, 1);
2025-09-21 05:48:00.750 BST [28848:59] DEBUG: postmaster received pmsignal signal
2025-09-21 05:48:00.750 BST [28848:60] DEBUG: registering background worker ""
2025-09-21 05:48:00.750 BST [28848:61] DEBUG: assigned pm child slot 240 for bgworker
2025-09-21 05:48:00.750 BST [28848:62] DEBUG: starting background worker process ""
2025-09-21 05:48:00.751 BST [28866:1] DEBUG: find_in_path: trying
"/home/demo/postgresql/tmp_install/usr/local/pgsql/lib/test_shm_mq"
2025-09-21 05:48:00.751 BST [28866:2] DEBUG: find_in_path: trying
"/home/demo/postgresql/tmp_install/usr/local/pgsql/lib/test_shm_mq.so"
...
!!!shm_mq_receive_bytes[28866]| before WaitLatch
!!!WaitEventSetWaitBlock POLL[28866]| rc: 1
^^^ the last message from that process ^^^
Other run:
2025-09-21 06:03:21.041 BST [29427:23] pg_regress/test_shm_mq LOG: statement: SELECT test_shm_mq(100, (select
string_agg(chr(32+(random()*95)::int), '') from generate_series(1,(100+200*random())::int)), 10000, 1);
2025-09-21 06:03:21.042 BST [29412:64] DEBUG: postmaster received pmsignal signal
2025-09-21 06:03:21.042 BST [29412:65] DEBUG: registering background worker ""
2025-09-21 06:03:21.042 BST [29412:66] DEBUG: assigned pm child slot 239 for bgworker
2025-09-21 06:03:21.042 BST [29412:67] DEBUG: starting background worker process ""
2025-09-21 06:03:21.043 BST [29431:1] DEBUG: find_in_path: trying
"/home/demo/postgresql/tmp_install/usr/local/pgsql/lib/test_shm_mq"
2025-09-21 06:03:21.043 BST [29431:2] DEBUG: find_in_path: trying
"/home/demo/postgresql/tmp_install/usr/local/pgsql/lib/test_shm_mq.so"
!!!WaitEventSetWaitBlock POLL[29431]| rc: 1
!!!WaitEventSetWaitBlock[29431] 1
!!!WaitEventSetWaitBlock[29431] 2
!!!WaitEventSetWaitBlock[29431] 3
!!!WaitEventSetWaitBlock[29431] 4
!!!WaitEventSetWaitBlock[29431] 5
^^^ the last message from the process 29431 ^^^
So it seems to me that Hurd is not mature enough yet to test Postgres.
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-19%2007%3A29%3A06
[2]: https://cdimage.debian.org/cdimage/ports/latest/hurd-amd64/debian-hurd-amd64-20250807.img.tar.xz
Best regards,
Alexander
Attachments:
waitventset-debugging.patch (text/x-patch, +16/-0)
Hi,
On Sun, Sep 21, 2025 at 09:00:00AM +0300, Alexander Lakhin wrote:
02.07.2025 02:53, Michael Paquier wrote:
Catching up on this thread after-the-fact, specifically looking at
29213636e6cd as I did the original check tweaked here for O_RDONLY.
Agreed that a backpatch should be OK as done. The buildfarm looks OK
currently.
Three months later we can see a number of failures produced by that
animal on several branches, e.g. [1]:
timed out after 3600 secs
Right, I've noticed them as well of course, but did not have time to
take a closer look yet. This timeout in test_shm_mq happens on 32bit
hurd-i386 as well, btw.
It's not that easy to see a backtrace of the running processes on that OS,
but with some debug logging (attached), I can see that the test backend
process or shm_mq background worker just gets stuck shortly after poll().
Thanks for taking a deeper look.
So it seems to me that Hurd is not mature enough yet to test Postgres.
That is a bit harsh; this issue should be looked into, but I would not
say it is not mature enough to test Postgres.
Michael
On Sun, Sep 21, 2025 at 09:00:00AM +0300, Alexander Lakhin wrote:
So it seems to me that Hurd is not mature enough yet to test Postgres.
I'd say that it is likely not mature enough for production. In terms
of testing, that seems kind of OK. However, failures like the one you
are reporting here bring noise in the buildfarm, meaning that we would
perhaps tend to ignore reports that are in fact legit because we don't
really know what would be Hurd-related or Postgres-related. Or we
could get mixes of both.
--
Michael
Michael Paquier <michael@paquier.xyz> writes:
On Sun, Sep 21, 2025 at 09:00:00AM +0300, Alexander Lakhin wrote:
So it seems to me that Hurd is not mature enough yet to test Postgres.
I'd say that it is likely not mature enough for production. In terms
of testing, that seems kind of OK. However, failures like the one you
are reporting here bring noise in the buildfarm, meaning that we would
perhaps tend to ignore reports that are in fact legit because we don't
really know what would be Hurd-related or Postgres-related. Or we
could get mixes of both.
Yeah, I think the tendency would be to write off any failures as
"Hurd teething pains" unless there are similar reports from other
animals. As long as the failure rate is pretty low, I'm okay
with that.
regards, tom lane
On Mon, Sep 22, 2025 at 07:02:25AM +0900, Michael Paquier wrote:
On Sun, Sep 21, 2025 at 09:00:00AM +0300, Alexander Lakhin wrote:
So it seems to me that Hurd is not mature enough yet to test Postgres.
I'd say that it is likely not mature enough for production. In terms
of testing, that seems kind of OK.
Ack.
However, failures like the one you are reporting here bring noise in
the buildfarm, meaning that we would perhaps tend to ignore reports
that are in fact legit because we don't really know what would be
Hurd-related or Postgres-related.
I will keep an eye on it.
There have been two (infrequent) failures in the isolation tests as
well, which I haven't had time to investigate further:
In PG15:
|test multiple-row-versions ... FAILED (test process exited with exit code 1) 145260 ms
This one is a segfault; it happened twice so far, and only on REL_15_STABLE:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-18%2007%3A57%3A40
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-03%2007%3A36%3A45
|setup failed: server closed the connection unexpectedly
| This probably means the server terminated abnormally
| before or while processing the request.
|/hurd/crash: [...]/buildroot/REL_15_STABLE/inst/bin/postgres -D data-C(25142) crashed, signal {no:11, code:2, error:2}, exception {1, code:2, subcode:0}, PCs: {
| 0x1019b4d34 0x1028ee2a0 0x10249e1ac 0x1025da71f 0x102596611 0x10049fe63,
| 0x102497aec 0x1028e674e 0x1024ada52 0x1024aeb2a 0x1024a785b 0x10291f5b7 0x10291f
| 625 0x1024c7242 0x1024986a2 0x1024c72c0,
| 0x102497aec 0x102572ace
| }, killing task
In PG17:
|not ok 98 - stats 2100 ms
|diff -U3 buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out
|--- buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out 2025-09-15 22:06:24.000000000 +0100
|+++ buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out 2025-09-15 22:23:05.000000000 +0100
|@@ -1445,7 +1445,7 @@
|
| name |pg_stat_get_function_calls|total_above_zero|self_above_zero
| --------------+--------------------------+----------------+---------------
|-test_stat_func| 1|t |t
|+test_stat_func| 1|f |f
| (1 row)
This one happened twice as well, and so far only on REL_17_STABLE:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-15%2021%3A06%3A17
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-13%2008%3A04%3A05
This might be due to the HPET timer not always being monotonic - there
has been an independent report via the Debian autobuilder and a GNU Mach
fix was committed last night, I'll check whether this can be
reproduced/confirmed-fixed with this change:
https://lists.gnu.org/archive/html/bug-hurd/2025-09/msg00020.html
https://salsa.debian.org/hurd-team/gnumach/-/commit/06079a8d212817ee0365f318bd90b67bf56bfb06
Michael
Hello Michael,
Thank you for paying attention to this!
Maybe I was wrong and we can at least categorize these failures -- I hope
their number is finite, but my point was that it's hardly possible to use
the information that fruitcrow gives us to improve Postgres.
22.09.2025 10:22, Michael Banck wrote:
On Mon, Sep 22, 2025 at 07:02:25AM +0900, Michael Paquier wrote:
However, failures like the one you are reporting here bring noise in
the buildfarm, meaning that we would perhaps tend to ignore reports
that are in fact legit because we don't really know what would be
Hurd-related or Postgres-related.
I will keep an eye on it.
There have been two (infrequent) failures in the isolation tests as
well, which I haven't had time to investigate further:
In PG17:
|not ok 98 - stats 2100 ms
|diff -U3 buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out
|--- buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out 2025-09-15 22:06:24.000000000 +0100
|+++ buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out 2025-09-15 22:23:05.000000000 +0100
|@@ -1445,7 +1445,7 @@
|
| name |pg_stat_get_function_calls|total_above_zero|self_above_zero
| --------------+--------------------------+----------------+---------------
|-test_stat_func| 1|t |t
|+test_stat_func| 1|f |f
| (1 row)
This one happened twice as well, and so far only on REL_17_STABLE:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-15%2021%3A06%3A17
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-13%2008%3A04%3A05
This might be due to the HPET timer not always being monotonic - there
has been an independent report via the Debian autobuilder and a GNU Mach
fix was committed last night, I'll check whether this can be
reproduced/confirmed-fixed with this change:
https://lists.gnu.org/archive/html/bug-hurd/2025-09/msg00020.html
https://salsa.debian.org/hurd-team/gnumach/-/commit/06079a8d212817ee0365f318bd90b67bf56bfb06
I reproduced the issue locally and found that
/* total elapsed time in this function call */
INSTR_TIME_SET_CURRENT(total);
INSTR_TIME_SUBTRACT(total, fcu->start);
sometimes gives total.ticks = 0.
I tried the test program from [2] and got on my VM:
went backwards 0 out of 10000000 times
(three times)
But I've created my own test program (see attached), which shows:
for i in {1..1000}; do printf "ITERATION $i "; ./tt 100 || break; done
ITERATION 1 t1: 55873639081080, t2: 55873639084090, t2 - t1: 3010 (r: 4950)
ITERATION 2 t1: 55873641019440, t2: 55873641025700, t2 - t1: 6260 (r: 4950)
ITERATION 3 t1: 55873642794200, t2: 55873642797130, t2 - t1: 2930 (r: 4950)
...
ITERATION 23 t1: 55873675001590, t2: 55873675001590, t2 - t1: 0 (r: 4950)
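(For reference, a hypothetical reconstruction of this style of check; the
attached tt.c itself is not reproduced here, so names and details below are
my own. Two consecutive CLOCK_MONOTONIC readings are taken with a little
busy work in between, and the program exits nonzero when the delta is not
positive, which is what breaks the loop above.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static long long nsec(const struct timespec *ts)
{
    return (long long) ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 100;
    long long r = 0;
    struct timespec t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    for (int i = 0; i < n; i++)     /* a little busy work between readings */
        r += i;
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("t1: %lld, t2: %lld, t2 - t1: %lld (r: %lld)\n",
           nsec(&t1), nsec(&t2), nsec(&t2) - nsec(&t1), r);
    return (nsec(&t2) - nsec(&t1) > 0) ? 0 : 1; /* nonzero exit stops the loop */
}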
I don't know how to test the patch committed, but if you can, that would
be nice.
Best regards,
Alexander
Attachments:
tt.c (text/x-csrc)
On Mon, Sep 22, 2025 at 11:30:00PM +0300, Alexander Lakhin wrote:
I reproduced the issue locally and found that
/* total elapsed time in this function call */
INSTR_TIME_SET_CURRENT(total);
INSTR_TIME_SUBTRACT(total, fcu->start);
sometimes gives total.ticks = 0.
I tried the test program from [2] and got on my VM:
went backwards 0 out of 10000000 times
(three times)
But I've created my own test program (see attached), which shows:
for i in {1..1000}; do printf "ITERATION $i "; ./tt 100 || break; done
ITERATION 1 t1: 55873639081080, t2: 55873639084090, t2 - t1: 3010 (r: 4950)
ITERATION 2 t1: 55873641019440, t2: 55873641025700, t2 - t1: 6260 (r: 4950)
ITERATION 3 t1: 55873642794200, t2: 55873642797130, t2 - t1: 2930 (r: 4950)
...
ITERATION 23 t1: 55873675001590, t2: 55873675001590, t2 - t1: 0 (r: 4950)
I don't know how to test the patch committed, but if you can, that would
be nice.
We've had this exact same issue of a clock going backwards with one of
the netbsd animals, on an older version no longer supported by upstream;
that animal has since been kicked out of the buildfarm, as far as I
recall. It created some disturbance in the regression tests, causing
EXPLAIN plan outputs we did not expect (extra negative signs) and the
animal showing red periodically. So yes, this random factor would be
annoying in the buildfarm.
--
Michael
Hi,
On Mon, Sep 22, 2025 at 11:30:00PM +0300, Alexander Lakhin wrote:
Maybe I was wrong and we can at least categorize these failures -- I hope
their number is finite, but my point was that it's hardly possible to use
the information that fruitcrow gives us to improve Postgres.
Or, for that matter, to improve GNU Mach/Hurd...
22.09.2025 10:22, Michael Banck wrote:
There have been two (infrequent) failures in the isolation tests as
well, which I haven't had time to investigate further:
In PG17:
|not ok 98 - stats 2100 ms
|diff -U3 buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out
|--- buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/expected/stats_1.out 2025-09-15 22:06:24.000000000 +0100
|+++ buildroot/REL_17_STABLE/pgsql.build/src/test/isolation/output_iso/results/stats.out 2025-09-15 22:23:05.000000000 +0100
|@@ -1445,7 +1445,7 @@
|
| name |pg_stat_get_function_calls|total_above_zero|self_above_zero
| --------------+--------------------------+----------------+---------------
|-test_stat_func| 1|t |t
|+test_stat_func| 1|f |f
| (1 row)
This one happened twice as well, and so far only on REL_17_STABLE:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-15%2021%3A06%3A17
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fruitcrow&dt=2025-09-13%2008%3A04%3A05
This might be due to the HPET timer not always being monotonic - there
has been an independent report via the Debian autobuilder and a GNU Mach
fix was committed last night, I'll check whether this can be
reproduced/confirmed-fixed with this change:
https://lists.gnu.org/archive/html/bug-hurd/2025-09/msg00020.html
https://salsa.debian.org/hurd-team/gnumach/-/commit/06079a8d212817ee0365f318bd90b67bf56bfb06
I reproduced the issue locally and found that
/* total elapsed time in this function call */
INSTR_TIME_SET_CURRENT(total);
INSTR_TIME_SUBTRACT(total, fcu->start);
sometimes gives total.ticks = 0.
I tried the test program from [2] and got on my VM:
went backwards 0 out of 10000000 times
(three times)
But I've created my own test program (see attached), which shows:
for i in {1..1000}; do printf "ITERATION $i "; ./tt 100 || break; done
ITERATION 1 t1: 55873639081080, t2: 55873639084090, t2 - t1: 3010 (r: 4950)
ITERATION 2 t1: 55873641019440, t2: 55873641025700, t2 - t1: 6260 (r: 4950)
ITERATION 3 t1: 55873642794200, t2: 55873642797130, t2 - t1: 2930 (r: 4950)
...
ITERATION 23 t1: 55873675001590, t2: 55873675001590, t2 - t1: 0 (r: 4950)
I don't know how to test the patch committed, but if you can, that would
be nice.
Thanks for the test. I ran the stats test with the GNU Mach patch and
while it seemed to help, it did error out eventually. However, your test
case is much better/faster and I also see 0 deltas after a few hundred
to a few thousand iterations.
I'll report that on their development list; it looks like they have not
plugged all the holes yet.
Michael
Hi,
On Wed, Sep 24, 2025 at 08:31:27AM +0900, Michael Paquier wrote:
We've had this exact same issue of a clock going backwards with one of
the netbsd animals on an older version not supported anymore by
upstream and that has been kicked out of the buildfarm, as far as I
recall. This has created some disturbance in the regression tests
causing EXPLAIN plan outputs we did not expect, in terms of extra
negative signs and the animal showing red periodically.
This was the case initially on 32bit Hurd until I configured it to use
APIC (which is a requirement for HPET timers). So the clock is no longer
going backwards in an obvious way; and apart from the stats isolation
test, I have not seen issues in this area.
So yes, this random factor would be annoying in the buildfarm.
How much timer resolution do we require from the system? GNU Mach seems
to (at least try to) guarantee that the timer won't go backwards, but it
does not guarantee (currently) that two consecutive clock_gettime()
calls will return something different in all cases.
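For what it's worth, here is a minimal probe of what the system itself
advertises, using POSIX clock_getres() (a sketch; the advertised
resolution can differ from the effective resolution that pg_test_timing
measures):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* Ask the system for the claimed resolution of CLOCK_MONOTONIC. */
    if (clock_getres(CLOCK_MONOTONIC, &res) != 0)
    {
        perror("clock_getres");
        return 1;
    }
    printf("CLOCK_MONOTONIC resolution: %lld.%09ld s\n",
           (long long) res.tv_sec, res.tv_nsec);
    return 0;
}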
Michael
On Wed, Sep 24, 2025 at 09:41:19AM +0200, Michael Banck wrote:
On Wed, Sep 24, 2025 at 08:31:27AM +0900, Michael Paquier wrote:
So yes, this random factor would be annoying in the buildfarm.
How much timer resolution do we require from the system? GNU Mach seems
to (at least try to) guarantee that the timer won't go backwards, but it
does not guarantee (currently) that two consecutive clock_gettime()
calls will return something different in all cases.
This is the pg_test_timing output on my hurd-i386 VM with
pg_test_timing from HEAD:
Average loop time including overhead: 13866,64 ns
Histogram of timing durations:
<= ns % of total running % count
0 0,0510 0,0510 122
1 0,0000 0,0510 0
3 0,0000 0,0510 0
7 0,0000 0,0510 0
15 0,0000 0,0510 0
31 0,0000 0,0510 0
63 0,0000 0,0510 0
127 0,0000 0,0510 0
255 0,0000 0,0510 0
511 0,0000 0,0510 0
1023 0,0004 0,0514 1
2047 0,0000 0,0514 0
4095 98,9320 98,9834 236681
8191 0,8845 99,8679 2116
16383 0,0393 99,9072 94
32767 0,0343 99,9415 82
[...]
Observed timing durations up to 99,9900%:
ns % of total running % count
0 0,0510 0,0510 122
729 0,0004 0,0514 1
3519 0,0004 0,0518 1
3630 0,0130 0,0648 31
3640 0,1651 0,2299 395
3650 0,7449 0,9748 1782
3660 2,3395 3,3143 5597
Clearly those aren't very precise (running Debian 13 GNU/Linux on the
same host in the same qemu/kvm fashion, I get an average loop time
including overhead of around 30ns), but I assumed that the 122 0ns
entries would be the problem; however Hannu reported back in 2024 that
he saw something similar on his Macbook Air M1:
/messages/by-id/CAMT0RQSbzeJN+nPo_QXib-P62rgez=dJxoaTURcN1FYPoLpQPg@mail.gmail.com
|Per loop time including overhead: 21.54 ns
|Histogram of timing durations:
| <= ns % of total running % count
| 0 49.1655 49.1655 68481688
| 1 0.0000 49.1655 0
| 3 0.0000 49.1655 0
| 7 0.0000 49.1655 0
| 15 0.0000 49.1655 0
| 31 0.0000 49.1655 0
| 63 50.6890 99.8545 70603742
| 127 0.1432 99.9976 199411
| 255 0.0015 99.9991 2065
I wonder what is going on here: was that a fluke, or is it not related
to the stats isolation test failure after all? Has anybody else tried
the updated pg_test_timing on Apple hardware, and could they possibly
run the tt.c test case from Alexander?
btw, the stats test failed in a similar way on hamerkop (Windows Server
2016) once, 35 days ago:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-08-19%2013%3A56%3A17
Michael
Michael Banck <mbanck@gmx.net> writes:
How much timer resolution do we require from the system? GNU Mach seems
to (at least try to) guarantee that the timer won't go backwards, but it
does not guarantee (currently) that two consecutive clock_gettime()
calls will return something different in all cases.
I think it is reasonable to require the clock to not go backwards
during a test run, but it's not at all reasonable to require the
clock to advance by more than zero between two successive readings.
We used to encounter the no-advance case all the time, back when
machines had clock resolutions measured in milliseconds. It's
relatively rare now though, so it's possible that some test case
has crept in that expects that. But I'd call it a bug in the
test case if so.
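To make that distinction concrete, here is a sketch in the spirit of the
"went backwards N out of M times" program mentioned upthread (my
illustration, not that program): backward steps must be zero, while
zero-advance steps are perfectly legal on a coarse clock.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec prev, cur;
    long backwards = 0, stood_still = 0;
    const long iters = 10000000;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (long i = 0; i < iters; i++)
    {
        long long d;

        clock_gettime(CLOCK_MONOTONIC, &cur);
        d = (cur.tv_sec - prev.tv_sec) * 1000000000LL +
            (cur.tv_nsec - prev.tv_nsec);
        if (d < 0)
            backwards++;        /* a genuine bug in the clock */
        else if (d == 0)
            stood_still++;      /* legal on a coarse clock */
        prev = cur;
    }
    printf("went backwards %ld, stood still %ld, out of %ld times\n",
           backwards, stood_still, iters);
    return backwards ? 1 : 0;
}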
It'd be interesting to see the output of a pg_test_timing run
from your Hurd machine.
regards, tom lane
Michael Banck <mbanck@gmx.net> writes:
This is the pg_test_timing output on my hurd-i386 VM with
pg_test_timing from HEAD:
Ask and ye shall receive ... sorry for not reading the whole thread.
I wonder what is going on here: was that a fluke, or is it not related
to the stats isolation test failure after all? Has anybody else tried
the updated pg_test_timing on Apple hardware, and could they possibly
run the tt.c test case from Alexander?
Yeah, I see zero-ns outputs on a couple different Apple M-series
machines. This is the output on the M4 Mini that runs the sifaka
and indri BF animals:
$ pg_test_timing
Testing timing overhead for 3 seconds.
Average loop time including overhead: 17.22 ns
Histogram of timing durations:
<= ns % of total running % count
0 58.8235 58.8235 102495613
1 0.0000 58.8235 0
3 0.0000 58.8235 0
7 0.0000 58.8235 0
15 0.0000 58.8235 0
31 0.0000 58.8235 0
63 41.1229 99.9464 71653499
127 0.0502 99.9966 87421
255 0.0026 99.9992 4522
511 0.0000 99.9992 56
1023 0.0001 99.9993 117
2047 0.0001 99.9994 164
4095 0.0003 99.9997 558
8191 0.0003 100.0000 501
16383 0.0000 100.0000 50
32767 0.0000 100.0000 17
65535 0.0000 100.0000 0
131071 0.0000 100.0000 1
Observed timing durations up to 99.9900%:
ns % of total running % count
0 58.8235 58.8235 102495613
41 13.7077 72.5313 23884717
42 27.4151 99.9464 47768782
83 0.0277 99.9741 48304
84 0.0140 99.9881 24425
125 0.0084 99.9966 14692
...
107083 0.0000 100.0000 1
Those animals are not showing failures, so we can't blame
"clock didn't advance" as a problem in itself. However,
the thing that jumps out at me from your results is that
the clock resolution seems to be only 3 to 4 us on Hurd:
Histogram of timing durations:
<= ns % of total running % count
0 0,0510 0,0510 122
1 0,0000 0,0510 0
3 0,0000 0,0510 0
7 0,0000 0,0510 0
15 0,0000 0,0510 0
31 0,0000 0,0510 0
63 0,0000 0,0510 0
127 0,0000 0,0510 0
255 0,0000 0,0510 0
511 0,0000 0,0510 0
1023 0,0004 0,0514 1
2047 0,0000 0,0514 0
4095 98,9320 98,9834 236681
8191 0,8845 99,8679 2116
It seems plausible that the execution time of the stats
test's function-under-test is so short that it sometimes
doesn't register as more than zero on a machine with poor
clock resolution. It looks like that test only calls the
test function once or twice before checking that it's
accumulated some runtime, and the test function is nothing
more than
CREATE FUNCTION test_stat_func() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN END;$$;
I'd call this a bug in that test TBH. It'd be saner to
make the function do something like pg_sleep for 1ms.
regards, tom lane
Hello Michael and Tom,
Thank you for spending time on this!
24.09.2025 13:45, Michael Banck wrote:
btw, the stats test failed in a similar way on hamerkop (Windows Server
2016) once, 35 days ago:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2025-08-19%2013%3A56%3A17
Yes, and all such failures are counted at [1].
And we had discussed that (Windows-specific) anomaly too [2]. Probably
something has changed in that environment, as we have seen no such
failures for the last month.
How much timer resolution do we require from the system? GNU Mach seems
to (at least try to) guarantee that the timer won't go backwards, but it
does not guarantee (currently) that two consecutive clock_gettime()
calls will return something different in all cases.
Regarding the lowest timer resolution, as I mentioned at [3], 32k_counter
gives only 0.030517 sec...
[1]: https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#subscription.sql_sporadically_fails_on_hamerkop_due_to_zero_time_difference
[2]: /messages/by-id/CANhcyEX4hH9POyTM3vh=58newEF0=qgK46xF5i-RDir2zAZ4og@mail.gmail.com
[3]: /messages/by-id/af1d252c-738f-46ab-99c6-a00e0d65aa04@gmail.com
Best regards,
Alexander
Alexander Lakhin <exclusion@gmail.com> writes:
24.09.2025 13:45, Michael Banck wrote:
How much timer resolution do we require from the system? GNU Mach seems
to (at least try to) guarantee that the timer won't go backwards, but it
does not guarantee (currently) that two consecutive clock_gettime()
calls will return something different in all cases.
Regarding the lowest timer resolution, as I mentioned at [3], 32k_counter
gives only 0.030517 sec...
We are currently doing a short pg_test_timing run in every BF run,
but with only a cursory regex-based sanity check on the output.
Since it's a TAP test, we could easily report the full output in
the TAP log without causing problems. I was already thinking about
doing that, and if there's some question about the minimum expected
timer resolution then it's really silly to not be capturing that
data.
I will go do that, and in a few days' time we should have enough
reports to see what we can realistically expect.
regards, tom lane
Hi,
On Wed, Sep 24, 2025 at 10:28:46AM -0400, Tom Lane wrote:
It seems plausible that the execution time of the stats
test's function-under-test is so short that it sometimes
doesn't register as more than zero on a machine with poor
clock resolution. It looks like that test only calls the
test function once or twice before checking that it's
accumulated some runtime, and the test function is nothing
more than
CREATE FUNCTION test_stat_func() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN END;$$;
I'd call this a bug in that test TBH. It'd be saner to
make the function do something like pg_sleep for 1ms.
I did that in the attached; so far, my Hurd VM has run the stats test more
than 1000 times with it without a failure. I have the loop running until
10000 iterations; I'll report back tomorrow.
Michael