Potential "AIO / io workers" inter-worker locking issue in PG18?

Started by Marco Boeringa · 6 months ago · 56 messages · bugs
#1 Marco Boeringa
marco@boeringa.demon.nl

Hi,

I currently run PG18 + PostGIS 3.6.0 on an Ubuntu 24.04 guest running as
a Windows 10 Hyper-V virtual machine.

The machine is a dedicated refurbished HP Z840 local workstation with
2x22 cores (E5-2699 v4) with 512 GB RAM and a 10 TB NVMe raid-0, with
the Ubuntu guest having 400 GB RAM available.

On this machine, which is dedicated to just one custom-written
geoprocessing workflow involving OpenStreetMap data, I have successfully
processed data up to the global OpenStreetMap Facebook Daylight
distribution, including a > 2.4 billion record polygon table for all
Facebook Daylight buildings. So this has proven a very capable system.

However, after upgrading to PG18 and switching to the "io_method =
worker" setting (tested with 3, 5, 16 and 22 workers), I am seeing what
appears to be a major problem with io workers potentially getting into
some sort of locking conflict that takes hours to resolve.

The custom-written geoprocessing workflow uses Python multi-threading
based on the Python 'concurrent.futures' framework, in combination with
either pyodbc or psycopg2 as database connector, to implement a powerful
parallel processing solution that speeds up some of the computationally
intensive tasks (which use UPDATEs); I generally use up to 44 threads to
fully saturate the dual-CPU 44-core system. The custom code creates a
pool of jobs for the threads to process, and is designed to minimize
inter-thread locking issues by taking PostgreSQL page locality into
account (although the actual records to process are assigned not by
page but by unique IDs in the tables). Basically, the code is designed
such that different threads never attempt to access the same database
pages, as each thread gets its own unique pages assigned, thus avoiding
inter-thread locking conflicts. This has worked really well in the past,
with system usage maximized over all cores and significantly faster
processing. Jobs are implemented as database VIEWs that point to the
records to process via the unique ID of each. These views must of course
be read by each thread, which is probably where the PG18 io workers
kick in.
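For illustration, the job-pool pattern described above can be sketched roughly as follows. This is a minimal sketch with invented names (process_job, run_pool, the chunking by ID range); in the real workflow the per-job work is an UPDATE issued through pyodbc or psycopg2, which is stubbed out here:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical sketch of the described job-pool pattern. Each job is a
# contiguous range of unique IDs, chosen so that two threads never touch
# the same database pages. The real workflow runs an UPDATE through
# pyodbc/psycopg2 here instead of this stub.

def process_job(id_range):
    lo, hi = id_range
    # Placeholder for: connect, run UPDATE ... WHERE objectid BETWEEN lo AND hi
    return hi - lo + 1  # pretend we processed this many records

def run_pool(max_id, chunk_size=1000, threads=44):
    # Partition the ID space into non-overlapping chunks, one per job.
    jobs = [(lo, min(lo + chunk_size - 1, max_id))
            for lo in range(1, max_id + 1, chunk_size)]
    processed = 0
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(process_job, job) for job in jobs]
        for fut in as_completed(futures):
            processed += fut.result()
    return processed

print(run_pool(10_000))  # → 10000
```

Because the ID ranges are disjoint, no two threads ever write to the same rows (and, with page-local ID assignment, ideally not to the same pages either).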

This worked really well in previous versions of PostgreSQL (tested up to
PG17). However, in PG18, during the multi-threaded processing, some of
my submitted jobs, in this case run against a small OpenStreetMap Italy
extract from Geofabrik, suddenly take > 1 hour to finish (up to 6 hours
for this small extract), even though similar jobs from the same
processing step finish in less than 10 seconds (and the affected jobs
should as well). This seems to happen somewhat randomly: many
multi-threading tasks before and after the affected processing steps
finish normally.

When this happens, I observe the following things:

- High processor activity: even though the jobs that should finish in
seconds take hours, the cores show high usage the whole time.

- pgAdmin shows all sessions created by the Python threads as 'active',
with *no* wait events attached.

- The pg_locks table does not show locking conflicts; all locks are
granted. I did notice, however, that the relation / table locks were not
"fastpath" locks but ordinary ones, while all other locks taken, e.g. on
indexes related to the same table, were fastpath. I don't know if this
has any relevance though; from what I read about the difference, it
shouldn't account for such a big gap, certainly not seconds to hours.

- Please note that the processing DOES eventually proceed, so it is not
an infinite deadlock where I need to kill my Python code. It just takes
hours to resolve.

- Switching to "io_method = sync" seems to resolve this issue, and I do
not observe some jobs of the same batch getting "stuck". This is the
behavior I was used to seeing in <=PG17.
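The pg_locks observations above can be tabulated mechanically. Here is a minimal sketch, assuming the locktype, mode, granted and fastpath columns were already fetched from pg_locks (e.g. via psycopg2); the summarize_locks helper and the sample rows are invented for illustration:

```python
from collections import Counter

# Hypothetical sketch: summarize rows fetched elsewhere with e.g.
#   SELECT locktype, mode, granted, fastpath FROM pg_locks;
# The sample rows below are invented for illustration.

def summarize_locks(rows):
    """Count pg_locks rows by (locktype, mode, fastpath-or-regular, state)."""
    summary = Counter()
    for locktype, mode, granted, fastpath in rows:
        summary[(locktype, mode,
                 "fastpath" if fastpath else "regular",
                 "granted" if granted else "waiting")] += 1
    return dict(summary)

sample = [
    ("relation", "RowExclusiveLock", True, False),  # table lock, not fastpath
    ("relation", "AccessShareLock", True, True),    # index lock, fastpath
    ("virtualxid", "ExclusiveLock", True, True),
]

for key, n in sorted(summarize_locks(sample).items()):
    print(key, n)
```

A summary like this makes it easy to spot the pattern reported above: everything granted, with only the relation-level locks falling off the fastpath.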

I am not too familiar with all the internals of PostgreSQL and the new
AIO framework and its "io workers". However, it seems there may be some
sort of locking issue between io workers that can occasionally happen in
PG18 with "io_method = worker"? Is anyone else observing similar issues
in heavily multi-threaded processing workflows?

Marco

#2 Markus KARG
markus@headcrashing.eu
In reply to: Marco Boeringa (#1)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

I am not a PostgreSQL contributor and have no clue what the actual
technical details are in the new AIO code, but reading your report the
following questions came to my mind:

* Does the failure also happen with io_method=io_uring? If not, that is
proof that it is really bound to io_method=worker, not to AIO in general.

* Does the failure also happen with io_method=worker when your Python
code uses only 22 cores and PostgreSQL uses only 22 workers (so Python
and PostgreSQL do not share CPU cores)? If not, it might indicate that
the problem could be solved by raising the scheduling priority in favor
of PostgreSQL, hinting to the scheduler that a CPU core should be given
to PostgreSQL FIRST, as Python is most likely waiting on it to continue
while PostgreSQL cannot continue because the scheduler gave all the
cores to Python... (a classic starvation scenario; it eventually
resolves once enough CPU cores are free to finish the starving thread).

HTH

-Markus

On 05.10.2025 at 10:55, Marco Boeringa wrote:

#3 Thom Brown
thom@linux.com
In reply to: Marco Boeringa (#1)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

On Sun, 5 Oct 2025, 10:52 Marco Boeringa, <marco@boeringa.demon.nl> wrote:

So, to confirm, you get the issue with as little as 3 io_workers?

Also, what is pg_aios telling you during this time?

Thom

#4 Marco Boeringa
marco@boeringa.demon.nl
In reply to: Thom Brown (#3)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Thom,

I have now also witnessed this issue with "io_method = sync", so it may
not be related to the number of workers set. I initially thought it did
not occur with 'sync', as two runs completed successfully without
delays; however, the last run did show the issue. Unfortunately, this is
a very complex multi-stage geoprocessing workflow that cannot easily be
cut down to a single-SQL-statement reproducible case. And for the
specific Italy extract it takes about 7 hours to complete if the run is
successful and without the observed delays, so each test run costs
considerable time whenever I adjust anything.

There is also a PostGIS upgrade in the mix (3.5.2 to 3.6.0) that may or
may not be involved, as that version of PostGIS is the minimum for PG18.
I see a 3.6.1 is already planned, and I will need to re-test with that
version once it is released. I definitely do use PostGIS functions at
the stage where the processing gets heavily delayed.

As to the question about the pg_aios view, which I wasn't aware of: it
appears to be empty at that point, but I will need to confirm that
observation. With my last run, the moment I looked at the view, some of
the very delayed multi-threaded jobs (> 6.5 hours instead of 10
seconds!) started slowly returning one by one, although some were still
waiting / stuck for some time before all had returned, so the pg_aios
view being empty is probably still representative of the stuck
situation.

Note that I also adjust the storage parameters of the tables involved to
force more aggressive vacuuming to avoid transaction ID wraparound
(which shouldn't be an issue anyway with the small Italy test extract).
This has all proven pretty reliable in the past and with previous
PostgreSQL / PostGIS releases, up to the multi-billion record Facebook
Daylight tables noted in the previous post. There is also no PostgreSQL
partitioning involved in any of this; these are ordinary tables.
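The kind of per-table storage-parameter adjustment mentioned above might look like the following sketch. The helper name and the parameter values are illustrative assumptions, not the actual settings used in this workflow; the parameter names themselves are real PostgreSQL per-table storage parameters:

```python
# Hypothetical sketch of per-table autovacuum tuning. The values below
# are illustrative assumptions, not the poster's actual settings.

def aggressive_vacuum_sql(table, scale_factor=0.01, threshold=1000):
    """Build an ALTER TABLE statement that makes autovacuum fire
    much earlier than the defaults for the given table."""
    return (
        f"ALTER TABLE {table} SET ("
        f"autovacuum_vacuum_scale_factor = {scale_factor}, "
        f"autovacuum_vacuum_threshold = {threshold})"
    )

print(aggressive_vacuum_sql("osm.landcover_scrubs_small_scale_2_ply"))
```

In the real workflow such a statement would be executed once per table through the same pyodbc/psycopg2 connection used for the UPDATE jobs.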

Marco

On 05-10-2025 at 12:51, Thom Brown wrote:

#5 Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#4)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Thom,

I realized that my observation of the pg_aios view being empty was
likely made with the "io_method = sync" option set, which I guess
doesn't use or fill the pg_aios view? Can you confirm the pg_aios view
is unused with "io_method = sync"? This aspect is not documented in the
PostgreSQL documentation. Anyway, I will need to re-test with 'worker'
set.

I do see Tomas Vondra mentioning that even the 'sync' option in PG18
still goes "through the AIO infrastructure", but what that exactly
means, also in relation to the pg_aios view, I don't know:

https://vondra.me/posts/tuning-aio-in-postgresql-18/

Marco

On 05-10-2025 at 21:57, Marco Boeringa wrote:

#6 Marco Boeringa
marco@boeringa.demon.nl
In reply to: Thom Brown (#3)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Thom,

As an extension to what I already wrote: since the processing gets stuck
during UPDATEs, I realized the pg_aios view is likely not involved, as
the current AIO implementation in PG18 only covers read operations such
as sequential scans and bitmap heap scans.

So not seeing anything listed in the pg_aios view might be normal? That
said, I have attempted to view and refresh the view during other stages
of the processing, with pgAdmin apparently showing read operations, but
still no records displayed in pg_aios. Maybe I am hitting the "refresh"
button at the wrong time though...

But maybe the new AIO infrastructure isn't involved at all, and this is
another issue in PG18. Just to summarize my observations once again:

- Multi-threaded processing implemented in Python using pyodbc and
concurrent.futures apparently gets stuck waiting for PostgreSQL to
return. The processing step involved should return in ***less than 10
seconds*** for the small Italy extract, but can take > 1 hour (up to
> 6 hours) when it randomly gets stuck (some runs are successful
without delay, others not).

- pgAdmin showing all sessions associated with the threads as 'Active'
with no wait events nor blocking PIDs during the whole time the
processing appears stuck in PostgreSQL.

- No other sessions like VACUUM visible in pgAdmin during the time the
processing appears stuck except the main 'postgres' user session.

- All locks as shown in pg_locks are granted, and most if not all are
fastpath, with only AccessShareLock and RowExclusiveLock on the table
and its indexes involved. A couple of ExclusiveLock on virtualxid and
transactionid.

- 'Top' in Ubuntu showing multiple backend 'postgres' processes
continuously at high core usage, one for each thread (each Python thread
of course uses its own connection).

- pg_aios view empty, but the processing is UPDATEs, so probably no
surprise.

- The processing *DOES* eventually continue after this particular
anomaly, with no further consequences and expected results at the end of
the total processing flow, so it is not a true dead-lock.

- I have noticed it gets stuck when processing OpenStreetMap scrub or
grassland from the Geofabrik Italy extract. However, as written above,
some processing runs are fine on the same data, while others get stuck
and delayed. The issue may or may not involve PostGIS, considering this
and the fact that the processing step that gets stuck uses PostGIS
functions.

- In pgAdmin, the SQL statements generated by my geoprocessing workflow,
as being processed by PostgreSQL while the processing is stuck, look
like this:

UPDATE osm.landcover_scrubs_small_scale_2_ply AS t1
SET area_geo = t2.area_geo,
    perim_geo = t2.perim_geo,
    compact_geo = CASE WHEN t2.area_geo > 0
        THEN ((power(t2.perim_geo, 2) / t2.area_geo) / (4 * pi()))
        ELSE 0 END,
    npoints_geo = t2.npoints_geo,
    comp_npoints_geo = CASE WHEN t2.npoints_geo > 0
        THEN (CASE WHEN t2.area_geo > 0
                   THEN ((power(t2.perim_geo, 2) / t2.area_geo) / (4 * pi()))
                   ELSE 0 END / t2.npoints_geo)
        ELSE 0 END,
    convex_ratio_geo = CASE WHEN ST_Area(ST_ConvexHull(way)::geography, true) > 0
        THEN (t2.area_geo / ST_Area(ST_ConvexHull(way)::geography, true))
        ELSE 1 END
FROM (SELECT objectid,
             ST_Area(way::geography, true) AS area_geo,
             ST_Perimeter(way::geography, true) AS perim_geo,
             ST_NPoints(way) AS npoints_geo
      FROM osm.landcover_scrubs_small_scale_2_ply) AS t2
WHERE (t2.objectid = t1.objectid)
  AND t1.objectid IN (SELECT t3.objectid FROM mini_test.osm.osm_tmp_28128_ch5 AS t3)

- All of this worked fine in PG <= 17.

Marco


#7 Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#6)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Markus,

On my Ubuntu virtual machine, io_uring cannot be started. Setting
"io_method = io_uring" and trying to restart the cluster fails; it will
not start, and I have attempted this multiple times. Only 'sync' and
'worker' allow restarting after modifying the PostgreSQL configuration
file.

As I understand it, the PostgreSQL binary needs to be compiled with the
proper support; maybe my version on Ubuntu 24.04, running as a Windows
Hyper-V virtual machine, doesn't have it. Although I did notice when
installing PG18 from Synaptic that it installed an additional 'liburing'
package, or something named like that, if I remember correctly...

As to your question about Python and scheduling conflict: this is not
the case. Python runs on the Windows host, not under Ubuntu inside the
VM. I only have PostgreSQL installed on Ubuntu, as I use it with
osm2pgsql there. I access the PostgreSQL instance via pyodbc or psycopg2
on the Windows host, so it is like a remote database server, just
running on local hardware.

Marco

I am not a PostgreSQL contributor and have no clue what the actual
technical details are in the new AIO code, but reading your report the
following questions came to my mind:

* Does the failure also happen with io_method = io_uring? If not, that
is proof that it is really bound to io_method = worker, not to AIO in
general.


* Does the failure also happen with io_method = worker when your Python
code uses only 22 cores and PostgreSQL uses only 22 workers (so Python
and PostgreSQL do not share CPU cores)? If not, it might indicate that
the problem could be solved by adjusting the scheduling policy in favor
of PostgreSQL, hinting to the scheduler that a CPU core should be given
to PostgreSQL FIRST, as Python is most likely waiting on it to continue,
while PostgreSQL could not continue because the scheduler gave all the
cores to Python... (a classic deadlock; it eventually resolves once
enough CPU cores are free to finish the starving thread).

HTH

-Markus

#8Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#5)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi,

On 2025-10-05 22:22:32 +0200, Marco Boeringa wrote:

I realized that my observation of the pg_aios view being empty was likely
with the "io_method = sync" option set, which I guess doesn't use or fill
the pg_aios view? Can you confirm the pg_aios view is unused with
"io_method = sync"? This aspect is not documented in the PostgreSQL help.
Anyway, I will need to re-test with 'worker' set.

pg_aios is populated even with io_method = sync, albeit with at most one
entry per backend.

If there were no entries in pg_aios at the time of your hang, it doesn't seem
likely - although not impossible - for AIO to be responsible.

Greetings,

Andres Freund

#9Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#1)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi,

On 2025-10-05 10:55:01 +0200, Marco Boeringa wrote:

This has worked really well in previous versions of PostgreSQL (tested up to
PG17). However, in PG18, during the multi-threaded processing, I see some of
my submitted jobs that in this case were run against a small OpenStreetMap
Italy extract of Geofabrik, all of a sudden take > 1 hour to finish (up to 6
hours for this small extract), even though similar jobs from the same
processing step, finish in less than 10 seconds (and the other jobs should
as well). This seems to happen kind of "random". Many multi-threading tasks
before and after the affected processing steps, do finish normally.

When this happens, I observe the following things:

- High processor activity, even though the jobs that should finish in
seconds, take hours, all the while showing the high core usage.

- PgAdmin shows all sessions created by the Python threads as 'active', with
*no* wait events attached.

I think we need CPU profiles of these tasks. If something is continually
taking a lot more CPU than expected, that seems like an issue worth
investigating.

Greetings,

Andres Freund

#10Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#9)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

I should have phrased it better. The high processor and core activity is
not unexpected. The code is designed to saturate the processor and
maximize throughput by careful design of the Python threading stuff. It
is just that all the jobs sent to PostgreSQL via ODBC for this specific
step in the processing, with the small Italy extract, should return in
less than 10 seconds (which they do in those lucky runs where I do not
observe the issue), but some of them don't: e.g. 30 jobs return within
10 seconds, then the remaining 14 unexpectedly get stuck for 2 hours
before returning, all the while staying at the same high core usage they
were initiated with.

So some of the PostgreSQL database sessions, as I already explained,
show up in pgAdmin as 'active' with no wait events or blocking pids;
they simply take an excessive amount of time, but will ultimately return.

The CPU time, as witnessed with 'top' in Ubuntu, is really spent in
PostgreSQL and the database sessions, not Python, which runs on Windows
and doesn't show high CPU usage in the Windows Task Manager.

This doesn't always happen, it is kind of random. One run with the Italy
data will be OK, the next not.

Marco


#11Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#10)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi,

On 2025-10-06 18:09:25 +0200, Marco Boeringa wrote:

I should have phrased it better. The high processor and core activity is not
unexpected. The code is designed to saturate the processor and maximize
throughput by careful design of the Python threading stuff. It is just that
all the jobs sent to PostgreSQL via ODBC for this specific step in the
processing, with the small Italy extract, should return in less than 10
seconds (which they do in those lucky runs where I do not observe the issue),
but some of them don't: e.g. 30 jobs return within 10 seconds, then the
remaining 14 unexpectedly get stuck for 2 hours before returning, all the
while staying at the same high core usage they were initiated with.

So some of the PostgreSQL database sessions, as I already explained, show up
in pgAdmin as 'active' with no wait events or blocking pids; they simply take
an excessive amount of time, but will ultimately return.

We need a profile of those processes while they use excessive amount of
time. If they don't have wait events they're using CPU time, and seeing a
profile of where all that time is spent might provide enough information where
to look in more detail.

Greetings,

Andres Freund

#12Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#11)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

I am not really a Linux / Ubuntu expert. Can you give me a suggestion
for how to create such a CPU profile for the specific PostgreSQL
processes getting stuck?

Marco


#13Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#12)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi,

On 2025-10-06 18:17:11 +0200, Marco Boeringa wrote:

Hi Andres,

I am not really a Linux / Ubuntu expert. Can you give me a suggestion for
how to create such a CPU profile for the specific PostgreSQL processes
getting stuck?

https://wiki.postgresql.org/wiki/Profiling_with_perf is a good starting point.

Greetings,

Andres
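
Following that wiki page, the capture for one stuck backend might look like this (a sketch; the pid 12345 is a hypothetical value taken from pg_stat_activity for a stuck session, and perf must be installed):

```shell
# Sketch: profile one stuck PostgreSQL backend for 30 seconds.
# Replace 12345 with the pid of a stuck 'active' session.
sudo perf record -g -p 12345 -- sleep 30
# Summarize where the CPU time was spent:
sudo perf report --stdio | head -n 50
```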

#14Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#13)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

Thanks for the suggestion, this seems a useful option.

However, when I attempt to run "perf top" in a Terminal window, I get
the following warning:

WARNING: perf not found for kernel 6.14.0-1012

I also see a suggestion to install the Azure linux-tools. However, if I
type 'linux-tools' as search keyword in Synaptic Package Manager, I see
a whole bunch of 'linux-tools', e.g. azure/aws/gcp/gke, which also
include kernel version build numbers (at least that is what I assume
they are).

What version do you suggest I install for an ordinary locally running
Ubuntu 24.04 VM?

And do these packages indeed add the perf command?

Marco


#15Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#14)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi,

On 2025-10-06 19:01:37 +0200, Marco Boeringa wrote:

Thanks for the suggestion, this seems a useful option.

However, when I attempt to run "perf top" in a Terminal window, I get the
following warning:

WARNING: perf not found for kernel 6.14.0-1012

I also see a suggestion to install the Azure linux-tools. However, if I type
'linux-tools' as search keyword in Synaptic Package Manager, I see a whole
bunch of 'linux-tools', e.g. azure/aws/gcp/gke, which also include kernel
version build numbers (at least that is what I assume they are).

What version do you suggest I install for an ordinary locally running Ubuntu
24.04 VM?

There are meta-packages to install linux-tools for the right
version. E.g. linux-tools-virtual. Unfortunately ubuntu has multiple "kernel
variants" (like -virtual) that you still have to choose between.

You can figure out which base kernel you have with "dpkg -l|grep linux" or
such.

Greetings,

Andres
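
Deriving the matching package name from the running kernel could be sketched like this (the version string in the comment is a hypothetical example; the package names are the usual Ubuntu linux-tools naming pattern):

```shell
# Sketch: derive linux-tools package name candidates for the running kernel.
kver=$(uname -r)        # e.g. "6.14.0-1012-azure"
flavour=${kver##*-}     # strip everything up to the last '-', e.g. "azure"
echo "candidate packages: linux-tools-${kver} linux-tools-${flavour}"
```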

#16Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#15)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

I now noticed that all the suggested 'linux-tools' packages that popped
up in the warning message when I attempted to run "perf top", the ones
referencing Azure, are already displayed as installed in Synaptic
Package Manager. I guess it makes sense that the packages, and likely
the kernel of my machine, are for Azure, as it is a Windows Hyper-V
virtual machine created with the Microsoft-provided Ubuntu install option.

However, if the packages are installed, why can't I run perf? Or do I
need a 'linux-tools'-specific command for that instead of perf?

Marco


#17Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#15)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

I now found out that I do have a 'perf' living under one of the '/usr'
folders, but unfortunately, this is the 6.8 kernel version:

/usr/lib/linux-tools-6.8.0-85

None of the other suggested packages and their likely install folders
seem to contain perf.

Since perf, rather understandably, appears to need to exactly match the
kernel version, I can't use this one, as my kernel was already upgraded
to 6.14 by a more or less forced update in Software Updater. It is a
pain that the linux-tools-common package, which I suspect is the source
of the 6.8 'perf' version and folder and is tagged as that version in
Synaptic, isn't updated at the same time to allow you to run 'perf' with
the proper version.

I guess I will need to wait for an update of it.

Marco


#18Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#17)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

On 2025-10-06 22:41:31 +0200, Marco Boeringa wrote:

Hi Andres,

I now found out that I do have a 'perf' living under one of 'usr' folders,
but unfortunately, this is the 6.8 kernel version:

/usr/lib/linux-tools-6.8.0-85

None of the other suggested packages and their likely install folders seem
to contain perf.

Since perf appears and rather understandably seems to need to exactly match
the kernel version, I can't use this one, as my kernel was already upgraded
to 6.14 by a more or less forced update in Software Updater.

I'm pretty sure that you can use any halfway-recent perf binary, they don't
actually need to match exactly. I don't know why ubuntu insists on a perfect
match. I regularly run completely different versions.

Greetings,

Andres Freund

#19Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#18)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

It didn't work: as soon as I attempted to run perf, it emitted the
warning message about the kernel version mismatch, with suggestions of
packages to install.

However, after further digging, I now realized that Ubuntu usually has
multiple kernel versions installed. I have now enabled the GRUB boot
menu, which should allow me to boot with the older 6.8 version of the
kernel (which was available during configuration of GRUB), and hopefully
run perf with that version of the kernel.

Marco


#20Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#19)
Re: Potential "AIO / io workers" inter-worker locking issue in PG18?

Hi Andres,

That worked, I successfully booted with kernel 6.8!

I can now run perf, but it emits a warning, see below. Do you have
suggestions of how to set these perf 'paranoid' settings?

Marco

Access to performance monitoring and observability operations is limited.
Consider adjusting the /proc/sys/kernel/perf_event_paranoid setting to open
access to performance monitoring and observability operations for processes
without the CAP_PERFMON, CAP_SYS_PTRACE or CAP_SYS_ADMIN Linux capability.
More information can be found in the 'Perf events and tool security' document:
https://www.kernel.org/doc/html/latest/admin-guide/perf-security.html
perf_event_paranoid setting is 4:
  -1: Allow use of (almost) all events by all users
      Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow raw and ftrace function tracepoint access
>= 1: Disallow CPU event access
>= 2: Disallow kernel profiling
To make the adjusted perf_event_paranoid setting permanent, preserve it
in /etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
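
The setting the warning refers to can be relaxed like this (a sketch; -1 opens up full profiling, including kernel symbols, and should probably be reverted afterwards):

```shell
# Sketch: allow profiling without extra capabilities (root required).
sudo sysctl kernel.perf_event_paranoid=-1
# To persist across reboots, as the warning itself suggests:
# echo 'kernel.perf_event_paranoid = -1' | sudo tee -a /etc/sysctl.conf
```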


#21Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#20)
#22Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#21)
#23Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#22)
#24Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#22)
#25Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#24)
#26Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#25)
#27Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#26)
#28Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#27)
#29Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#28)
#30Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#28)
#31Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#28)
#32Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#31)
#33Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#30)
#34Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#32)
#35Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#33)
#36Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#35)
#37Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#36)
#38Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#37)
#39Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#36)
#40Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#36)
#41Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#40)
#42Andres Freund
andres@anarazel.de
In reply to: Marco Boeringa (#41)
#43Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#42)
#44Marco Boeringa
marco@boeringa.demon.nl
In reply to: Andres Freund (#36)
#45Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#44)
#46David Rowley
dgrowleyml@gmail.com
In reply to: Marco Boeringa (#45)
#47Marco Boeringa
marco@boeringa.demon.nl
In reply to: David Rowley (#46)
#48Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#47)
#49Marco Boeringa
marco@boeringa.demon.nl
In reply to: Marco Boeringa (#47)
#50David Rowley
dgrowleyml@gmail.com
In reply to: Marco Boeringa (#49)
#51Marco Boeringa
marco@boeringa.demon.nl
In reply to: David Rowley (#50)
#52David Rowley
dgrowleyml@gmail.com
In reply to: Marco Boeringa (#51)
#53Marco Boeringa
marco@boeringa.demon.nl
In reply to: David Rowley (#52)
#54David Rowley
dgrowleyml@gmail.com
In reply to: Marco Boeringa (#53)
#55Marco Boeringa
marco@boeringa.demon.nl
In reply to: David Rowley (#54)
#56David Rowley
dgrowleyml@gmail.com
In reply to: Marco Boeringa (#55)