pgBadger and postgres_fdw

Started by Colin 't Hart · 3 months ago · 6 messages · general
#1Colin 't Hart
colinthart@gmail.com

Hi,

One of my clients makes extensive use of postgres_fdw. After a migration
performance isn't great. pgBadger reports show the slowest queries all
being `fetch 100 from c2`.

Anyone have any tricks for being able to associate those fetches with the
queries that were used when declaring the server-side cursor?

Thanks,

Colin

#2Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Colin 't Hart (#1)
Re: pgBadger and postgres_fdw

On 1/21/26 00:18, Colin 't Hart wrote:

Hi,

One of my clients makes extensive use of postgres_fdw. After a migration
performance isn't great. pgBadger reports show the slowest queries all
being `fetch 100 from c2`.

Anyone have any tricks for being able to associate those fetches with
the queries that were used when declaring the server-side cursor?

This is going to need a lot more information. To start:

1) Migration of what and from what version to what version?

2) Where are the Postgres databases relative to each other on the network?

3) What versions of Postgres, if not covered in 1?

4) If Postgres was what was being updated, was an ANALYZE run on the
instances?

5) Show a complete query using EXPLAIN ANALYZE.

6) Define slow.


--
Adrian Klaver
adrian.klaver@aklaver.com

#3Colin 't Hart
colinthart@gmail.com
In reply to: Adrian Klaver (#2)
Re: pgBadger and postgres_fdw

1. Migration from one server to another. Newer OS (Debian 12 vs Ubuntu
20.04), same version of Postgres (17).
2. postgres_fdw is to different databases within the same cluster.
3. 17
4. No new analyze was done; migration was achieved by moving the disks
between the virtual servers. We reindexed all text indexes to allow for the
new glibc version on Debian 12.
5. That's the thing: I have no idea which queries the `fetch 100 from c2`
are associated with because the `c2` seems to be reused for each query. The
psycopg python library generates unique server-side cursor names, but
postgres_fdw doesn't.
6. The 19 slowest queries in a 4 hour period are between 2 and 37 minutes,
with an average of over 10 minutes; they are all `fetch 100 from c2`.

The slowness itself isn't my question here; it was caused by having too few
cores in the new environment, while the application was still assuming the
higher core count and generating too many concurrent processes.

My question is how to identify which connections / queries from
postgres_fdw are generating the `fetch 100 from c2` queries, which, in
turn, may quite possibly lead to a feature request for having these named
uniquely.
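
One partial workaround I can sketch (assuming access to the remote database) is to snapshot pg_stat_activity while a slow fetch is running and note the pid and backend_xid; those can then be matched against log entries for the DECLARE issued earlier in the same backend and transaction:

```sql
-- Snapshot remote sessions currently executing a postgres_fdw fetch.
-- pid and backend_xid can be matched against logged DECLARE statements
-- from the same backend / transaction.
SELECT pid, backend_xid, xact_start, state, query
FROM pg_stat_activity
WHERE query ILIKE 'fetch % from c%';
```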

Thanks,

Colin


#4Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Colin 't Hart (#3)
Re: pgBadger and postgres_fdw

On 1/21/26 08:12, Colin 't Hart wrote:

6. The 19 slowest queries in a 4 hour period are between 2 and 37
minutes, with an average of over 10 minutes; they are all `fetch 100
from c2`.

The slowness itself isn't my question here; it was caused by having too
few cores in the new environment, while the application was still
assuming the higher core count and generating too many concurrent processes.

My question is how to identify which connections / queries from
postgres_fdw are generating the `fetch 100 from c2` queries, which, in
turn, may quite possibly lead to a feature request for having these
named uniquely.

My guess is no.

See:

https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c

Starting at line ~5212

fetch_size = 100;

and ending at line ~5234

/* Construct command to fetch rows from remote. */
snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c%u",
fetch_size, cursor_number);

So c2 is a cursor number.


--
Adrian Klaver
adrian.klaver@aklaver.com

#5Adrian Klaver
adrian.klaver@aklaver.com
In reply to: Adrian Klaver (#4)
Re: pgBadger and postgres_fdw

On 1/21/26 08:59, Adrian Klaver wrote:

On 1/21/26 08:12, Colin 't Hart wrote:

6. The 19 slowest queries in a 4 hour period are between 2 and 37
minutes, with an average of over 10 minutes; they are all `fetch 100
from c2`.

The slowness itself isn't my question here; it was caused by having
too few cores in the new environment, while the application was still
assuming the higher core count and generating too many concurrent
processes.

My question is how to identify which connections / queries from
postgres_fdw are generating the `fetch 100 from c2` queries, which, in
turn, may quite possibly lead to a feature request for having these
named uniquely.

My guess is no.

See:

https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c

Starting at line ~5212

fetch_size = 100;

and ending at line ~5234

/* Construct command to fetch rows from remote. */
snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c%u",
         fetch_size, cursor_number);

So c2 is a cursor number.

If I am following this, it is something postgres_fdw does to fetch the
results in batches, so all postgres_fdw queries will include these FETCH
statements.

FYI, the fetch_size can be changed, see here:

https://www.postgresql.org/docs/17/postgres-fdw.html#POSTGRES-FDW-CONFIGURATION-PARAMETERS

F.36.1.4. Remote Execution Options
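
For example (the server and table names below are placeholders), fetch_size
can be raised at the server or foreign-table level:

```sql
-- Raise the batch size for all foreign tables on this server
-- ('remote_srv' is a placeholder name).
ALTER SERVER remote_srv OPTIONS (ADD fetch_size '1000');

-- Or override it for a single foreign table.
ALTER FOREIGN TABLE big_remote_table OPTIONS (ADD fetch_size '10000');
```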

If you want connection/query information I would enable from here:

https://www.postgresql.org/docs/17/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

log_connections

log_disconnections

And at least temporarily:

log_statement = 'all'

The above will generate a lot of log output, so you don't want to keep it
set for too long.
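
A minimal sketch of turning these on cluster-wide without a restart (all
three settings can be applied with a reload):

```sql
-- Enable connection and statement logging, then reload the config.
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_statement = 'all';   -- temporary: very verbose
SELECT pg_reload_conf();
```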


--
Adrian Klaver
adrian.klaver@aklaver.com

#6Laurenz Albe
laurenz.albe@cybertec.at
In reply to: Colin 't Hart (#3)
Re: pgBadger and postgres_fdw

On Wed, 2026-01-21 at 17:12 +0100, Colin 't Hart wrote:

My question is how to identify which connections / queries from postgres_fdw are
generating the `fetch 100 from c2` queries, which, in turn, may quite possibly
lead to a feature request for having these named uniquely.

I would investigate that on the remote database.

If the user that postgres_fdw uses to connect is remote_user, you could

ALTER ROLE remote_user SET log_min_duration_statement = 0;

Then any statements executed through postgres_fdw would be logged.

If you have %x in log_line_prefix, you can find the DECLARE statement that declared
the cursor that takes so long to fetch. Not very comfortable, but it should work.
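
As a sketch, a prefix like the following (an example, adjust to taste)
includes the backend pid (%p) and the transaction id (%x), so the DECLARE
and the later FETCH from the same remote transaction can be matched up:

```
# postgresql.conf -- example prefix; %p = backend pid, %x = transaction id
log_line_prefix = '%m [%p] %u@%d xid=%x '
```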

Yours,
Laurenz Albe