Parallel workers stats in pg_stat_database
Hi hackers,
This patch introduces four new columns in pg_stat_database:
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
The intent is to help administrators evaluate the usage of parallel
workers in their databases and help size max_worker_processes,
max_parallel_workers or max_parallel_maintenance_workers.
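For instance, with the patch applied, one could compare planned and
launched workers with a query along these lines (a sketch; the column
names come from this patch, everything else is stock pg_stat_database):

```sql
-- A launch ratio well below 100% suggests max_parallel_workers
-- (or max_worker_processes) is a bottleneck.
SELECT datname,
       parallel_workers_planned,
       parallel_workers_launched,
       round(100.0 * parallel_workers_launched
             / NULLIF(parallel_workers_planned, 0), 1) AS pct_launched
FROM pg_stat_database
WHERE datname IS NOT NULL;
```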
Here is a test script:
psql << _EOF_
-- Index creation
DROP TABLE IF EXISTS test_pql;
CREATE TABLE test_pql(i int, j int);
INSERT INTO test_pql SELECT x,x FROM generate_series(1,1000000) as F(x);
-- 0 planned / 0 launched
EXPLAIN (ANALYZE)
SELECT 1;
-- 2 planned / 2 launched
EXPLAIN (ANALYZE)
SELECT i, avg(j) FROM test_pql GROUP BY i;
SET max_parallel_workers TO 1;
-- 4 planned / 1 launched
EXPLAIN (ANALYZE)
SELECT i, avg(j) FROM test_pql GROUP BY i
UNION
SELECT i, avg(j) FROM test_pql GROUP BY i;
RESET max_parallel_workers;
-- 1 planned / 1 launched
CREATE INDEX ON test_pql(i);
SET max_parallel_workers TO 0;
-- 1 planned / 0 launched
CREATE INDEX ON test_pql(j);
-- 1 planned / 0 launched
CREATE INDEX ON test_pql(i, j);
SET maintenance_work_mem TO '96MB';
RESET max_parallel_workers;
-- 2 planned / 2 launched
VACUUM (VERBOSE) test_pql;
SET max_parallel_workers TO 1;
-- 2 planned / 1 launched
VACUUM (VERBOSE) test_pql;
-- TOTAL: parallel workers: 6 planned / 3 launched
-- TOTAL: parallel maint workers: 7 planned / 4 launched
_EOF_
And the output in pg_stat_database on a fresh server, without any
configuration change except those made in the script:
[local]:5445 postgres@postgres=# SELECT datname,
parallel_workers_planned, parallel_workers_launched,
parallel_maint_workers_planned, parallel_maint_workers_launched
FROM pg_stat_database WHERE datname = 'postgres' \gx
-[ RECORD 1 ]-------------------+---------
datname | postgres
parallel_workers_planned | 6
parallel_workers_launched | 3
parallel_maint_workers_planned | 7
parallel_maint_workers_launched | 4
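Note that, like the other pg_stat_database counters, these values
accumulate until the statistics are reset, so a clean baseline for a
test run can be taken first:

```sql
-- Reset all pg_stat_database counters for the current database
-- (this clears the unrelated counters too, so use with care).
SELECT pg_stat_reset();
```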
Thanks to: Jehan-Guillaume de Rorthais, Guillaume Lelarge and Franck
Boudehen for the help and motivation boost.
---
Benoit Lobréau
Consultant
http://dalibo.com
Attachments:
0001-Adds-four-parallel-workers-stat-columns-to-pg_stat_d.patch (text/x-patch)
From cabe5e8ed2e88d9cf219161394396ebacb33a5a0 Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Wed, 28 Aug 2024 02:27:13 +0200
Subject: [PATCH] Adds four parallel workers stat columns to pg_stat_database
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 36 ++++++++++++++++++++
src/backend/access/brin/brin.c | 4 +++
src/backend/access/nbtree/nbtsort.c | 4 +++
src/backend/catalog/system_views.sql | 4 +++
src/backend/commands/vacuumparallel.c | 5 +++
src/backend/executor/nodeGather.c | 5 +++
src/backend/executor/nodeGatherMerge.c | 5 +++
src/backend/utils/activity/pgstat_database.c | 36 ++++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 12 +++++++
src/include/catalog/pg_proc.dat | 20 +++++++++++
src/include/pgstat.h | 7 ++++
src/test/regress/expected/rules.out | 4 +++
12 files changed, 142 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..8c4b11c11d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3508,6 +3508,42 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..9eceb87b52 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2540,6 +2540,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index f5d7b3b0c3..232e1a0942 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1611,6 +1611,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..48bf9e5535 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,10 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(D.oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(D.oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(D.oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 22c057fe61..f7603a0863 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -435,6 +435,11 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
{
Assert(!IsParallelWorker());
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) pvs->pcxt->nworkers_to_launch,
+ (PgStat_Counter) pvs->pcxt->nworkers_launched
+ );
+
/* Copy the updated statistics */
for (int i = 0; i < pvs->nindexes; i++)
{
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5d4ffe989c..2cfcb1411e 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -36,6 +36,7 @@
#include "executor/tqueue.h"
#include "miscadmin.h"
#include "optimizer/optimizer.h"
+#include "pgstat.h"
#include "utils/wait_event.h"
@@ -182,6 +183,10 @@ ExecGather(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) pcxt->nworkers_to_launch,
+ (PgStat_Counter) pcxt->nworkers_launched);
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index 45f6017c29..3dc3231511 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -21,6 +21,7 @@
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "optimizer/optimizer.h"
+#include "pgstat.h"
/*
* When we read tuples from workers, it's a good idea to read several at once
@@ -223,6 +224,10 @@ ExecGatherMerge(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) pcxt->nworkers_to_launch,
+ (PgStat_Counter) pcxt->nworkers_launched);
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..9e72c286b2 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,38 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_planned and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_planned += parallel_workers_planned;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
+/*
+ * reports parallel_maint_workers_planned and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_planned += parallel_maint_workers_planned;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +457,10 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 3221137123..377a0f6453 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,18 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_planned)
+
+/* pg_stat_get_db_parallel_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
+/* pg_stat_get_db_parallel_maint_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_planned)
+
+/* pg_stat_get_db_parallel_maint_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..b1cd4fa1b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5751,6 +5751,26 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned for queries',
+ proname => 'pg_stat_get_db_parallel_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_planned' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_planned' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f63159c55c..3bb3e045d1 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -383,6 +383,11 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_planned;
+ PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_planned;
+ PgStat_Counter parallel_maint_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -578,6 +583,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..e8a4453cd5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1861,6 +1861,10 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
--
2.45.2
Hi,
This is a new patch which:
* fixes some typos
* changes the ExecGather / ExecGatherMerge code so that the stats are
accumulated in EState and inserted into pg_stat_database only once,
during ExecutorEnd
* adds tests (very ugly, but I could get the parallel plan to be stable
across make check executions)
On 8/28/24 17:10, Benoit Lobréau wrote:
--
Benoit Lobréau
Consultant
http://dalibo.com
Attachments:
0001-Adds-four-parallel-workers-stat-columns-to-pg_stat_d.patch_v2 (text/plain)
From 0338cfb11ab98594b2f16d143b505e269566bb6e Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Wed, 28 Aug 2024 02:27:13 +0200
Subject: [PATCH] Adds four parallel workers stat columns to pg_stat_database
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 36 ++++++++++++++++++++
src/backend/access/brin/brin.c | 4 +++
src/backend/access/nbtree/nbtsort.c | 4 +++
src/backend/catalog/system_views.sql | 4 +++
src/backend/commands/vacuumparallel.c | 5 +++
src/backend/executor/execMain.c | 5 +++
src/backend/executor/execUtils.c | 3 ++
src/backend/executor/nodeGather.c | 3 ++
src/backend/executor/nodeGatherMerge.c | 3 ++
src/backend/utils/activity/pgstat_database.c | 36 ++++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 12 +++++++
src/include/catalog/pg_proc.dat | 20 +++++++++++
src/include/nodes/execnodes.h | 3 ++
src/include/pgstat.h | 7 ++++
src/test/regress/expected/rules.out | 4 +++
src/test/regress/expected/stats.out | 17 +++++++++
src/test/regress/sql/stats.sql | 14 ++++++++
17 files changed, 180 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..8c4b11c11d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3508,6 +3508,42 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..9eceb87b52 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2540,6 +2540,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index f5d7b3b0c3..232e1a0942 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1611,6 +1611,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..48bf9e5535 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,10 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(D.oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(D.oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(D.oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 22c057fe61..f7603a0863 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -435,6 +435,11 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
{
Assert(!IsParallelWorker());
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) pvs->pcxt->nworkers_to_launch,
+ (PgStat_Counter) pvs->pcxt->nworkers_launched
+ );
+
/* Copy the updated statistics */
for (int i = 0; i < pvs->nindexes; i++)
{
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 29e186fa73..baeb88629a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -51,6 +51,7 @@
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -480,6 +481,10 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) estate->es_workers_planned,
+ (PgStat_Counter) estate->es_workers_launched);
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 5737f9f4eb..5919902075 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -162,6 +162,9 @@ CreateExecutorState(void)
estate->es_jit_flags = 0;
estate->es_jit = NULL;
+ estate->es_workers_launched = 0;
+ estate->es_workers_planned = 0;
+
/*
* Return the executor state structure
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5d4ffe989c..1271a0f7d1 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -182,6 +182,9 @@ ExecGather(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index 45f6017c29..677c450c3d 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -223,6 +223,9 @@ ExecGatherMerge(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..9e72c286b2 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,38 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_planned and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_planned += parallel_workers_planned;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
+/*
+ * reports parallel_maint_workers_planned and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_planned += parallel_maint_workers_planned;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +457,10 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 3221137123..377a0f6453 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,18 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_planned)
+
+/* pg_stat_get_db_parallel_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
+/* pg_stat_get_db_parallel_maint_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_planned)
+
+/* pg_stat_get_db_parallel_maint_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..b1cd4fa1b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5751,6 +5751,26 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned for queries',
+ proname => 'pg_stat_get_db_parallel_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_planned' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_planned' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index af7d8fd1e7..1903ad60f8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -724,6 +724,9 @@ typedef struct EState
*/
List *es_insert_pending_result_relations;
List *es_insert_pending_modifytables;
+
+ int es_workers_launched;
+ int es_workers_planned;
} EState;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f63159c55c..bad74a9f2d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -383,6 +383,11 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_planned;
+ PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_planned;
+ PgStat_Counter parallel_maint_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -578,6 +583,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..e8a4453cd5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1861,6 +1861,10 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/stats.out b/src/test/regress/expected/stats.out
index 6e08898b18..88d283a991 100644
--- a/src/test/regress/expected/stats.out
+++ b/src/test/regress/expected/stats.out
@@ -32,6 +32,11 @@ SELECT t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch,
pg_catalog.pg_statio_user_tables AS b
WHERE t.relname='tenk2' AND b.relname='tenk2';
COMMIT;
+SELECT sum(parallel_workers_planned) AS parallel_workers_planned_before,
+ sum(parallel_workers_launched) AS parallel_workers_launched_before,
+ sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
-- test effects of TRUNCATE on n_live_tup/n_dead_tup counters
CREATE TABLE trunc_stats_test(id serial);
CREATE TABLE trunc_stats_test1(id serial, stuff text);
@@ -862,6 +867,18 @@ WHERE pg_stat_get_backend_pid(beid) = pg_backend_pid();
t
(1 row)
+-- Test that parallel workers stats are updated in pg_stat_database
+SELECT
+ sum(parallel_workers_planned) > :'parallel_workers_planned_before' AS wrk_planned,
+ sum(parallel_workers_launched) > :'parallel_workers_launched_before' AS wrk_launched,
+ sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+ wrk_planned | wrk_launched | maint_wrk_planned | maint_wrk_launched
+-------------+--------------+-------------------+--------------------
+ t | t | t | t
+(1 row)
+
-----
-- Test that resetting stats works for reset timestamp
-----
diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql
index d8ac0d06f4..3a59c75539 100644
--- a/src/test/regress/sql/stats.sql
+++ b/src/test/regress/sql/stats.sql
@@ -32,6 +32,12 @@ SELECT t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch,
WHERE t.relname='tenk2' AND b.relname='tenk2';
COMMIT;
+SELECT sum(parallel_workers_planned) AS parallel_workers_planned_before,
+ sum(parallel_workers_launched) AS parallel_workers_launched_before,
+ sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
+
-- test effects of TRUNCATE on n_live_tup/n_dead_tup counters
CREATE TABLE trunc_stats_test(id serial);
CREATE TABLE trunc_stats_test1(id serial, stuff text);
@@ -442,6 +448,14 @@ SELECT (current_schemas(true))[1] = ('pg_temp_' || beid::text) AS match
FROM pg_stat_get_backend_idset() beid
WHERE pg_stat_get_backend_pid(beid) = pg_backend_pid();
+-- Test that parallel workers stats are updated in pg_stat_database
+SELECT
+ sum(parallel_workers_planned) > :'parallel_workers_planned_before' AS wrk_planned,
+ sum(parallel_workers_launched) > :'parallel_workers_launched_before' AS wrk_launched,
+ sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+
-----
-- Test that resetting stats works for reset timestamp
-----
--
2.45.2
Hi,
This new version avoids updating the stats for non-parallel queries.
I noticed that the tests are still not stable. I tried using tenk2
but failed to get stable plans. I'd love to have pointers on that front.
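For reference, what I have been toying with follows the usual trick from
select_parallel.sql of zeroing the parallel costs (sketch only; even with
these settings the plan shape did not stay stable across runs here):

```sql
-- Encourage parallel plans regardless of table size, the way
-- select_parallel.sql does, then check the plan shape.
BEGIN;
SET LOCAL parallel_setup_cost = 0;
SET LOCAL parallel_tuple_cost = 0;
SET LOCAL min_parallel_table_scan_size = 0;
SET LOCAL max_parallel_workers_per_gather = 4;
EXPLAIN (COSTS OFF) SELECT count(*) FROM tenk2;
ROLLBACK;
```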
--
Benoit Lobréau
Consultant
http://dalibo.com
Attachments:
0001-Adds-four-parallel-workers-stat-columns-to-pg_stat_d.patch_v3 (text/plain)
From 5e4401c865f77ed447b8b3f25aac0ffa9af0d700 Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Wed, 28 Aug 2024 02:27:13 +0200
Subject: [PATCH] Adds four parallel workers stat columns to pg_stat_database
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 36 ++++++++++++++++++++
src/backend/access/brin/brin.c | 4 +++
src/backend/access/nbtree/nbtsort.c | 4 +++
src/backend/catalog/system_views.sql | 4 +++
src/backend/commands/vacuumparallel.c | 5 +++
src/backend/executor/execMain.c | 7 ++++
src/backend/executor/execUtils.c | 3 ++
src/backend/executor/nodeGather.c | 3 ++
src/backend/executor/nodeGatherMerge.c | 3 ++
src/backend/utils/activity/pgstat_database.c | 36 ++++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 12 +++++++
src/include/catalog/pg_proc.dat | 20 +++++++++++
src/include/nodes/execnodes.h | 3 ++
src/include/pgstat.h | 7 ++++
src/test/regress/expected/rules.out | 4 +++
src/test/regress/expected/stats.out | 17 +++++++++
src/test/regress/sql/stats.sql | 14 ++++++++
17 files changed, 182 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..8c4b11c11d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3508,6 +3508,42 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..9eceb87b52 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2540,6 +2540,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index f5d7b3b0c3..232e1a0942 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1611,6 +1611,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..48bf9e5535 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,10 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(D.oid) as parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(D.oid) as parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) as parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 22c057fe61..f7603a0863 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -435,6 +435,11 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
{
Assert(!IsParallelWorker());
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) pvs->pcxt->nworkers_to_launch,
+ (PgStat_Counter) pvs->pcxt->nworkers_launched
+ );
+
/* Copy the updated statistics */
for (int i = 0; i < pvs->nindexes; i++)
{
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 29e186fa73..9bf5f57b65 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -51,6 +51,7 @@
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -480,6 +481,12 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ if (estate->es_workers_planned > 0) {
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) estate->es_workers_planned,
+ (PgStat_Counter) estate->es_workers_launched);
+ }
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 5737f9f4eb..5919902075 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -162,6 +162,9 @@ CreateExecutorState(void)
estate->es_jit_flags = 0;
estate->es_jit = NULL;
+ estate->es_workers_launched = 0;
+ estate->es_workers_planned = 0;
+
/*
* Return the executor state structure
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5d4ffe989c..1271a0f7d1 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -182,6 +182,9 @@ ExecGather(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index 45f6017c29..677c450c3d 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -223,6 +223,9 @@ ExecGatherMerge(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..9e72c286b2 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,38 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_planned and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_planned += parallel_workers_planned;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
+/*
+ * reports parallel_maint_workers_planned and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_planned += parallel_maint_workers_planned;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +457,10 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 3221137123..377a0f6453 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,18 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_planned*/
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_planned)
+
+/* pg_stat_get_db_parallel_workers_launched*/
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
+/* pg_stat_get_db_parallel_maint_workers_planned*/
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_planned)
+
+/* pg_stat_get_db_parallel_maint_workers_launched*/
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..b1cd4fa1b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5751,6 +5751,26 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned for queries',
+ proname => 'pg_stat_get_db_parallel_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_planned' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_planned' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index af7d8fd1e7..1903ad60f8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -724,6 +724,9 @@ typedef struct EState
*/
List *es_insert_pending_result_relations;
List *es_insert_pending_modifytables;
+
+ int es_workers_launched;
+ int es_workers_planned;
} EState;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f63159c55c..bad74a9f2d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -383,6 +383,11 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_planned;
+ PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_planned;
+ PgStat_Counter parallel_maint_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -578,6 +583,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..e8a4453cd5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1861,6 +1861,10 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/stats.out b/src/test/regress/expected/stats.out
index 6e08898b18..88d283a991 100644
--- a/src/test/regress/expected/stats.out
+++ b/src/test/regress/expected/stats.out
@@ -32,6 +32,11 @@ SELECT t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch,
pg_catalog.pg_statio_user_tables AS b
WHERE t.relname='tenk2' AND b.relname='tenk2';
COMMIT;
+SELECT sum(parallel_workers_planned) AS parallel_workers_planned_before,
+ sum(parallel_workers_launched) AS parallel_workers_launched_before,
+ sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
-- test effects of TRUNCATE on n_live_tup/n_dead_tup counters
CREATE TABLE trunc_stats_test(id serial);
CREATE TABLE trunc_stats_test1(id serial, stuff text);
@@ -862,6 +867,18 @@ WHERE pg_stat_get_backend_pid(beid) = pg_backend_pid();
t
(1 row)
+-- Test that parallel workers stats are updated in pg_stat_database
+SELECT
+ sum(parallel_workers_planned) > :'parallel_workers_planned_before' AS wrk_planned,
+ sum(parallel_workers_launched) > :'parallel_workers_launched_before' AS wrk_launched,
+ sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+ wrk_planned | wrk_launched | maint_wrk_planned | maint_wrk_launched
+-------------+--------------+-------------------+--------------------
+ t | t | t | t
+(1 row)
+
-----
-- Test that resetting stats works for reset timestamp
-----
diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql
index d8ac0d06f4..3a59c75539 100644
--- a/src/test/regress/sql/stats.sql
+++ b/src/test/regress/sql/stats.sql
@@ -32,6 +32,12 @@ SELECT t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch,
WHERE t.relname='tenk2' AND b.relname='tenk2';
COMMIT;
+SELECT sum(parallel_workers_planned) AS parallel_workers_planned_before,
+ sum(parallel_workers_launched) AS parallel_workers_launched_before,
+ sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
+
-- test effects of TRUNCATE on n_live_tup/n_dead_tup counters
CREATE TABLE trunc_stats_test(id serial);
CREATE TABLE trunc_stats_test1(id serial, stuff text);
@@ -442,6 +448,14 @@ SELECT (current_schemas(true))[1] = ('pg_temp_' || beid::text) AS match
FROM pg_stat_get_backend_idset() beid
WHERE pg_stat_get_backend_pid(beid) = pg_backend_pid();
+-- Test that parallel workers stats are updated in pg_stat_database
+SELECT
+ sum(parallel_workers_planned) > :'parallel_workers_planned_before' AS wrk_planned,
+ sum(parallel_workers_launched) > :'parallel_workers_launched_before' AS wrk_launched,
+ sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+
-----
-- Test that resetting stats works for reset timestamp
-----
--
2.45.2
Hi,
On Tue, Sep 03, 2024 at 02:34:06PM +0200, Benoit Lobréau wrote:
> I noticed that the tests are still not stable. I tried using tenk2
> but fail to have stable plans. I'd love to have pointers on that front.
What about moving the tests to places where it's "guaranteed" to get
parallel workers involved? For example, a "parallel_maint_workers" only test
could be done in vacuum_parallel.sql.
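Something along these lines could work (untested sketch, reusing the \gset
pattern from your patch and the parallel_vacuum_table relation that
vacuum_parallel.sql already sets up):

```sql
-- Snapshot the maintenance-worker counters, force a parallel vacuum,
-- then verify that both counters moved.
SELECT sum(parallel_maint_workers_planned) AS planned_before,
       sum(parallel_maint_workers_launched) AS launched_before
FROM pg_stat_database \gset

SET min_parallel_index_scan_size = 0;
VACUUM (PARALLEL 2) parallel_vacuum_table;
SELECT pg_stat_force_next_flush();

SELECT sum(parallel_maint_workers_planned) > :'planned_before' AS maint_planned,
       sum(parallel_maint_workers_launched) > :'launched_before' AS maint_launched
FROM pg_stat_database;
```

That way the test does not depend on plan stability at all, only on the
vacuum actually requesting workers.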
Regards,
--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
On 9/4/24 08:46, Bertrand Drouvot wrote:
> What about moving the tests to places where it's "guaranteed" to get
> parallel workers involved? For example, a "parallel_maint_workers" only test
> could be done in vacuum_parallel.sql.
Thank you! I was too focused on the stats part and missed the obvious.
It's indeed better with this file.
... Which led me to discover that the place I chose to gather the stats
(parallel_vacuum_end) is wrong: it only captures the workers allocated
for parallel_vacuum_cleanup_all_indexes(), not those for
parallel_vacuum_bulkdel_all_indexes().
Back to the drawing board...
--
Benoit Lobréau
Consultant
http://dalibo.com
Here is an updated patch fixing the aforementioned problems
with the tests and the vacuum stats.
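To eyeball the counters after exercising the patch, a simple query over the
four new columns is enough (example only, using the columns this patch adds):

```sql
-- Inspect the new per-database parallel worker counters.
SELECT datname,
       parallel_workers_planned,
       parallel_workers_launched,
       parallel_maint_workers_planned,
       parallel_maint_workers_launched
FROM pg_stat_database
WHERE datname = current_database();
```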
--
Benoit Lobréau
Consultant
http://dalibo.com
Attachments:
0001-Adds-four-parallel-workers-stat-columns-to-pg_stat_d.patch_v4 (text/plain)
From 1b52f5fb4e977599b8925c69193d31148042ca7d Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Wed, 28 Aug 2024 02:27:13 +0200
Subject: [PATCH] Adds four parallel workers stat columns to pg_stat_database
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 36 +++++++++++++++++++
src/backend/access/brin/brin.c | 4 +++
src/backend/access/nbtree/nbtsort.c | 4 +++
src/backend/catalog/system_views.sql | 4 +++
src/backend/commands/vacuumparallel.c | 5 +++
src/backend/executor/execMain.c | 7 ++++
src/backend/executor/execUtils.c | 3 ++
src/backend/executor/nodeGather.c | 3 ++
src/backend/executor/nodeGatherMerge.c | 3 ++
src/backend/utils/activity/pgstat_database.c | 36 +++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 12 +++++++
src/include/catalog/pg_proc.dat | 20 +++++++++++
src/include/nodes/execnodes.h | 3 ++
src/include/pgstat.h | 7 ++++
src/test/regress/expected/rules.out | 4 +++
src/test/regress/expected/select_parallel.out | 27 ++++++++++++++
src/test/regress/expected/vacuum_parallel.out | 19 ++++++++++
src/test/regress/sql/select_parallel.sql | 15 ++++++++
src/test/regress/sql/vacuum_parallel.sql | 11 ++++++
19 files changed, 223 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..8c4b11c11d 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3508,6 +3508,42 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..9eceb87b52 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2540,6 +2540,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index f5d7b3b0c3..232e1a0942 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1611,6 +1611,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..48bf9e5535 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,10 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(D.oid) as parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(D.oid) as parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) as parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 22c057fe61..e3a64290e5 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -737,6 +737,11 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) pvs->pcxt->nworkers_to_launch,
+ (PgStat_Counter) pvs->pcxt->nworkers_launched
+ );
}
/*
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 29e186fa73..9bf5f57b65 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -51,6 +51,7 @@
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -480,6 +481,12 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ if (estate->es_workers_planned > 0) {
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) estate->es_workers_planned,
+ (PgStat_Counter) estate->es_workers_launched);
+ }
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 5737f9f4eb..5919902075 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -162,6 +162,9 @@ CreateExecutorState(void)
estate->es_jit_flags = 0;
estate->es_jit = NULL;
+ estate->es_workers_launched = 0;
+ estate->es_workers_planned = 0;
+
/*
* Return the executor state structure
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5d4ffe989c..1271a0f7d1 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -182,6 +182,9 @@ ExecGather(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index 45f6017c29..677c450c3d 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -223,6 +223,9 @@ ExecGatherMerge(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_workers_launched += pcxt->nworkers_launched;
+ estate->es_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..9e72c286b2 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,38 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_planned and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_planned += parallel_workers_planned;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
+/*
+ * reports parallel_maint_workers_planned and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_planned += parallel_maint_workers_planned;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +457,10 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 3221137123..377a0f6453 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,18 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_planned)
+
+/* pg_stat_get_db_parallel_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
+/* pg_stat_get_db_parallel_maint_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_planned)
+
+/* pg_stat_get_db_parallel_maint_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..b1cd4fa1b0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5751,6 +5751,26 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned for queries',
+ proname => 'pg_stat_get_db_parallel_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_planned' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_planned' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index af7d8fd1e7..1903ad60f8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -724,6 +724,9 @@ typedef struct EState
*/
List *es_insert_pending_result_relations;
List *es_insert_pending_modifytables;
+
+ int es_workers_launched;
+ int es_workers_planned;
} EState;
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f63159c55c..bad74a9f2d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -383,6 +383,11 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_planned;
+ PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_planned;
+ PgStat_Counter parallel_maint_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -578,6 +583,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 862433ee52..e8a4453cd5 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1861,6 +1861,10 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index c565407082..7c196a38f4 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -1,6 +1,17 @@
--
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_planned as parallel_workers_planned_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
begin;
@@ -1386,3 +1397,19 @@ reset debug_parallel_query;
drop function set_and_report_role();
drop function set_role_and_error(int);
drop role regress_parallel_worker;
+-- Check parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select :'parallel_workers_planned_before' < parallel_workers_planned as wrk_planned_changed,
+ :'parallel_workers_launched_before' < parallel_workers_launched as wrk_launched_changed
+from pg_stat_database
+where datname = 'regression';
+ wrk_planned_changed | wrk_launched_changed
+---------------------+----------------------
+ t | t
+(1 row)
+
diff --git a/src/test/regress/expected/vacuum_parallel.out b/src/test/regress/expected/vacuum_parallel.out
index ddf0ee544b..4c9c5cb1fd 100644
--- a/src/test/regress/expected/vacuum_parallel.out
+++ b/src/test/regress/expected/vacuum_parallel.out
@@ -37,9 +37,28 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
2
(1 row)
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+SELECT sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+ maint_wrk_planned | maint_wrk_launched
+-------------------+--------------------
+ t | t
+(1 row)
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index 22384b5ad8..dd96446b0d 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -2,6 +2,14 @@
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+select parallel_workers_planned as parallel_workers_planned_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
+
+
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
@@ -543,3 +551,10 @@ reset debug_parallel_query;
drop function set_and_report_role();
drop function set_role_and_error(int);
drop role regress_parallel_worker;
+
+-- Check parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+select :'parallel_workers_planned_before' < parallel_workers_planned as wrk_planned_changed,
+ :'parallel_workers_launched_before' < parallel_workers_launched as wrk_launched_changed
+from pg_stat_database
+where datname = 'regression';
diff --git a/src/test/regress/sql/vacuum_parallel.sql b/src/test/regress/sql/vacuum_parallel.sql
index 1d23f33e39..d0d7851e6a 100644
--- a/src/test/regress/sql/vacuum_parallel.sql
+++ b/src/test/regress/sql/vacuum_parallel.sql
@@ -31,10 +31,21 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
pg_relation_size(oid) >=
pg_size_bytes(current_setting('min_parallel_index_scan_size'));
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
+
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+SELECT sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
--
2.45.2
On Tue, Sep 17, 2024 at 02:22:59PM +0200, Benoit Lobréau wrote:
Here is an updated patch fixing the aforementioned problems
with tests and vacuum stats.
Your patch needs a rebase.
+ Number of parallel workers obtained by utilities on this database
s/obtained/launched/ for consistency?
I like the general idea of the patch because it is rather difficult
now to know how to tune these parameters. If I were to prioritize the
two ideas, the possibility of being able to look at the number of
workers launched vs requested in the executor is higher, and I'm less
a fan of the addition for utilities because these are less common
operations. So I'd suggest splitting the patch into two pieces, one
for each, if we do that at database level, but..
Actually, could we do better than what's proposed here? How about
presenting an aggregate of this data in pg_stat_statements for each
query instead? The ExecutorEnd() hook has an access to the executor
state, so the number of workers planned and launched could be given by
the execution nodes to the estate, then fed back to
pg_stat_statements. You are already doing most of the work with the
introduction of es_workers_launched and es_workers_planned.
If you want to get the data across a database, then just sum up the
counters for all the queries, applying a filter with the number of
calls, for example.
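To make the suggestion concrete, such an aggregation could look
something like the sketch below. This is hypothetical: it assumes the
companion pg_stat_statements patch adds parallel_workers_planned and
parallel_workers_launched columns, which is not the case in any
released version.

```sql
-- Hypothetical columns from the pg_stat_statements companion patch.
SELECT d.datname,
       sum(s.parallel_workers_planned)  AS workers_planned,
       sum(s.parallel_workers_launched) AS workers_launched
FROM pg_stat_statements s
JOIN pg_database d ON d.oid = s.dbid
WHERE s.calls > 100        -- filter on the number of calls, as suggested
GROUP BY d.datname;
```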
--
Michael
Hi,
Thanks for your input! I will fix the doc as proposed and do the split
as soon as I have time.
On 10/1/24 09:27, Michael Paquier wrote:
I'm less
a fan of the addition for utilities because these are less common
operations.
My thought process was that, in order to size max_parallel_workers, we
need information on both the maintenance parallel workers and the
"query" parallel workers.
Actually, could we do better than what's proposed here? How about
presenting an aggregate of this data in pg_stat_statements for each
query instead?
I think both features are useful.
My colleagues and I had a discussion about what could be done to improve
parallelism observability in PostgreSQL [0]. We thought about several
places to do it for several use cases.
Guillaume Lelarge worked on pg_stat_statements [1] and
pg_stat_user_[tables|indexes] [2]. I proposed a patch for the logs [3].
As a consultant, I frequently work on installation without
pg_stat_statements and I cannot install it on the client's production
in the timeframe of my intervention.
pg_stat_database is available everywhere and can easily be sampled by
collectors/supervision services (like check_pgactivity).
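For instance, a collector could run something like the query below at
regular intervals and diff successive samples (the column names are the
ones proposed by this patch; only a sketch):

```sql
SELECT now() AS sample_time, datname,
       parallel_workers_planned,
       parallel_workers_launched,
       parallel_maint_workers_planned,
       parallel_maint_workers_launched
FROM pg_stat_database
WHERE datname IS NOT NULL;
```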
Lastly, the numbers would be more precise and easier to make sense of,
since pg_stat_statements has a limited size.
[0]: /messages/by-id/d657df20-c4bf-63f6-e74c-cb85a81d0383@dalibo.com
[1]: /messages/by-id/CAECtzeWtTGOK0UgKXdDGpfTVSa5bd_VbUt6K6xn8P7X+_dZqKw@mail.gmail.com
[2]: /messages/by-id/CAECtzeXXuMkw-RVGTWvHGOJsmFdsRY+jK0ndQa80sw46y2uvVQ@mail.gmail.com
[3]: /messages/by-id/8123423a-f041-4f4c-a771-bfd96ab235b0@dalibo.com
--
Benoit Lobréau
Consultant
http://dalibo.com
On Wed, Oct 02, 2024 at 11:12:37AM +0200, Benoit Lobréau wrote:
My colleagues and I had a discussion about what could be done to improve
parallelism observability in PostgreSQL [0]. We thought about several
places to do it for several use cases. Guillaume Lelarge worked on
pg_stat_statements [1].
Thanks, missed that. I will post a reply there. There is a good
overlap with everything you are doing here, because each one of you
wishes to track more data in the executor state and push it to a
different part of the system, a system view or just an extension.
Tracking the number of workers launched and planned in the executor
state is the strict minimum for a lot of these things, as far as I can
see. Once the nodes are able to push this data, then extensions can
feed on it the way they want. So that's a good idea on its own, and
covers two of the counters posted here:
/messages/by-id/CAECtzeWtTGOK0UgKXdDGpfTVSa5bd_VbUt6K6xn8P7X+_dZqKw@mail.gmail.com
Could you split the patch based on that? I'd recommend moving
es_workers_launched and es_workers_planned closer to the top, say near
es_total_processed, and documenting what these counters are here for.
After that comes the problem of where to push this data..
Lastly, the numbers would be more precise and easier to make sense of,
since pg_stat_statements has a limited size.
That's an upper bound that can be configured.
When looking for query-level patterns or specific SET tuning, using
query-level data speaks more than this data pushed at database level.
TBH, I am +-0 about pushing this data to pg_stat_database so as we
would be able to tune database-level GUCs. That does not help with
SET commands tweaking the number of workers to use. Well, perhaps few
rely on SET and most rely on the system-level GUCs in their
applications, meaning that I'm wrong, making your point about
publishing this data at database-level better, but I'm not really
sure. If others have an opinion, feel free.
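To illustrate the concern about SET: a per-session override like the
one below changes worker consumption for a single query, and a
database-wide counter cannot attribute it to that query (sketch, reusing
the test_pql table from the test script earlier in this thread):

```sql
SET max_parallel_workers_per_gather = 4;  -- session-level override
SELECT i, avg(j) FROM test_pql GROUP BY i;
RESET max_parallel_workers_per_gather;
```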
Anyway, what I am sure of is that publishing the same set of data
everywhere leads to bloat, and I'd rather avoid that. Aggregating
that from the queries to get an impression of the whole database also
offers an equivalent of what would be stored in pg_stat_database,
assuming that the load is steady. Your point about pg_stat_statements
not being set up is also true, even if some cloud vendors enable it by
default.
Table/index-level data can be really interesting because we can
cross-check what's happening for more complex queries if there are
many gather nodes with complex JOINs.
Utilities (vacuum, btree, brin) are straightforward and best tracked at
query level, making pg_stat_statements their best match. And there is
no need for four counters if pushed at this level; two are able to do
the job, as utility and non-utility statements are separated depending
on their PlannedStmt, leading to separate entries in PGSS.
--
Michael
Hey,
On Wed, Oct 2, 2024 at 11:12, Benoit Lobréau <benoit.lobreau@dalibo.com>
wrote:
Hi,
Thanks for your imput ! I will fix the doc as proposed and do the split
as soon as I have time.
I've done the split, but I didn't go any further than that.
Two patches attached:
* v5-0001 adds the metrics (same patch as v3-0001 for pg_stat_statements)
* v5-0002 handles the metrics for pg_stat_database.
"make check" works, and I also did a few other tests without any issues.
Regards.
--
Guillaume.
Attachments:
v5-0001-Introduce-two-new-counters-in-EState.patch
From 9413829616bdc0806970647c15d7d6bbd96489d1 Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Mon, 7 Oct 2024 08:45:36 +0200
Subject: [PATCH v5 1/2] Introduce two new counters in EState
They will be used by two other patches to populate new columns in
pg_stat_database and pg_stat_statements.
---
src/backend/executor/execUtils.c | 3 +++
src/backend/executor/nodeGather.c | 3 +++
src/backend/executor/nodeGatherMerge.c | 3 +++
src/include/nodes/execnodes.h | 3 +++
4 files changed, 12 insertions(+)
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 5737f9f4eb..1908481999 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -162,6 +162,9 @@ CreateExecutorState(void)
estate->es_jit_flags = 0;
estate->es_jit = NULL;
+ estate->es_parallelized_workers_launched = 0;
+ estate->es_parallelized_workers_planned = 0;
+
/*
* Return the executor state structure
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 5d4ffe989c..0fb915175a 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -182,6 +182,9 @@ ExecGather(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_parallelized_workers_launched += pcxt->nworkers_launched;
+ estate->es_parallelized_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index 45f6017c29..149ab23d90 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -223,6 +223,9 @@ ExecGatherMerge(PlanState *pstate)
/* We save # workers launched for the benefit of EXPLAIN */
node->nworkers_launched = pcxt->nworkers_launched;
+ estate->es_parallelized_workers_launched += pcxt->nworkers_launched;
+ estate->es_parallelized_workers_planned += pcxt->nworkers_to_launch;
+
/* Set up tuple queue readers to read the results. */
if (pcxt->nworkers_launched > 0)
{
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index aab59d681c..f898590ece 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -708,6 +708,9 @@ typedef struct EState
bool es_use_parallel_mode; /* can we use parallel workers? */
+ int es_parallelized_workers_launched;
+ int es_parallelized_workers_planned;
+
/* The per-query shared memory area to use for parallel execution. */
struct dsa_area *es_query_dsa;
--
2.46.2
v5-0002-Adds-four-parallel-workers-stat-columns-to-pg_sta.patch
From 873c03b99e1ccc05bac4c71b5f070e0dbe0bd779 Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Mon, 7 Oct 2024 10:12:46 +0200
Subject: [PATCH v5 2/2] Adds four parallel workers stat columns to
pg_stat_database
* parallel_workers_planned
* parallel_workers_launched
* parallel_maint_workers_planned
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 36 +++++++++++++++++++
src/backend/access/brin/brin.c | 4 +++
src/backend/access/nbtree/nbtsort.c | 4 +++
src/backend/catalog/system_views.sql | 4 +++
src/backend/commands/vacuumparallel.c | 5 +++
src/backend/executor/execMain.c | 7 ++++
src/backend/utils/activity/pgstat_database.c | 36 +++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 12 +++++++
src/include/catalog/pg_proc.dat | 20 +++++++++++
src/include/pgstat.h | 7 ++++
src/test/regress/expected/rules.out | 4 +++
src/test/regress/expected/select_parallel.out | 11 ++++++
src/test/regress/expected/vacuum_parallel.out | 19 ++++++++++
src/test/regress/sql/select_parallel.sql | 8 +++++
src/test/regress/sql/vacuum_parallel.sql | 11 ++++++
15 files changed, 188 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..9567ca5bd1 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3611,6 +3611,42 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_planned</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers obtained by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index c0b978119a..4e83091d2c 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2544,6 +2544,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index 5cca0d4f52..8ee5fcf6d3 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1615,6 +1615,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..7e693a5bae 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,10 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(D.oid) as parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(D.oid) as parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) as parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 4fd6574e12..c6f43419aa 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -739,6 +739,11 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) pvs->pcxt->nworkers_to_launch,
+ (PgStat_Counter) pvs->pcxt->nworkers_launched
+ );
}
/*
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index cc9a594cba..e749cdaa17 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -52,6 +52,7 @@
#include "miscadmin.h"
#include "nodes/queryjumble.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -483,6 +484,12 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ if (estate->es_parallelized_workers_planned > 0) {
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) estate->es_parallelized_workers_planned,
+ (PgStat_Counter) estate->es_parallelized_workers_launched);
+ }
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..9e72c286b2 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,38 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_planned and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_planned += parallel_workers_planned;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
+/*
+ * reports parallel_maint_workers_planned and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_planned += parallel_maint_workers_planned;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +457,10 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_planned);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index f7b50e0b5a..2db4d2bb6b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,18 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_planned)
+
+/* pg_stat_get_db_parallel_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
+/* pg_stat_get_db_parallel_maint_workers_planned */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_planned)
+
+/* pg_stat_get_db_parallel_maint_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 77f54a79e6..47dc0f87eb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5803,6 +5803,26 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned for queries',
+ proname => 'pg_stat_get_db_parallel_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_planned' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_planned', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_planned' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index df53fa2d4f..1a5c489bbd 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -387,6 +387,11 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_planned;
+ PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_planned;
+ PgStat_Counter parallel_maint_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -583,6 +588,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_planned, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_planned, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..c795fb68c1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1863,6 +1863,10 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_planned(oid) AS parallel_workers_planned,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_planned(oid) AS parallel_maint_workers_planned,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index 2c63aa85a6..db3ca8b198 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -1,6 +1,17 @@
--
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_planned as parallel_workers_planned_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
begin;
diff --git a/src/test/regress/expected/vacuum_parallel.out b/src/test/regress/expected/vacuum_parallel.out
index ddf0ee544b..4c9c5cb1fd 100644
--- a/src/test/regress/expected/vacuum_parallel.out
+++ b/src/test/regress/expected/vacuum_parallel.out
@@ -37,9 +37,28 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
2
(1 row)
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+SELECT sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+ maint_wrk_planned | maint_wrk_launched
+-------------------+--------------------
+ t | t
+(1 row)
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index 9ba1328fd2..2ff0f7f21c 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -2,6 +2,14 @@
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+select parallel_workers_planned as parallel_workers_planned_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
+
+
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
diff --git a/src/test/regress/sql/vacuum_parallel.sql b/src/test/regress/sql/vacuum_parallel.sql
index 1d23f33e39..d0d7851e6a 100644
--- a/src/test/regress/sql/vacuum_parallel.sql
+++ b/src/test/regress/sql/vacuum_parallel.sql
@@ -31,10 +31,21 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
pg_relation_size(oid) >=
pg_size_bytes(current_setting('min_parallel_index_scan_size'));
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_planned) AS parallel_maint_workers_planned_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
+
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+SELECT sum(parallel_maint_workers_planned) > :'parallel_maint_workers_planned_before' AS maint_wrk_planned,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
--
2.46.2
On 10/7/24 10:19, Guillaume Lelarge wrote:
I've done the split, but I didn't go any further than that.
Thank you Guillaume. I have done the rest of the reformatting
suggested by Michael, but I decided to check if I have similar stuff
in my logging patch and refactor accordingly if needed before posting
the result here.
I hope to finish it this week.
--
Benoit Lobréau
Consultant
http://dalibo.com
On Tue, Oct 8, 2024 at 2:03 PM, Benoit Lobréau <benoit.lobreau@dalibo.com>
wrote:
On 10/7/24 10:19, Guillaume Lelarge wrote:
I've done the split, but I didn't go any further than that.
Thank you Guillaume. I have done the rest of the reformatting
suggested by Michael, but I decided to check if I have similar stuff
in my logging patch and refactor accordingly if needed before posting
the result here. I hope to finish it this week.
FWIW, with the recent commits of the pg_stat_statements patch, you need a
slight change to the patch I sent on this thread. You'll find a patch
attached that does it; apply it after rebasing onto master.
--
Guillaume.
Attachments:
rebase.patch (text/x-patch)
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index e749cdaa17..60a643b08d 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -484,10 +484,10 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
- if (estate->es_parallelized_workers_planned > 0) {
+ if (estate->es_parallel_workers_to_launch > 0) {
pgstat_update_parallel_workers_stats(
- (PgStat_Counter) estate->es_parallelized_workers_planned,
- (PgStat_Counter) estate->es_parallelized_workers_launched);
+ (PgStat_Counter) estate->es_parallel_workers_to_launch,
+ (PgStat_Counter) estate->es_parallel_workers_launched);
}
/*
On 10/11/24 09:33, Guillaume Lelarge wrote:
FWIW, with the recent commits of the pg_stat_statements patch, you need
a slight change to the patch I sent on this thread. You'll find a patch
attached that does it; apply it after rebasing onto master.
Thanks.
Here is an updated version. I modified it to:
* use the same wording in the doc and code (planned => to_launch)
* split the declaration from the rest (matching the code in the
parallel worker logging patch)
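For reference, once the series is applied, something like the following
query can be used to eyeball the new counters and the launch ratio. This
is only a sketch assuming the renamed to_launch columns from this
version; it is not part of the patch:

```sql
-- Sketch only: assumes the V6 column names (planned => to_launch).
SELECT datname,
       parallel_workers_to_launch,
       parallel_workers_launched,
       parallel_maint_workers_to_launch,
       parallel_maint_workers_launched,
       CASE WHEN parallel_workers_to_launch > 0
            THEN round(100.0 * parallel_workers_launched
                       / parallel_workers_to_launch, 1)
       END AS pct_launched
FROM pg_stat_database
WHERE datname IS NOT NULL;
```

A pct_launched well below 100 would suggest raising max_parallel_workers
or max_worker_processes.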
--
Benoit Lobréau
Consultant
http://dalibo.com
Attachments:
V6_0003-Adds-two-parallel-workers-stat-columns-for-utilities.patch (text/x-patch)
From ab9aa1344f974348638dd3898c944f3d5253374d Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Tue, 8 Oct 2024 10:01:52 +0200
Subject: [PATCH 3/3] Adds two parallel workers stat columns for utilities to
pg_stat_database
* parallel_maint_workers_to_launch
* parallel_maint_workers_launched
---
doc/src/sgml/monitoring.sgml | 18 ++++++++++++++++++
src/backend/access/brin/brin.c | 4 ++++
src/backend/access/nbtree/nbtsort.c | 4 ++++
src/backend/catalog/system_views.sql | 2 ++
src/backend/commands/vacuumparallel.c | 6 ++++++
src/backend/utils/activity/pgstat_database.c | 18 ++++++++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 6 ++++++
src/include/catalog/pg_proc.dat | 10 ++++++++++
src/include/pgstat.h | 3 +++
src/test/regress/expected/rules.out | 2 ++
src/test/regress/expected/vacuum_parallel.out | 19 +++++++++++++++++++
src/test/regress/sql/vacuum_parallel.sql | 11 +++++++++++
12 files changed, 103 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 840d7f8161..bd4e4b63c7 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3629,6 +3629,24 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_to_launch</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned to be launched by utilities on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_maint_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers launched by utilities on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index c0b978119a..4e83091d2c 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -2544,6 +2544,10 @@ _brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(brinleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) brinleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) brinleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index 5cca0d4f52..8ee5fcf6d3 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -1615,6 +1615,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Shutdown worker processes */
WaitForParallelWorkersToFinish(btleader->pcxt);
+ pgstat_update_parallel_maint_workers_stats(
+ (PgStat_Counter) btleader->pcxt->nworkers_to_launch,
+ (PgStat_Counter) btleader->pcxt->nworkers_launched);
+
/*
* Next, accumulate WAL usage. (This must wait for the workers to finish,
* or we might get incomplete data.)
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index da9a8fe99f..648166bb3b 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1075,6 +1075,8 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
pg_stat_get_db_parallel_workers_to_launch(D.oid) as parallel_workers_to_launch,
pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_to_launch(D.oid) as parallel_maint_workers_to_launch,
+ pg_stat_get_db_parallel_maint_workers_launched(D.oid) as parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 77679e8df6..edd0823353 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -443,6 +443,12 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
{
Assert(!IsParallelWorker());
+ if (pvs->nworkers_to_launch > 0)
+ pgstat_update_parallel_maint_workers_stats(
+			(PgStat_Counter) pvs->nworkers_to_launch,
+			(PgStat_Counter) pvs->nworkers_launched
+ );
+
/* Copy the updated statistics */
for (int i = 0; i < pvs->nindexes; i++)
{
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index efa3d51408..38daf6d978 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -278,6 +278,22 @@ pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_to_launch,
dbentry->parallel_workers_launched += parallel_workers_launched;
}
+/*
+ * reports parallel_maint_workers_to_launch and parallel_maint_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_to_launch, PgStat_Counter parallel_maint_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_maint_workers_to_launch += parallel_maint_workers_to_launch;
+ dbentry->parallel_maint_workers_launched += parallel_maint_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -443,6 +459,8 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
PGSTAT_ACCUM_DBCOUNT(parallel_workers_to_launch);
PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_to_launch);
+ PGSTAT_ACCUM_DBCOUNT(parallel_maint_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 054c416ab4..13d5ea7a5c 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1045,6 +1045,12 @@ PG_STAT_GET_DBENTRY_INT64(parallel_workers_to_launch)
/* pg_stat_get_db_parallel_workers_launched*/
PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+/* pg_stat_get_db_parallel_maint_workers_to_launch*/
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_to_launch)
+
+/* pg_stat_get_db_parallel_maint_workers_launched*/
+PG_STAT_GET_DBENTRY_INT64(parallel_maint_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fe05b279f2..16cb46f343 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5820,6 +5820,16 @@
proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_parallel_workers_launched' },
+{ oid => '8405',
+ descr => 'statistics: number of parallel workers planned to be launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_to_launch', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_to_launch' },
+{ oid => '8406',
+ descr => 'statistics: number of parallel workers effectively launched for utilities',
+ proname => 'pg_stat_get_db_parallel_maint_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_maint_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index cfba5615a7..3718d31cf6 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -389,6 +389,8 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter parallel_workers_to_launch;
PgStat_Counter parallel_workers_launched;
+ PgStat_Counter parallel_maint_workers_to_launch;
+ PgStat_Counter parallel_maint_workers_launched;
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -587,6 +589,7 @@ extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_to_launch, PgStat_Counter parallel_workers_launched);
+extern void pgstat_update_parallel_maint_workers_stats(PgStat_Counter parallel_maint_workers_to_launch, PgStat_Counter parallel_maint_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 3014d047fe..e696baa8f3 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1865,6 +1865,8 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
pg_stat_get_db_parallel_workers_to_launch(oid) AS parallel_workers_to_launch,
pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
+ pg_stat_get_db_parallel_maint_workers_to_launch(oid) AS parallel_maint_workers_to_launch,
+ pg_stat_get_db_parallel_maint_workers_launched(oid) AS parallel_maint_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/vacuum_parallel.out b/src/test/regress/expected/vacuum_parallel.out
index ddf0ee544b..4973c7bdae 100644
--- a/src/test/regress/expected/vacuum_parallel.out
+++ b/src/test/regress/expected/vacuum_parallel.out
@@ -37,9 +37,28 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
2
(1 row)
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_to_launch) AS parallel_maint_workers_to_launch_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+SELECT sum(parallel_maint_workers_to_launch) > :'parallel_maint_workers_to_launch_before' AS maint_wrk_to_launch,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+ maint_wrk_to_launch | maint_wrk_launched
+---------------------+--------------------
+ t | t
+(1 row)
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
diff --git a/src/test/regress/sql/vacuum_parallel.sql b/src/test/regress/sql/vacuum_parallel.sql
index 1d23f33e39..77ddc807cd 100644
--- a/src/test/regress/sql/vacuum_parallel.sql
+++ b/src/test/regress/sql/vacuum_parallel.sql
@@ -31,10 +31,21 @@ WHERE oid in ('regular_sized_index'::regclass, 'typically_sized_index'::regclass
pg_relation_size(oid) >=
pg_size_bytes(current_setting('min_parallel_index_scan_size'));
+-- Get a reference for parallel stats in pg_stat_database
+SELECT sum(parallel_maint_workers_to_launch) AS parallel_maint_workers_to_launch_before,
+ sum(parallel_maint_workers_launched) AS parallel_maint_workers_launched_before
+FROM pg_stat_database \gset
+
-- Parallel VACUUM with B-Tree page deletions, ambulkdelete calls:
DELETE FROM parallel_vacuum_table;
VACUUM (PARALLEL 4, INDEX_CLEANUP ON) parallel_vacuum_table;
+-- Check parallel stats in pg_stat_database
+SELECT pg_stat_force_next_flush();
+SELECT sum(parallel_maint_workers_to_launch) > :'parallel_maint_workers_to_launch_before' AS maint_wrk_to_launch,
+ sum(parallel_maint_workers_launched) > :'parallel_maint_workers_launched_before' AS maint_wrk_launched
+FROM pg_stat_database;
+
-- Since vacuum_in_leader_small_index uses deduplication, we expect an
-- assertion failure with bug #17245 (in the absence of bugfix):
INSERT INTO parallel_vacuum_table SELECT i FROM generate_series(1, 10000) i;
--
2.46.2
V6_0002-Setup-counters-for-parallel-vacuums.patch (text/x-patch)
From d3319375a66bc9e356c8ad741f047d82a47255e3 Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Fri, 11 Oct 2024 23:56:23 +0200
Subject: [PATCH 2/3] Setup counters for parallel vacuums
This is used by the logging and pg_stat_database patches.
---
src/backend/commands/vacuumparallel.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 4fd6574e12..77679e8df6 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -208,6 +208,9 @@ struct ParallelVacuumState
int nindexes_parallel_cleanup;
int nindexes_parallel_condcleanup;
+ int nworkers_to_launch;
+ int nworkers_launched;
+
/* Buffer access strategy used by leader process */
BufferAccessStrategy bstrategy;
@@ -362,6 +365,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
if ((vacoptions & VACUUM_OPTION_PARALLEL_COND_CLEANUP) != 0)
pvs->nindexes_parallel_condcleanup++;
}
+ pvs->nworkers_to_launch = 0;
+ pvs->nworkers_launched = 0;
+
shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_INDEX_STATS, indstats);
pvs->indstats = indstats;
@@ -739,6 +745,9 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+ pvs->nworkers_to_launch += pvs->pcxt->nworkers_to_launch;
+ pvs->nworkers_launched += pvs->pcxt->nworkers_launched;
}
/*
--
2.46.2
V6_0001-Adds-two-parallel-workers-stat-columns-to-pg_stat_da.patch (text/x-patch)
From f759381fdd2315b9fc3bf7c31f505f80f89a5d90 Mon Sep 17 00:00:00 2001
From: benoit <benoit.lobreau@dalibo.com>
Date: Fri, 11 Oct 2024 17:29:13 +0200
Subject: [PATCH 1/3] Adds two parallel workers stat columns to
pg_stat_database
* parallel_workers_to_launch
* parallel_workers_launched
---
doc/src/sgml/monitoring.sgml | 18 +++++++++++++
src/backend/catalog/system_views.sql | 2 ++
src/backend/executor/execMain.c | 6 +++++
src/backend/utils/activity/pgstat_database.c | 18 +++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 6 +++++
src/include/catalog/pg_proc.dat | 10 +++++++
src/include/pgstat.h | 4 +++
src/test/regress/expected/rules.out | 2 ++
src/test/regress/expected/select_parallel.out | 26 +++++++++++++++++++
src/test/regress/sql/select_parallel.sql | 13 ++++++++++
10 files changed, 105 insertions(+)
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..840d7f8161 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3611,6 +3611,24 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_to_launch</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned to be launched by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers launched by queries on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..da9a8fe99f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,8 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_to_launch(D.oid) as parallel_workers_to_launch,
+ pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index cc9a594cba..e3da94cb10 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -52,6 +52,7 @@
#include "miscadmin.h"
#include "nodes/queryjumble.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -483,6 +484,11 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ if (estate->es_parallel_workers_to_launch > 0)
+ pgstat_update_parallel_workers_stats(
+ (PgStat_Counter) estate->es_parallel_workers_to_launch,
+ (PgStat_Counter) estate->es_parallel_workers_launched);
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..efa3d51408 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,22 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * reports parallel_workers_to_launch and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_to_launch, PgStat_Counter parallel_workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_to_launch += parallel_workers_to_launch;
+ dbentry->parallel_workers_launched += parallel_workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +441,8 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_to_launch);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index f7b50e0b5a..054c416ab4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,12 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_to_launch*/
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_to_launch)
+
+/* pg_stat_get_db_parallel_workers_launched*/
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8876bebde0..fe05b279f2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5810,6 +5810,16 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned to be launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_to_launch', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_to_launch' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched for queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index df53fa2d4f..cfba5615a7 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -387,6 +387,9 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_to_launch;
+ PgStat_Counter parallel_workers_launched;
+
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -583,6 +586,7 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter parallel_workers_to_launch, PgStat_Counter parallel_workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..3014d047fe 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1863,6 +1863,8 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_to_launch(oid) AS parallel_workers_to_launch,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index d17ade278b..d1bb0bd61a 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -1,6 +1,17 @@
--
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_to_launch as parallel_workers_to_launch_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
begin;
@@ -1407,3 +1418,18 @@ CREATE UNIQUE INDEX parallel_hang_idx
SET debug_parallel_query = on;
DELETE FROM parallel_hang WHERE 380 <= i AND i <= 420;
ROLLBACK;
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_to_launch > :'parallel_workers_to_launch_before' AS wrk_to_launch,
+ parallel_workers_launched > :'parallel_workers_launched_before' AS wrk_launched
+from pg_stat_database
+where datname = 'regression';
+ wrk_to_launch | wrk_launched
+---------------+--------------
+ t | t
+(1 row)
+
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index 9ba1328fd2..5da7d6bcc8 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -2,6 +2,13 @@
-- PARALLEL
--
+-- Get a reference for parallel stats in pg_stat_database
+select pg_stat_force_next_flush();
+select parallel_workers_to_launch as parallel_workers_to_launch_before,
+ parallel_workers_launched as parallel_workers_launched_before
+from pg_stat_database
+where datname = 'regression' \gset
+
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
@@ -574,3 +581,9 @@ SET debug_parallel_query = on;
DELETE FROM parallel_hang WHERE 380 <= i AND i <= 420;
ROLLBACK;
+
+select pg_stat_force_next_flush();
+select parallel_workers_to_launch > :'parallel_workers_to_launch_before' AS wrk_to_launch,
+ parallel_workers_launched > :'parallel_workers_launched_before' AS wrk_launched
+from pg_stat_database
+where datname = 'regression';
--
2.46.2
On Sat, Oct 12, 2024 at 01:14:54AM +0200, Benoit Lobréau wrote:
Here is an updated version, I modified it to:
* have the same wording in the doc and code (planned => to_launch)
* split the declaration from the rest (and have the same code as the parallel
worker logging patch)
Thanks for the updated patch set.
I've been thinking about this proposal for the two counters with
pg_stat_database in 0001, and I am going to side with the argument
that it sucks to not have this information except if
pg_stat_statements is enabled on an instance. It would be a different
discussion if PGSS were to be in core, and if that were to happen we
could perhaps remove these counters from pg_stat_database, but there
is no way to be sure that this is going to happen. And this
information is useful for the GUC settings.
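To make that concrete, here is a sketch of the kind of query an administrator could run once these counters are in place to spot databases that fail to launch the workers they ask for (the launch_ratio expression is my own illustration, not part of the patch):

```sql
-- Sketch: compare requested vs. actually launched parallel workers
-- per database.  A ratio well below 1.0 hints that max_parallel_workers
-- or max_worker_processes may be undersized for the workload.
SELECT datname,
       parallel_workers_to_launch,
       parallel_workers_launched,
       round(parallel_workers_launched::numeric
             / NULLIF(parallel_workers_to_launch, 0), 2) AS launch_ratio
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY parallel_workers_to_launch DESC;
```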
+/*
+ * reports parallel_workers_to_launch and parallel_workers_launched into
+ * PgStat_StatDBEntry
+ */
Perhaps a reword with:
"Notify the stats system about parallel worker information."
+/* pg_stat_get_db_parallel_workers_to_launch*/
[...]
+/* pg_stat_get_db_parallel_workers_launched*/
Incorrect comment format, about which pgindent does not complain..
.. But pgindent complains in execMain.c and pgstat_database.c. These
are only nits, the patch is fine. If anybody has objections or
comments, feel free.
Now, I am not really on board with 0002 and 0003 about the tracking of
the maintenance workers, which reflect operations that happen less
often than what 0001 is covering. Perhaps this would have more
value if autovacuum supported parallel operations, though.
--
Michael
On Thu, Nov 07, 2024 at 02:36:58PM +0900, Michael Paquier wrote:
Incorrect comment format, about which pgindent does not complain..
.. But pgindent complains in execMain.c and pgstat_database.c. These
are only nits, the patch is fine. If anybody has objections or
comments, feel free.
Found a few more things, but overall it was fine. Here is what I have
staged on my local branch.
--
Michael
Attachments:
v7-0001-Add-two-attributes-to-pg_stat_database-for-parall.patch
From 960fbe663d4a6a4594b8121bbf299c72ef2a6ab8 Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Fri, 8 Nov 2024 13:00:26 +0900
Subject: [PATCH v7] Add two attributes to pg_stat_database for parallel
workers activity
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Two attributes are added to pg_stat_database:
* parallel_workers_to_launch, counting the total number of parallel
workers that were planned to be launched.
* parallel_workers_launched, counting the total number of parallel
workers actually launched.
The ratio of both fields can provide hints that there are not enough
slots available when launching parallel workers.
This commit relies on de3a2ea3b264, which added two fields to EState
that get incremented when executing Gather or GatherMerge nodes. The
data now gets pushed to pg_stat_database, which is useful to see how much
a database faces contention when spawning parallel workers if
pg_stat_statements is not enabled on an instance.
Bump catalog version.
Author: Benoit Lobréau
Discussion: https://postgr.es/m/783bc7f7-659a-42fa-99dd-ee0565644e25@dalibo.com
---
src/include/catalog/catversion.h | 2 +-
src/include/catalog/pg_proc.dat | 10 +++++++
src/include/pgstat.h | 4 +++
src/backend/catalog/system_views.sql | 2 ++
src/backend/executor/execMain.c | 5 ++++
src/backend/utils/activity/pgstat_database.c | 19 +++++++++++++
src/backend/utils/adt/pgstatfuncs.c | 6 +++++
src/test/regress/expected/rules.out | 2 ++
src/test/regress/expected/select_parallel.out | 27 +++++++++++++++++++
src/test/regress/sql/select_parallel.sql | 14 ++++++++++
doc/src/sgml/monitoring.sgml | 18 +++++++++++++
11 files changed, 108 insertions(+), 1 deletion(-)
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index 2abc523f5c..86436e0356 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -57,6 +57,6 @@
*/
/* yyyymmddN */
-#define CATALOG_VERSION_NO 202411071
+#define CATALOG_VERSION_NO 202411081
#endif
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index f23321a41f..cbbe8acd38 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5813,6 +5813,16 @@
proname => 'pg_stat_get_db_sessions_killed', provolatile => 's',
proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
prosrc => 'pg_stat_get_db_sessions_killed' },
+{ oid => '8403',
+ descr => 'statistics: number of parallel workers planned to be launched by queries',
+ proname => 'pg_stat_get_db_parallel_workers_to_launch', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_to_launch' },
+{ oid => '8404',
+ descr => 'statistics: number of parallel workers effectively launched by queries',
+ proname => 'pg_stat_get_db_parallel_workers_launched', provolatile => 's',
+ proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
+ prosrc => 'pg_stat_get_db_parallel_workers_launched' },
{ oid => '3195', descr => 'statistics: information about WAL archiver',
proname => 'pg_stat_get_archiver', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index df53fa2d4f..59c28b4aca 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -386,6 +386,8 @@ typedef struct PgStat_StatDBEntry
PgStat_Counter sessions_abandoned;
PgStat_Counter sessions_fatal;
PgStat_Counter sessions_killed;
+ PgStat_Counter parallel_workers_to_launch;
+ PgStat_Counter parallel_workers_launched;
TimestampTz stat_reset_timestamp;
} PgStat_StatDBEntry;
@@ -583,6 +585,8 @@ extern void pgstat_report_deadlock(void);
extern void pgstat_report_checksum_failures_in_db(Oid dboid, int failurecount);
extern void pgstat_report_checksum_failure(void);
extern void pgstat_report_connect(Oid dboid);
+extern void pgstat_update_parallel_workers_stats(PgStat_Counter workers_to_launch,
+ PgStat_Counter workers_launched);
#define pgstat_count_buffer_read_time(n) \
(pgStatBlockReadTime += (n))
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..da9a8fe99f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1073,6 +1073,8 @@ CREATE VIEW pg_stat_database AS
pg_stat_get_db_sessions_abandoned(D.oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(D.oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(D.oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_to_launch(D.oid) as parallel_workers_to_launch,
+ pg_stat_get_db_parallel_workers_launched(D.oid) as parallel_workers_launched,
pg_stat_get_db_stat_reset_time(D.oid) AS stats_reset
FROM (
SELECT 0 AS oid, NULL::name AS datname
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index cc9a594cba..5ca856fd27 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -52,6 +52,7 @@
#include "miscadmin.h"
#include "nodes/queryjumble.h"
#include "parser/parse_relation.h"
+#include "pgstat.h"
#include "rewrite/rewriteHandler.h"
#include "tcop/utility.h"
#include "utils/acl.h"
@@ -483,6 +484,10 @@ standard_ExecutorEnd(QueryDesc *queryDesc)
Assert(estate != NULL);
+ if (estate->es_parallel_workers_to_launch > 0)
+ pgstat_update_parallel_workers_stats((PgStat_Counter) estate->es_parallel_workers_to_launch,
+ (PgStat_Counter) estate->es_parallel_workers_launched);
+
/*
* Check that ExecutorFinish was called, unless in EXPLAIN-only mode. This
* Assert is needed because ExecutorFinish is new as of 9.1, and callers
diff --git a/src/backend/utils/activity/pgstat_database.c b/src/backend/utils/activity/pgstat_database.c
index 29bc090974..7757d2ace7 100644
--- a/src/backend/utils/activity/pgstat_database.c
+++ b/src/backend/utils/activity/pgstat_database.c
@@ -262,6 +262,23 @@ AtEOXact_PgStat_Database(bool isCommit, bool parallel)
}
}
+/*
+ * Notify the stats system about parallel worker information.
+ */
+void
+pgstat_update_parallel_workers_stats(PgStat_Counter workers_to_launch,
+ PgStat_Counter workers_launched)
+{
+ PgStat_StatDBEntry *dbentry;
+
+ if (!OidIsValid(MyDatabaseId))
+ return;
+
+ dbentry = pgstat_prep_database_pending(MyDatabaseId);
+ dbentry->parallel_workers_to_launch += workers_to_launch;
+ dbentry->parallel_workers_launched += workers_launched;
+}
+
/*
* Subroutine for pgstat_report_stat(): Handle xact commit/rollback and I/O
* timings.
@@ -425,6 +442,8 @@ pgstat_database_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
PGSTAT_ACCUM_DBCOUNT(sessions_abandoned);
PGSTAT_ACCUM_DBCOUNT(sessions_fatal);
PGSTAT_ACCUM_DBCOUNT(sessions_killed);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_to_launch);
+ PGSTAT_ACCUM_DBCOUNT(parallel_workers_launched);
#undef PGSTAT_ACCUM_DBCOUNT
pgstat_unlock_entry(entry_ref);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index f7b50e0b5a..60a397dc56 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1039,6 +1039,12 @@ PG_STAT_GET_DBENTRY_INT64(sessions_fatal)
/* pg_stat_get_db_sessions_killed */
PG_STAT_GET_DBENTRY_INT64(sessions_killed)
+/* pg_stat_get_db_parallel_workers_to_launch */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_to_launch)
+
+/* pg_stat_get_db_parallel_workers_launched */
+PG_STAT_GET_DBENTRY_INT64(parallel_workers_launched)
+
/* pg_stat_get_db_temp_bytes */
PG_STAT_GET_DBENTRY_INT64(temp_bytes)
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..3014d047fe 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1863,6 +1863,8 @@ pg_stat_database| SELECT oid AS datid,
pg_stat_get_db_sessions_abandoned(oid) AS sessions_abandoned,
pg_stat_get_db_sessions_fatal(oid) AS sessions_fatal,
pg_stat_get_db_sessions_killed(oid) AS sessions_killed,
+ pg_stat_get_db_parallel_workers_to_launch(oid) AS parallel_workers_to_launch,
+ pg_stat_get_db_parallel_workers_launched(oid) AS parallel_workers_launched,
pg_stat_get_db_stat_reset_time(oid) AS stats_reset
FROM ( SELECT 0 AS oid,
NULL::name AS datname
diff --git a/src/test/regress/expected/select_parallel.out b/src/test/regress/expected/select_parallel.out
index d17ade278b..8c31f6460d 100644
--- a/src/test/regress/expected/select_parallel.out
+++ b/src/test/regress/expected/select_parallel.out
@@ -1,6 +1,17 @@
--
-- PARALLEL
--
+-- Save parallel worker stats, used for comparison at the end
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_to_launch as parallel_workers_to_launch_before,
+ parallel_workers_launched as parallel_workers_launched_before
+ from pg_stat_database
+ where datname = current_database() \gset
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
begin;
@@ -1407,3 +1418,19 @@ CREATE UNIQUE INDEX parallel_hang_idx
SET debug_parallel_query = on;
DELETE FROM parallel_hang WHERE 380 <= i AND i <= 420;
ROLLBACK;
+-- Check parallel worker stats
+select pg_stat_force_next_flush();
+ pg_stat_force_next_flush
+--------------------------
+
+(1 row)
+
+select parallel_workers_to_launch > :'parallel_workers_to_launch_before' AS wrk_to_launch,
+ parallel_workers_launched > :'parallel_workers_launched_before' AS wrk_launched
+ from pg_stat_database
+ where datname = current_database();
+ wrk_to_launch | wrk_launched
+---------------+--------------
+ t | t
+(1 row)
+
diff --git a/src/test/regress/sql/select_parallel.sql b/src/test/regress/sql/select_parallel.sql
index 9ba1328fd2..5b4a6e1088 100644
--- a/src/test/regress/sql/select_parallel.sql
+++ b/src/test/regress/sql/select_parallel.sql
@@ -2,6 +2,13 @@
-- PARALLEL
--
+-- Save parallel worker stats, used for comparison at the end
+select pg_stat_force_next_flush();
+select parallel_workers_to_launch as parallel_workers_to_launch_before,
+ parallel_workers_launched as parallel_workers_launched_before
+ from pg_stat_database
+ where datname = current_database() \gset
+
create function sp_parallel_restricted(int) returns int as
$$begin return $1; end$$ language plpgsql parallel restricted;
@@ -574,3 +581,10 @@ SET debug_parallel_query = on;
DELETE FROM parallel_hang WHERE 380 <= i AND i <= 420;
ROLLBACK;
+
+-- Check parallel worker stats
+select pg_stat_force_next_flush();
+select parallel_workers_to_launch > :'parallel_workers_to_launch_before' AS wrk_to_launch,
+ parallel_workers_launched > :'parallel_workers_launched_before' AS wrk_launched
+ from pg_stat_database
+ where datname = current_database();
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..840d7f8161 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3611,6 +3611,24 @@ description | Waiting for a newly initialized WAL file to reach durable storage
</para></entry>
</row>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_to_launch</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers planned to be launched by queries on this database
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>parallel_workers_launched</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of parallel workers launched by queries on this database
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
--
2.45.2
On 11/8/24 05:08, Michael Paquier wrote:
On Thu, Nov 07, 2024 at 02:36:58PM +0900, Michael Paquier wrote:
Incorrect comment format, about which pgindent does not complain..
.. But pgindent complains in execMain.c and pgstat_database.c. These
are only nits, the patch is fine. If anybody has objections or
comments, feel free.
Found a few more things, but overall it was fine. Here is what I have
staged on my local branch.
--
Michael
Hi,
I just reread the patch.
Thanks for the changes. It looks great.
--
Benoit Lobréau
Consultant
http://dalibo.com
On Fri, Nov 08, 2024 at 03:13:35PM +0100, Benoit Lobréau wrote:
I just reread the patch.
Thanks for the changes. It looks great.
Okidoki, applied. If tweaks are necessary depending on the feedback,
like column names, let's tackle things as required. We still have a
good chunk of time for this release cycle.
--
Michael
On 11/11/24 02:51, Michael Paquier wrote:
Okidoki, applied. If tweaks are necessary depending on the feedback,
like column names, let's tackle things as required. We still have a
good chunk of time for this release cycle.
--
Michael
Thanks !
--
Benoit Lobréau
Consultant
http://dalibo.com
Hi,
On Fri, Oct 11, 2024 at 09:33:48AM +0200, Guillaume Lelarge wrote:
FWIW, with the recent commits of the pg_stat_statements patch, you need a
slight change in the patch I sent on this thread. You'll find a patch
attached to do that. You need to apply it after a rebase to master.

- if (estate->es_parallelized_workers_planned > 0) {
+ if (estate->es_parallel_workers_to_launch > 0) {
 	pgstat_update_parallel_workers_stats(
-		(PgStat_Counter) estate->es_parallelized_workers_planned,
-		(PgStat_Counter) estate->es_parallelized_workers_launched);
+		(PgStat_Counter) estate->es_parallel_workers_to_launch,
+		(PgStat_Counter) estate->es_parallel_workers_launched);
I was wondering about the weird new column name workers_to_launch when I
read the commit message - AFAICT this has been an internal term so far,
and this is the first time we expose it to users?
I personally find (parallel_)workers_planned/launched clearer from a
user perspective, was it discussed that we need to follow the internal
terms here? If so, I missed that discussion in this thread (and the
other thread that led to cf54a2c00).
Michael
On 11/12/24 15:05, Michael Banck wrote:
I was wondering about the weird new column name workers_to_launch when I
read the commit message - AFAICT this has been an internal term so far,
and this is the first time we expose it to users?
I personally find (parallel_)workers_planned/launched clearer from a
user perspective, was it discussed that we need to follow the internal
terms here? If so, I missed that discussion in this thread (and the
other thread that led to cf54a2c00).
Michael
I initially called it like that but changed it to mirror the column
name added in pg_stat_statements for consistency's sake. I prefer "planned"
but English is clearly not my strong suit, and I assumed it meant that
the number of workers planned could change before execution. I just
checked in parallel.c and I don't think that's the case; could it be done
elsewhere?
--
Benoit Lobréau
Consultant
http://dalibo.com
Hi,
On Tue, Nov 12, 2024 at 03:56:11PM +0100, Benoit Lobréau wrote:
On 11/12/24 15:05, Michael Banck wrote:
I was wondering about the weird new column name workers_to_launch when I
read the commit message - AFAICT this has been an internal term so far,
and this is the first time we expose it to users?
I personally find (parallel_)workers_planned/launched clearer from a
user perspective, was it discussed that we need to follow the internal
terms here? If so, I missed that discussion in this thread (and the
other thread that led to cf54a2c00).
I initially called it like that but changed it to mirror the column
name added in pg_stat_statements for consistency's sake.
Ah, I mixed up the threads about adding parallel stats to
pg_stat_all_tables and pg_stat_statements - I only reviewed the former,
but in the latter, Michael writes:
|- I've been struggling a bit on the "planned" vs "launched" terms used
|in the names for the counters. It is inconsistent with the backend
|state, where we talk about workers "to launch" and workers "launched".
|"planned" does not really apply to utilities, as this may not be
|planned per se.
I am not sure "backend state" is a good reason (unless it is exposed
somewhere to users?), but the point about utilities does make sense I
guess.
Michael
On 11/12/24 16:24, Michael Banck wrote:
I am not sure "backend state" is a good reason (unless it is exposed
somewhere to users?), but the point about utilities does make sense I
guess.
We only track parallel workers used by queries right now.
Parallel index builds (btree & brin) and vacuum cleanup are not committed
yet since they are not a common occurrence. I implemented them in separate
counters.
--
Benoit Lobréau
Consultant
http://dalibo.com