Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Started by Guillaume Lelarge, over 1 year ago, 20 messages
#1 Guillaume Lelarge
guillaume@lelarge.info
2 attachment(s)

Hello,

This patch was a bit discussed on [1], and with more details on [2]. It
introduces four new columns in pg_stat_all_tables:

* parallel_seq_scan
* last_parallel_seq_scan
* parallel_idx_scan
* last_parallel_idx_scan

and two new columns in pg_stat_all_indexes:

* parallel_idx_scan
* last_parallel_idx_scan

As Benoit said yesterday, the intent is to help administrators evaluate the
usage of parallel workers in their databases and help configuring
parallelization usage.
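
For example, once the patch is applied, an administrator could get a rough
idea of how much of the scan activity is parallelized with something like the
query below (just a sketch, using the column names above; remember that
seq_scan and idx_scan include the parallel scans):

SELECT relname,
       seq_scan,
       parallel_seq_scan,
       round(100.0 * parallel_seq_scan / NULLIF(seq_scan, 0), 1) AS pct_parallel_seq,
       idx_scan,
       parallel_idx_scan,
       round(100.0 * parallel_idx_scan / NULLIF(idx_scan, 0), 1) AS pct_parallel_idx
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;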

A test script (test.sql) is attached. You can execute it with "psql -Xef
test.sql your_database" (your_database should not contain a t1 table as it
will be dropped and recreated).

Here is its result, a bit commented:

DROP TABLE IF EXISTS t1;
DROP TABLE
CREATE TABLE t1 (id integer);
CREATE TABLE
INSERT INTO t1 SELECT generate_series(1, 10_000_000);
INSERT 0 10000000
VACUUM ANALYZE t1;
VACUUM
SELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,
last_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'
-[ RECORD 1 ]----------+---
relname | t1
seq_scan | 0
last_seq_scan |
parallel_seq_scan | 0
last_parallel_seq_scan |

==> no scan at all, the table has just been created

SELECT * FROM t1 LIMIT 1;
id
----
1
(1 row)

SELECT pg_sleep(1);
SELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,
last_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'
-[ RECORD 1 ]----------+------------------------------
relname | t1
seq_scan | 1
last_seq_scan | 2024-08-29 15:43:17.377182+02
parallel_seq_scan | 0
last_parallel_seq_scan |

==> one sequential scan, no parallelization

SELECT count(*) FROM t1;
count
----------
10000000
(1 row)

SELECT pg_sleep(1);
SELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,
last_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'
-[ RECORD 1 ]----------+------------------------------
relname | t1
seq_scan | 4
last_seq_scan | 2024-08-29 15:43:18.504533+02
parallel_seq_scan | 3
last_parallel_seq_scan | 2024-08-29 15:43:18.504533+02

==> one parallel sequential scan
==> I use the default configuration, so parallel_leader_participation = on,
max_parallel_workers_per_gather = 2
==> meaning 3 parallel sequential scans (1 leader, two workers)
==> take note that seq_scan was also incremented... we didn't change the
previous behaviour for this column
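
(To see where those three increments come from, you can look at the settings
and at the plan of the query above; with the defaults it should look roughly
like this, although the exact plan depends on the configuration and on the
table size:)

SHOW parallel_leader_participation;    -- on by default
SHOW max_parallel_workers_per_gather;  -- 2 by default
EXPLAIN (COSTS OFF) SELECT count(*) FROM t1;
-- expected shape:
--   Finalize Aggregate
--     ->  Gather
--           Workers Planned: 2
--           ->  Partial Aggregate
--                 ->  Parallel Seq Scan on t1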

CREATE INDEX ON t1(id);
CREATE INDEX
SELECT
indexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes WHERE relname='t1'
-[ RECORD 1 ]----------+----------
indexrelname | t1_id_idx
idx_scan | 0
last_idx_scan |
parallel_idx_scan | 0
last_parallel_idx_scan |
idx_tup_read | 0
idx_tup_fetch | 0

==> no scan at all, the index has just been created

SELECT * FROM t1 WHERE id=150000;
id
--------
150000
(1 row)

SELECT pg_sleep(1);
SELECT
indexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes WHERE relname='t1'
-[ RECORD 1 ]----------+------------------------------
indexrelname | t1_id_idx
idx_scan | 1
last_idx_scan | 2024-08-29 15:43:22.020853+02
parallel_idx_scan | 0
last_parallel_idx_scan |
idx_tup_read | 1
idx_tup_fetch | 0

==> one index scan, no parallelization

SELECT * FROM t1 WHERE id BETWEEN 100000 AND 400000;
SELECT pg_sleep(1);
pg_sleep
----------

(1 row)

SELECT
indexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes WHERE relname='t1'
-[ RECORD 1 ]----------+------------------------------
indexrelname | t1_id_idx
idx_scan | 2
last_idx_scan | 2024-08-29 15:43:23.136665+02
parallel_idx_scan | 0
last_parallel_idx_scan |
idx_tup_read | 300002
idx_tup_fetch | 0

==> another index scan, no parallelization

SELECT count(*) FROM t1 WHERE id BETWEEN 100000 AND 400000;
count
--------
300001
(1 row)

SELECT pg_sleep(1);
SELECT
indexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch
FROM pg_stat_user_indexes WHERE relname='t1'
-[ RECORD 1 ]----------+-----------------------------
indexrelname | t1_id_idx
idx_scan | 5
last_idx_scan | 2024-08-29 15:43:24.16057+02
parallel_idx_scan | 3
last_parallel_idx_scan | 2024-08-29 15:43:24.16057+02
idx_tup_read | 600003
idx_tup_fetch | 0

==> one parallel index scan
==> same thing, 3 parallel index scans (1 leader, two workers)
==> also, take note that idx_scan was also incremented... we didn't change
the previous behaviour for this column
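
(The same kind of check works at the index level, for instance with something
like this, same assumptions as above:)

SELECT indexrelname,
       idx_scan,
       parallel_idx_scan,
       last_parallel_idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan > 0
ORDER BY parallel_idx_scan DESC;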

First time I had to add new columns to a statistics catalog. I'm actually
not sure that we were right to change pg_proc.dat manually. We'll probably
have to fix this.

Documentation is done, but maybe we should also add that seq_scan and
idx_scan also include parallel scans.

Yet to be done: tests. Once there's an agreement on this patch, we'll work
on the tests.
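
If it helps, a first check could probably look something like this (completely
untested sketch; where it would live in the regression suite, and how to make
sure the plan really goes parallel there, are still open questions):

SET max_parallel_workers_per_gather = 2;
SELECT count(*) FROM t1;              -- should run as a parallel seq scan
SELECT pg_stat_force_next_flush();    -- flush the pending counters
SELECT seq_scan >= parallel_seq_scan AS seq_includes_parallel,
       parallel_seq_scan > 0 AS has_parallel_seq_scan
FROM pg_stat_user_tables
WHERE relname = 't1';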

This has been a collective work with Benoit Lobréau, Jehan-Guillaume de
Rorthais, and Franck Boudehen.

Thanks.

Regards.

[1]: /messages/by-id/b4220d15-2e21-0e98-921b-b9892543cc93@dalibo.com
[2]: /messages/by-id/d657df20-c4bf-63f6-e74c-cb85a81d0383@dalibo.com

--
Guillaume.

Attachments:

test.sql (application/sql)
0001-Add-parallel-columns-for-pg_stat_all_tables-indexes.patch (text/x-patch; charset=US-ASCII)
From 78fb5406c42c9ecd429e08314d9f3a0bfd112c6a Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Wed, 28 Aug 2024 21:35:30 +0200
Subject: [PATCH] Add parallel columns for pg_stat_all_tables,indexes

pg_stat_all_tables gets 4 new columns: parallel_seq_scan,
last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan.

pg_stat_all_indexes gets 2 new columns: parallel_idx_scan,
last_parallel_idx_scan.
---
 doc/src/sgml/monitoring.sgml                 | 57 ++++++++++++++++++++
 src/backend/access/heap/heapam.c             |  2 +
 src/backend/access/nbtree/nbtsearch.c        |  2 +
 src/backend/catalog/system_views.sql         |  6 +++
 src/backend/utils/activity/pgstat_relation.c |  7 ++-
 src/backend/utils/adt/pgstatfuncs.c          |  6 +++
 src/include/catalog/pg_proc.dat              |  8 +++
 src/include/pgstat.h                         | 13 +++++
 8 files changed, 99 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 55417a6fa9..afd5a23528 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3714,6 +3714,25 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_seq_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel sequential scan on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>seq_tup_read</structfield> <type>bigint</type>
@@ -3742,6 +3761,25 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel index scan on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_tup_fetch</structfield> <type>bigint</type>
@@ -4021,6 +4059,25 @@ description | Waiting for a newly initialized WAL file to reach durable storage
       </para></entry>
      </row>
 
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this index
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel scan on this index, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_tup_read</structfield> <type>bigint</type>
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 91b20147a0..c2f1b8e25e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -410,6 +410,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	 */
 	if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
 		pgstat_count_heap_scan(scan->rs_base.rs_rd);
+  if (scan->rs_base.rs_parallel != NULL)
+		pgstat_count_parallel_heap_scan(scan->rs_base.rs_rd);
 }
 
 /*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 2551df8a67..e37ed32bb1 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -897,6 +897,8 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 	Assert(!BTScanPosIsValid(so->currPos));
 
 	pgstat_count_index_scan(rel);
+	if (scan->parallel_scan != NULL)
+		pgstat_count_parallel_index_scan(rel);
 
 	/*
 	 * Examine the scan keys and eliminate any redundant keys; also mark the
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 19cabc9a47..d78a121114 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -670,9 +670,13 @@ CREATE VIEW pg_stat_all_tables AS
             C.relname AS relname,
             pg_stat_get_numscans(C.oid) AS seq_scan,
             pg_stat_get_lastscan(C.oid) AS last_seq_scan,
+            pg_stat_get_parallelnumscans(C.oid) AS parallel_seq_scan,
+            pg_stat_get_parallellastscan(C.oid) AS last_parallel_seq_scan,
             pg_stat_get_tuples_returned(C.oid) AS seq_tup_read,
             sum(pg_stat_get_numscans(I.indexrelid))::bigint AS idx_scan,
             max(pg_stat_get_lastscan(I.indexrelid)) AS last_idx_scan,
+            sum(pg_stat_get_parallelnumscans(I.indexrelid))::bigint AS parallel_idx_scan,
+            max(pg_stat_get_parallellastscan(I.indexrelid)) AS last_parallel_idx_scan,
             sum(pg_stat_get_tuples_fetched(I.indexrelid))::bigint +
             pg_stat_get_tuples_fetched(C.oid) AS idx_tup_fetch,
             pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
@@ -792,6 +796,8 @@ CREATE VIEW pg_stat_all_indexes AS
             I.relname AS indexrelname,
             pg_stat_get_numscans(I.oid) AS idx_scan,
             pg_stat_get_lastscan(I.oid) AS last_idx_scan,
+            pg_stat_get_parallelnumscans(I.oid) AS parallel_idx_scan,
+            pg_stat_get_parallellastscan(I.oid) AS last_parallel_idx_scan,
             pg_stat_get_tuples_returned(I.oid) AS idx_tup_read,
             pg_stat_get_tuples_fetched(I.oid) AS idx_tup_fetch
     FROM pg_class C JOIN
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 8a3f7d434c..cfdc1d42bf 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -829,12 +829,15 @@ pgstat_relation_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 	tabentry = &shtabstats->stats;
 
 	tabentry->numscans += lstats->counts.numscans;
-	if (lstats->counts.numscans)
+	tabentry->parallelnumscans += lstats->counts.parallelnumscans;
+	if (lstats->counts.numscans || lstats->counts.parallelnumscans)
 	{
 		TimestampTz t = GetCurrentTransactionStopTimestamp();
 
-		if (t > tabentry->lastscan)
+		if (t > tabentry->lastscan && lstats->counts.numscans)
 			tabentry->lastscan = t;
+		if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)
+			tabentry->parallellastscan = t;
 	}
 	tabentry->tuples_returned += lstats->counts.tuples_returned;
 	tabentry->tuples_fetched += lstats->counts.tuples_fetched;
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 3221137123..8b9440ee3b 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -82,6 +82,9 @@ PG_STAT_GET_RELENTRY_INT64(mod_since_analyze)
 /* pg_stat_get_numscans */
 PG_STAT_GET_RELENTRY_INT64(numscans)
 
+/* pg_stat_get_parallelnumscans */
+PG_STAT_GET_RELENTRY_INT64(parallelnumscans)
+
 /* pg_stat_get_tuples_deleted */
 PG_STAT_GET_RELENTRY_INT64(tuples_deleted)
 
@@ -140,6 +143,9 @@ PG_STAT_GET_RELENTRY_TIMESTAMPTZ(last_vacuum_time)
 /* pg_stat_get_lastscan */
 PG_STAT_GET_RELENTRY_TIMESTAMPTZ(lastscan)
 
+/* pg_stat_get_parallellastscan */
+PG_STAT_GET_RELENTRY_TIMESTAMPTZ(parallellastscan)
+
 Datum
 pg_stat_get_function_calls(PG_FUNCTION_ARGS)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4abc6d9526..022c905ea7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5391,6 +5391,14 @@
   proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
   prorettype => 'timestamptz', proargtypes => 'oid',
   prosrc => 'pg_stat_get_lastscan' },
+{ oid => '9000', descr => 'statistics: number of parallel scans done for table/index',
+  proname => 'pg_stat_get_parallelnumscans', provolatile => 's', proparallel => 'r',
+  prorettype => 'int8', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallelnumscans' },
+{ oid => '9001', descr => 'statistics: time of the last parallel scan for table/index',
+  proname => 'pg_stat_get_parallellastscan', provolatile => 's', proparallel => 'r',
+  prorettype => 'timestamptz', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallellastscan' },
 { oid => '1929', descr => 'statistics: number of tuples read by seqscan',
   proname => 'pg_stat_get_tuples_returned', provolatile => 's',
   proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f63159c55c..b5011a3d1b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -190,6 +190,7 @@ typedef struct PgStat_BackendSubEntry
 typedef struct PgStat_TableCounts
 {
 	PgStat_Counter numscans;
+	PgStat_Counter parallelnumscans;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -430,6 +431,8 @@ typedef struct PgStat_StatTabEntry
 {
 	PgStat_Counter numscans;
 	TimestampTz lastscan;
+	PgStat_Counter parallelnumscans;
+	TimestampTz parallellastscan;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -642,6 +645,11 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.numscans++;					\
 	} while (0)
+#define pgstat_count_parallel_heap_scan(rel)						\
+	do {															\
+		if (pgstat_should_count_relation(rel))						\
+			(rel)->pgstat_info->counts.parallelnumscans++;			\
+	} while (0)
 #define pgstat_count_heap_getnext(rel)								\
 	do {															\
 		if (pgstat_should_count_relation(rel))						\
@@ -657,6 +665,11 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.numscans++;					\
 	} while (0)
+#define pgstat_count_parallel_index_scan(rel)						\
+	do {															\
+		if (pgstat_should_count_relation(rel))						\
+			(rel)->pgstat_info->counts.parallelnumscans++;			\
+	} while (0)
 #define pgstat_count_index_tuples(rel, n)							\
 	do {															\
 		if (pgstat_should_count_relation(rel))						\
-- 
2.46.0

#2 Bertrand Drouvot
bertranddrouvot.pg@gmail.com
In reply to: Guillaume Lelarge (#1)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Thu, Aug 29, 2024 at 04:04:05PM +0200, Guillaume Lelarge wrote:

Hello,

This patch was a bit discussed on [1], and with more details on [2]. It
introduces four new columns in pg_stat_all_tables:

* parallel_seq_scan
* last_parallel_seq_scan
* parallel_idx_scan
* last_parallel_idx_scan

and two new columns in pg_stat_all_indexes:

* parallel_idx_scan
* last_parallel_idx_scan

As Benoit said yesterday, the intent is to help administrators evaluate the
usage of parallel workers in their databases and help configuring
parallelization usage.

Thanks for the patch. I think that's a good idea to provide more instrumentation
in this area. So, +1 regarding this patch.

A few random comments:

1 ===

+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>

I wonder if we should not update the seq_scan too to indicate that it
includes the parallel_seq_scan.

Same kind of comment for last_seq_scan, idx_scan and last_idx_scan.

2 ===

@@ -410,6 +410,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
         */
        if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
                pgstat_count_heap_scan(scan->rs_base.rs_rd);
+  if (scan->rs_base.rs_parallel != NULL)
+               pgstat_count_parallel_heap_scan(scan->rs_base.rs_rd);

Indentation seems broken.

Shouldn't the parallel counter rely on the "scan->rs_base.rs_flags & SO_TYPE_SEQSCAN"
test too?

What about getting rid of pgstat_count_parallel_heap_scan and adding an extra
boolean parameter to pgstat_count_heap_scan to indicate if counts.parallelnumscans
should be incremented too?

Something like:

pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel != NULL)

3 ===

Same comment for pgstat_count_index_scan (add an extra boolean parameter) and
get rid of pgstat_count_parallel_index_scan().

I think that 2 === and 3 === would help to avoid missing increments should we
add those calls to other places in the future.

4 ===

+ if (lstats->counts.numscans || lstats->counts.parallelnumscans)

Is it possible to have (lstats->counts.parallelnumscans) without having
(lstats->counts.numscans) ?

First time I had to add new columns to a statistics catalog. I'm actually
not sure that we were right to change pg_proc.dat manually.

I think that's the right way to do it.

I don't see a CF entry for this patch. Would you mind creating one so that
we don't lose track of it?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#3 Guillaume Lelarge
guillaume@lelarge.info
In reply to: Bertrand Drouvot (#2)
1 attachment(s)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Wed, Sep 4, 2024 at 10:47 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Hi,

On Thu, Aug 29, 2024 at 04:04:05PM +0200, Guillaume Lelarge wrote:

Hello,

This patch was a bit discussed on [1], and with more details on [2]. It
introduces four new columns in pg_stat_all_tables:

* parallel_seq_scan
* last_parallel_seq_scan
* parallel_idx_scan
* last_parallel_idx_scan

and two new columns in pg_stat_all_indexes:

* parallel_idx_scan
* last_parallel_idx_scan

As Benoit said yesterday, the intent is to help administrators evaluate the
usage of parallel workers in their databases and help configuring
parallelization usage.

Thanks for the patch. I think that's a good idea to provide more
instrumentation in this area. So, +1 regarding this patch.

Thanks.

A few random comments:

1 ===

+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>

I wonder if we should not update the seq_scan too to indicate that it
includes the parallel_seq_scan.

Same kind of comment for last_seq_scan, idx_scan and last_idx_scan.

Yeah, not sure why I didn't do it at first. I was wondering the same thing.
The patch attached does this.

2 ===

@@ -410,6 +410,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
*/
if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
pgstat_count_heap_scan(scan->rs_base.rs_rd);
+  if (scan->rs_base.rs_parallel != NULL)
+               pgstat_count_parallel_heap_scan(scan->rs_base.rs_rd);

Indentation seems broken.

My bad, sorry. Fixed in the attached patch.

Shouldn't the parallel counter rely on the "scan->rs_base.rs_flags &
SO_TYPE_SEQSCAN" test too?

You're right. Fixed in the attached patch.

What about getting rid of pgstat_count_parallel_heap_scan and adding an extra
boolean parameter to pgstat_count_heap_scan to indicate if
counts.parallelnumscans should be incremented too?

Something like:

pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel != NULL)

3 ===

Same comment for pgstat_count_index_scan (add an extra boolean parameter) and
get rid of pgstat_count_parallel_index_scan().

I think that 2 === and 3 === would help to avoid missing increments should we
add those calls to other places in the future.

Oh OK, understood. Done for both.

4 ===

+ if (lstats->counts.numscans || lstats->counts.parallelnumscans)

Is it possible to have (lstats->counts.parallelnumscans) without having
(lstats->counts.numscans) ?

Nope, parallel scans are included in seq/index scans, as far as I can tell.
I could remove the parallelnumscans testing but it would be less obvious to
read.
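
Said differently, the expectation is that the parallel counters are always a
subset of the total ones, so a query like this one (untested, just to
illustrate the invariant) should never return a row:

SELECT relname
FROM pg_stat_user_tables
WHERE parallel_seq_scan > seq_scan
   OR parallel_idx_scan > idx_scan;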

First time I had to add new columns to a statistics catalog. I'm actually
not sure that we were right to change pg_proc.dat manually.

I think that's the right way to do it.

OK, new patch attached.

I don't see a CF entry for this patch. Would you mind creating one so that
we don't lose track of it?

I don't mind adding it, though I don't know if I should add it to the
September or November commit fest. Which one should I choose?

Thanks.

Regards.

--
Guillaume.

Attachments:

v2-0001-Add-parallel-columns-for-pg_stat_all_tables-index.patch (text/x-patch; charset=US-ASCII)
From 6a202b7bd44cf33be13a8f7e0a8dc7077604c3c0 Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Wed, 28 Aug 2024 21:35:30 +0200
Subject: [PATCH v2] Add parallel columns for pg_stat_all_tables,indexes

pg_stat_all_tables gets 4 new columns: parallel_seq_scan,
last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan.

pg_stat_all_indexes gets 2 new columns: parallel_idx_scan,
last_parallel_idx_scan.
---
 doc/src/sgml/monitoring.sgml                 | 69 ++++++++++++++++++--
 src/backend/access/brin/brin.c               |  2 +-
 src/backend/access/gin/ginscan.c             |  2 +-
 src/backend/access/gist/gistget.c            |  4 +-
 src/backend/access/hash/hashsearch.c         |  2 +-
 src/backend/access/heap/heapam.c             |  2 +-
 src/backend/access/nbtree/nbtsearch.c        |  2 +-
 src/backend/access/spgist/spgscan.c          |  2 +-
 src/backend/catalog/system_views.sql         |  6 ++
 src/backend/utils/activity/pgstat_relation.c |  7 +-
 src/backend/utils/adt/pgstatfuncs.c          |  6 ++
 src/include/catalog/pg_proc.dat              |  8 +++
 src/include/pgstat.h                         | 23 +++++--
 13 files changed, 113 insertions(+), 22 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 933de6fe07..6886094095 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3773,7 +3773,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>seq_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of sequential scans initiated on this table
+       Number of sequential scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3782,7 +3782,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_seq_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last sequential scan on this table, based on the
+       The time of the last sequential scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_seq_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel sequential scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -3801,7 +3820,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this table
+       Number of index scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3810,7 +3829,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last index scan on this table, based on the
+       The time of the last index scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel index scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -4080,7 +4118,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this index
+       Number of index scans (including parallel ones) initiated on this index
       </para></entry>
      </row>
 
@@ -4089,7 +4127,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last scan on this index, based on the
+       The time of the last scan on this index (including parallel ones), based
+       on the most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this index
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel scan on this index, based on the
        most recent transaction stop time
       </para></entry>
      </row>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..4b5557fcf7 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -580,7 +580,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
-	pgstat_count_index_scan(idxRel);
+	pgstat_count_index_scan(idxRel, false);
 
 	/*
 	 * We need to know the size of the table so that we know how long to
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index af24d38544..2926a4caf6 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -435,7 +435,7 @@ ginNewScanKey(IndexScanDesc scan)
 
 	MemoryContextSwitchTo(oldCtx);
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index b35b8a9757..7e89382ce5 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -624,7 +624,7 @@ gistgettuple(IndexScanDesc scan, ScanDirection dir)
 		/* Begin the scan by processing the root page */
 		GISTSearchItem fakeItem;
 
-		pgstat_count_index_scan(scan->indexRelation);
+		pgstat_count_index_scan(scan->indexRelation, false);
 
 		so->firstCall = false;
 		so->curPageData = so->nPageData = 0;
@@ -749,7 +749,7 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	if (!so->qual_ok)
 		return 0;
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 
 	/* Begin the scan by processing the root page */
 	so->curPageData = so->nPageData = 0;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 0d99d6abc8..a63edc8372 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -297,7 +297,7 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	HashPageOpaque opaque;
 	HashScanPosItem *currItem;
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, false);
 
 	/*
 	 * We do not support hash scans with no index qualification, because we
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 91b20147a0..f21cf50e6e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -409,7 +409,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	 * and for sample scans we update stats for tuple fetches).
 	 */
 	if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
-		pgstat_count_heap_scan(scan->rs_base.rs_rd);
+		pgstat_count_heap_scan(scan->rs_base.rs_rd, (scan->rs_base.rs_parallel != NULL));
 }
 
 /*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 2551df8a67..ef50852199 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -896,7 +896,7 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 
 	Assert(!BTScanPosIsValid(so->currPos));
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, (scan->parallel_scan != NULL));
 
 	/*
 	 * Examine the scan keys and eliminate any redundant keys; also mark the
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 03293a7816..a78fa34570 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -422,7 +422,7 @@ spgrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	resetSpGistScanOpaque(so);
 
 	/* count an indexscan for stats */
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7fd5d256a1..54b1cd6b40 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -670,9 +670,13 @@ CREATE VIEW pg_stat_all_tables AS
             C.relname AS relname,
             pg_stat_get_numscans(C.oid) AS seq_scan,
             pg_stat_get_lastscan(C.oid) AS last_seq_scan,
+            pg_stat_get_parallelnumscans(C.oid) AS parallel_seq_scan,
+            pg_stat_get_parallellastscan(C.oid) AS last_parallel_seq_scan,
             pg_stat_get_tuples_returned(C.oid) AS seq_tup_read,
             sum(pg_stat_get_numscans(I.indexrelid))::bigint AS idx_scan,
             max(pg_stat_get_lastscan(I.indexrelid)) AS last_idx_scan,
+            sum(pg_stat_get_parallelnumscans(I.indexrelid))::bigint AS parallel_idx_scan,
+            max(pg_stat_get_parallellastscan(I.indexrelid)) AS last_parallel_idx_scan,
             sum(pg_stat_get_tuples_fetched(I.indexrelid))::bigint +
             pg_stat_get_tuples_fetched(C.oid) AS idx_tup_fetch,
             pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
@@ -792,6 +796,8 @@ CREATE VIEW pg_stat_all_indexes AS
             I.relname AS indexrelname,
             pg_stat_get_numscans(I.oid) AS idx_scan,
             pg_stat_get_lastscan(I.oid) AS last_idx_scan,
+            pg_stat_get_parallelnumscans(I.oid) AS parallel_idx_scan,
+            pg_stat_get_parallellastscan(I.oid) AS last_parallel_idx_scan,
             pg_stat_get_tuples_returned(I.oid) AS idx_tup_read,
             pg_stat_get_tuples_fetched(I.oid) AS idx_tup_fetch
     FROM pg_class C JOIN
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 8a3f7d434c..cfdc1d42bf 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -829,12 +829,15 @@ pgstat_relation_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 	tabentry = &shtabstats->stats;
 
 	tabentry->numscans += lstats->counts.numscans;
-	if (lstats->counts.numscans)
+	tabentry->parallelnumscans += lstats->counts.parallelnumscans;
+	if (lstats->counts.numscans || lstats->counts.parallelnumscans)
 	{
 		TimestampTz t = GetCurrentTransactionStopTimestamp();
 
-		if (t > tabentry->lastscan)
+		if (t > tabentry->lastscan && lstats->counts.numscans)
 			tabentry->lastscan = t;
+		if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)
+			tabentry->parallellastscan = t;
 	}
 	tabentry->tuples_returned += lstats->counts.tuples_returned;
 	tabentry->tuples_fetched += lstats->counts.tuples_fetched;
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97dc09ac0d..30a3849e3d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -82,6 +82,9 @@ PG_STAT_GET_RELENTRY_INT64(mod_since_analyze)
 /* pg_stat_get_numscans */
 PG_STAT_GET_RELENTRY_INT64(numscans)
 
+/* pg_stat_get_parallelnumscans */
+PG_STAT_GET_RELENTRY_INT64(parallelnumscans)
+
 /* pg_stat_get_tuples_deleted */
 PG_STAT_GET_RELENTRY_INT64(tuples_deleted)
 
@@ -140,6 +143,9 @@ PG_STAT_GET_RELENTRY_TIMESTAMPTZ(last_vacuum_time)
 /* pg_stat_get_lastscan */
 PG_STAT_GET_RELENTRY_TIMESTAMPTZ(lastscan)
 
+/* pg_stat_get_parallellastscan */
+PG_STAT_GET_RELENTRY_TIMESTAMPTZ(parallellastscan)
+
 Datum
 pg_stat_get_function_calls(PG_FUNCTION_ARGS)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff5436acac..1cce03a6d2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5391,6 +5391,14 @@
   proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
   prorettype => 'timestamptz', proargtypes => 'oid',
   prosrc => 'pg_stat_get_lastscan' },
+{ oid => '9000', descr => 'statistics: number of parallel scans done for table/index',
+  proname => 'pg_stat_get_parallelnumscans', provolatile => 's', proparallel => 'r',
+  prorettype => 'int8', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallelnumscans' },
+{ oid => '9001', descr => 'statistics: time of the last parallel scan for table/index',
+  proname => 'pg_stat_get_parallellastscan', provolatile => 's', proparallel => 'r',
+  prorettype => 'timestamptz', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallellastscan' },
 { oid => '1929', descr => 'statistics: number of tuples read by seqscan',
   proname => 'pg_stat_get_tuples_returned', provolatile => 's',
   proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index be2c91168a..c7f4cc3c57 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -192,6 +192,7 @@ typedef struct PgStat_BackendSubEntry
 typedef struct PgStat_TableCounts
 {
 	PgStat_Counter numscans;
+	PgStat_Counter parallelnumscans;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -433,6 +434,8 @@ typedef struct PgStat_StatTabEntry
 {
 	PgStat_Counter numscans;
 	TimestampTz lastscan;
+	PgStat_Counter parallelnumscans;
+	TimestampTz parallellastscan;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -640,10 +643,14 @@ extern void pgstat_report_analyze(Relation rel,
 
 /* nontransactional event counts are simple enough to inline */
 
-#define pgstat_count_heap_scan(rel)									\
+#define pgstat_count_heap_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
-			(rel)->pgstat_info->counts.numscans++;					\
+		if (pgstat_should_count_relation(rel)) {					\
+			if (!parallel)											\
+				(rel)->pgstat_info->counts.numscans++;				\
+			else													\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_heap_getnext(rel)								\
 	do {															\
@@ -655,10 +662,14 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.tuples_fetched++;			\
 	} while (0)
-#define pgstat_count_index_scan(rel)								\
+#define pgstat_count_index_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
-			(rel)->pgstat_info->counts.numscans++;					\
+		if (pgstat_should_count_relation(rel)) {					\
+			if (!parallel)											\
+				(rel)->pgstat_info->counts.numscans++;				\
+			else													\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_index_tuples(rel, n)							\
 	do {															\
-- 
2.46.0

#4 Bertrand Drouvot
bertranddrouvot.pg@gmail.com
In reply to: Guillaume Lelarge (#3)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:

On Wed, Sep 4, 2024 at 10:47 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

I don't see a CF entry for this patch. Would you mind creating one so that
we don't lose track of it?

I don't mind adding it, though I don't know if I should add it to the
September or November commit fest. Which one should I choose?

Thanks! That should be the November one (as the September one already started).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#5 Guillaume Lelarge
guillaume@lelarge.info
In reply to: Bertrand Drouvot (#4)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Wed, Sep 4, 2024 at 2:58 PM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Hi,

On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:

On Wed, Sep 4, 2024 at 10:47 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

I don't see a CF entry for this patch. Would you mind creating one so that
we don't lose track of it?

I don't mind adding it, though I don't know if I should add it to the
September or November commit fest. Which one should I choose?

Thanks! That should be the November one (as the September one already
started).

I should have gone to the commit fest website, it says the same. I had the
recollection that it started on the 15th. Anyway, added to the November
commit fest (https://commitfest.postgresql.org/50/5238/).

--
Guillaume.

#6 Bertrand Drouvot
bertranddrouvot.pg@gmail.com
In reply to: Guillaume Lelarge (#3)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:

Hi,

On Wed, Sep 4, 2024 at 10:47 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

What about getting rid of pgstat_count_parallel_heap_scan and adding an extra
boolean parameter to pgstat_count_heap_scan to indicate if
counts.parallelnumscans should be incremented too?

Something like:

pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel != NULL)

3 ===

Same comment for pgstat_count_index_scan (add an extra boolean parameter) and
get rid of pgstat_count_parallel_index_scan().

I think that 2 === and 3 === would help to avoid missing increments should we
add those calls to other places in the future.

Oh OK, understood. Done for both.

Thanks for v2!

1 ===

-#define pgstat_count_heap_scan(rel)
+#define pgstat_count_heap_scan(rel, parallel)
        do {
-               if (pgstat_should_count_relation(rel))
-                       (rel)->pgstat_info->counts.numscans++;
+            if (pgstat_should_count_relation(rel)) {
+                       if (!parallel)
+                               (rel)->pgstat_info->counts.numscans++;
+                       else
+                               (rel)->pgstat_info->counts.parallelnumscans++;
+               }

I think counts.numscans has to be incremented in all the cases (so even if
"parallel" is true).

Same comment for pgstat_count_index_scan().
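
(That way, one parallel scan done by the leader and two workers, as in your
test, moves both counters by 3. Something like this, untested and assuming the
plan actually goes parallel, should show both columns increasing together:)

SELECT seq_scan, parallel_seq_scan FROM pg_stat_user_tables WHERE relname = 't1';  -- before
SELECT count(*) FROM t1;              -- parallel seq scan, leader + 2 workers
SELECT pg_stat_force_next_flush();    -- flush the pending counters
SELECT seq_scan, parallel_seq_scan FROM pg_stat_user_tables WHERE relname = 't1';  -- after: both +3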

4 ===

+ if (lstats->counts.numscans || lstats->counts.parallelnumscans)

Is it possible to have (lstats->counts.parallelnumscans) without having
(lstats->counts.numscans) ?

Nope, parallel scans are included in seq/index scans, as far as I can tell.
I could remove the parallelnumscans testing but it would be less obvious to
read.

2 ===

What about adding a comment instead of this extra check?

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#7 Guillaume Lelarge
guillaume@lelarge.info
In reply to: Bertrand Drouvot (#6)
1 attachment(s)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Wed, Sep 4, 2024 at 4:18 PM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Hi,

On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:

Hi,

On Wed, Sep 4, 2024 at 10:47 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

What about getting rid of pgstat_count_parallel_heap_scan and adding an extra
boolean parameter to pgstat_count_heap_scan to indicate if
counts.parallelnumscans should be incremented too?

Something like:

pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel != NULL)

3 ===

Same comment for pgstat_count_index_scan (add an extra boolean parameter) and
get rid of pgstat_count_parallel_index_scan().

I think that 2 === and 3 === would help to avoid missing increments should we
add those calls to other places in the future.

Oh OK, understood. Done for both.

Thanks for v2!

1 ===

-#define pgstat_count_heap_scan(rel)
+#define pgstat_count_heap_scan(rel, parallel)
do {
-               if (pgstat_should_count_relation(rel))
-                       (rel)->pgstat_info->counts.numscans++;
+            if (pgstat_should_count_relation(rel)) {
+                       if (!parallel)
+                               (rel)->pgstat_info->counts.numscans++;
+                       else
+                               (rel)->pgstat_info->counts.parallelnumscans++;
+               }

I think counts.numscans has to be incremented in all the cases (so even if
"parallel" is true).

Same comment for pgstat_count_index_scan().

You're right, and I've been too quick. Fixed in v3.

4 ===

+ if (lstats->counts.numscans || lstats->counts.parallelnumscans)

Is it possible to have (lstats->counts.parallelnumscans) without having
(lstats->counts.numscans) ?

Nope, parallel scans are included in seq/index scans, as far as I can tell.
I could remove the parallelnumscans testing but it would be less obvious to
read.

2 ===

What about adding a comment instead of this extra check?

Done too in v3.

--
Guillaume.

Attachments:

v3-0001-Add-parallel-columns-for-pg_stat_all_tables-index.patch (text/x-patch; charset=US-ASCII)
From 97a95650cd220c1b88ab6f3d36149b8860bead1d Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Wed, 28 Aug 2024 21:35:30 +0200
Subject: [PATCH v3] Add parallel columns for pg_stat_all_tables,indexes

pg_stat_all_tables gets 4 new columns: parallel_seq_scan,
last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan.

pg_stat_all_indexes gets 2 new columns: parallel_idx_scan,
last_parallel_idx_scan.
---
 doc/src/sgml/monitoring.sgml                 | 69 ++++++++++++++++++--
 src/backend/access/brin/brin.c               |  2 +-
 src/backend/access/gin/ginscan.c             |  2 +-
 src/backend/access/gist/gistget.c            |  4 +-
 src/backend/access/hash/hashsearch.c         |  2 +-
 src/backend/access/heap/heapam.c             |  2 +-
 src/backend/access/nbtree/nbtsearch.c        |  2 +-
 src/backend/access/spgist/spgscan.c          |  2 +-
 src/backend/catalog/system_views.sql         |  6 ++
 src/backend/utils/activity/pgstat_relation.c | 10 ++-
 src/backend/utils/adt/pgstatfuncs.c          |  6 ++
 src/include/catalog/pg_proc.dat              |  8 +++
 src/include/pgstat.h                         | 17 +++--
 13 files changed, 113 insertions(+), 19 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 933de6fe07..6886094095 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3773,7 +3773,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>seq_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of sequential scans initiated on this table
+       Number of sequential scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3782,7 +3782,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_seq_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last sequential scan on this table, based on the
+       The time of the last sequential scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_seq_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel sequential scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -3801,7 +3820,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this table
+       Number of index scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3810,7 +3829,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last index scan on this table, based on the
+       The time of the last index scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel index scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -4080,7 +4118,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this index
+       Number of index scans (including parallel ones) initiated on this index
       </para></entry>
      </row>
 
@@ -4089,7 +4127,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last scan on this index, based on the
+       The time of the last scan on this index (including parallel ones), based
+       on the most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this index
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel scan on this index, based on the
        most recent transaction stop time
       </para></entry>
      </row>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..4b5557fcf7 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -580,7 +580,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
-	pgstat_count_index_scan(idxRel);
+	pgstat_count_index_scan(idxRel, false);
 
 	/*
 	 * We need to know the size of the table so that we know how long to
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index af24d38544..2926a4caf6 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -435,7 +435,7 @@ ginNewScanKey(IndexScanDesc scan)
 
 	MemoryContextSwitchTo(oldCtx);
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index b35b8a9757..7e89382ce5 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -624,7 +624,7 @@ gistgettuple(IndexScanDesc scan, ScanDirection dir)
 		/* Begin the scan by processing the root page */
 		GISTSearchItem fakeItem;
 
-		pgstat_count_index_scan(scan->indexRelation);
+		pgstat_count_index_scan(scan->indexRelation, false);
 
 		so->firstCall = false;
 		so->curPageData = so->nPageData = 0;
@@ -749,7 +749,7 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	if (!so->qual_ok)
 		return 0;
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 
 	/* Begin the scan by processing the root page */
 	so->curPageData = so->nPageData = 0;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 0d99d6abc8..a63edc8372 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -297,7 +297,7 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	HashPageOpaque opaque;
 	HashScanPosItem *currItem;
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, false);
 
 	/*
 	 * We do not support hash scans with no index qualification, because we
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 91b20147a0..f21cf50e6e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -409,7 +409,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	 * and for sample scans we update stats for tuple fetches).
 	 */
 	if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
-		pgstat_count_heap_scan(scan->rs_base.rs_rd);
+		pgstat_count_heap_scan(scan->rs_base.rs_rd, (scan->rs_base.rs_parallel != NULL));
 }
 
 /*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 2551df8a67..ef50852199 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -896,7 +896,7 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 
 	Assert(!BTScanPosIsValid(so->currPos));
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, (scan->parallel_scan != NULL));
 
 	/*
 	 * Examine the scan keys and eliminate any redundant keys; also mark the
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 03293a7816..a78fa34570 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -422,7 +422,7 @@ spgrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	resetSpGistScanOpaque(so);
 
 	/* count an indexscan for stats */
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7fd5d256a1..54b1cd6b40 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -670,9 +670,13 @@ CREATE VIEW pg_stat_all_tables AS
             C.relname AS relname,
             pg_stat_get_numscans(C.oid) AS seq_scan,
             pg_stat_get_lastscan(C.oid) AS last_seq_scan,
+            pg_stat_get_parallelnumscans(C.oid) AS parallel_seq_scan,
+            pg_stat_get_parallellastscan(C.oid) AS last_parallel_seq_scan,
             pg_stat_get_tuples_returned(C.oid) AS seq_tup_read,
             sum(pg_stat_get_numscans(I.indexrelid))::bigint AS idx_scan,
             max(pg_stat_get_lastscan(I.indexrelid)) AS last_idx_scan,
+            sum(pg_stat_get_parallelnumscans(I.indexrelid))::bigint AS parallel_idx_scan,
+            max(pg_stat_get_parallellastscan(I.indexrelid)) AS last_parallel_idx_scan,
             sum(pg_stat_get_tuples_fetched(I.indexrelid))::bigint +
             pg_stat_get_tuples_fetched(C.oid) AS idx_tup_fetch,
             pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
@@ -792,6 +796,8 @@ CREATE VIEW pg_stat_all_indexes AS
             I.relname AS indexrelname,
             pg_stat_get_numscans(I.oid) AS idx_scan,
             pg_stat_get_lastscan(I.oid) AS last_idx_scan,
+            pg_stat_get_parallelnumscans(I.oid) AS parallel_idx_scan,
+            pg_stat_get_parallellastscan(I.oid) AS last_parallel_idx_scan,
             pg_stat_get_tuples_returned(I.oid) AS idx_tup_read,
             pg_stat_get_tuples_fetched(I.oid) AS idx_tup_fetch
     FROM pg_class C JOIN
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 8a3f7d434c..b88c35d834 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -829,12 +829,20 @@ pgstat_relation_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 	tabentry = &shtabstats->stats;
 
 	tabentry->numscans += lstats->counts.numscans;
+	tabentry->parallelnumscans += lstats->counts.parallelnumscans;
+
+	/*
+	 * Don't check counts.parallelnumscans because counts.numscans includes
+	 * counts.parallelnumscans
+	 */
 	if (lstats->counts.numscans)
 	{
 		TimestampTz t = GetCurrentTransactionStopTimestamp();
 
-		if (t > tabentry->lastscan)
+		if (t > tabentry->lastscan && lstats->counts.numscans)
 			tabentry->lastscan = t;
+		if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)
+			tabentry->parallellastscan = t;
 	}
 	tabentry->tuples_returned += lstats->counts.tuples_returned;
 	tabentry->tuples_fetched += lstats->counts.tuples_fetched;
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97dc09ac0d..30a3849e3d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -82,6 +82,9 @@ PG_STAT_GET_RELENTRY_INT64(mod_since_analyze)
 /* pg_stat_get_numscans */
 PG_STAT_GET_RELENTRY_INT64(numscans)
 
+/* pg_stat_get_parallelnumscans */
+PG_STAT_GET_RELENTRY_INT64(parallelnumscans)
+
 /* pg_stat_get_tuples_deleted */
 PG_STAT_GET_RELENTRY_INT64(tuples_deleted)
 
@@ -140,6 +143,9 @@ PG_STAT_GET_RELENTRY_TIMESTAMPTZ(last_vacuum_time)
 /* pg_stat_get_lastscan */
 PG_STAT_GET_RELENTRY_TIMESTAMPTZ(lastscan)
 
+/* pg_stat_get_parallellastscan */
+PG_STAT_GET_RELENTRY_TIMESTAMPTZ(parallellastscan)
+
 Datum
 pg_stat_get_function_calls(PG_FUNCTION_ARGS)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff5436acac..1cce03a6d2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5391,6 +5391,14 @@
   proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
   prorettype => 'timestamptz', proargtypes => 'oid',
   prosrc => 'pg_stat_get_lastscan' },
+{ oid => '9000', descr => 'statistics: number of parallel scans done for table/index',
+  proname => 'pg_stat_get_parallelnumscans', provolatile => 's', proparallel => 'r',
+  prorettype => 'int8', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallelnumscans' },
+{ oid => '9001', descr => 'statistics: time of the last parallel scan for table/index',
+  proname => 'pg_stat_get_parallellastscan', provolatile => 's', proparallel => 'r',
+  prorettype => 'timestamptz', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallellastscan' },
 { oid => '1929', descr => 'statistics: number of tuples read by seqscan',
   proname => 'pg_stat_get_tuples_returned', provolatile => 's',
   proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index be2c91168a..ceca8bff7d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -192,6 +192,7 @@ typedef struct PgStat_BackendSubEntry
 typedef struct PgStat_TableCounts
 {
 	PgStat_Counter numscans;
+	PgStat_Counter parallelnumscans;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -433,6 +434,8 @@ typedef struct PgStat_StatTabEntry
 {
 	PgStat_Counter numscans;
 	TimestampTz lastscan;
+	PgStat_Counter parallelnumscans;
+	TimestampTz parallellastscan;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -640,10 +643,13 @@ extern void pgstat_report_analyze(Relation rel,
 
 /* nontransactional event counts are simple enough to inline */
 
-#define pgstat_count_heap_scan(rel)									\
+#define pgstat_count_heap_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_heap_getnext(rel)								\
 	do {															\
@@ -655,10 +661,13 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.tuples_fetched++;			\
 	} while (0)
-#define pgstat_count_index_scan(rel)								\
+#define pgstat_count_index_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_index_tuples(rel, n)							\
 	do {															\
-- 
2.46.0
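
To put the new columns in perspective, here is a minimal sketch of the kind
of query they enable (a rough illustration only: it uses the column names
added by the view changes above, and the percentage formatting and ordering
are arbitrary choices):

-- Share of scans that ran under a parallel plan, per table.
SELECT relname,
       seq_scan,
       parallel_seq_scan,
       round(100.0 * parallel_seq_scan / NULLIF(seq_scan, 0), 1) AS parallel_seq_pct,
       idx_scan,
       parallel_idx_scan,
       round(100.0 * parallel_idx_scan / NULLIF(idx_scan, 0), 1) AS parallel_idx_pct
  FROM pg_stat_user_tables
 ORDER BY coalesce(parallel_seq_scan, 0) + coalesce(parallel_idx_scan, 0) DESC;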

#8Bertrand Drouvot
bertranddrouvot.pg@gmail.com
In reply to: Guillaume Lelarge (#7)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Wed, Sep 04, 2024 at 04:37:19PM +0200, Guillaume Lelarge wrote:

Hi,

On Wed, Sep 4, 2024 at 4:18 PM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

What about adding a comment instead of this extra check?

Done too in v3.

Thanks!

1 ===

+       /*
+        * Don't check counts.parallelnumscans because counts.numscans includes
+        * counts.parallelnumscans
+        */

"." is missing at the end of the comment.

2 ===

-               if (t > tabentry->lastscan)
+               if (t > tabentry->lastscan && lstats->counts.numscans)

The extra check on lstats->counts.numscans is not needed as it's already done
a few lines before.

3 ===

+ if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)

This one makes sense.

And now I'm wondering if the extra comment added in v3 is really worth it (and
does not sound confusing)? I mean, the parallel check is done once we pass
the initial test on counts.numscans. I think the code is clear enough without
this extra comment, thoughts?

4 ===

What about adding a few tests? or do you want to wait a bit more to see if "
there's an agreement on this patch" (as you stated at the start of this thread).

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

#9Guillaume Lelarge
guillaume@lelarge.info
In reply to: Bertrand Drouvot (#8)
1 attachment(s)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Thu, Sep 5, 2024 at 7:36 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Hi,

On Wed, Sep 04, 2024 at 04:37:19PM +0200, Guillaume Lelarge wrote:

Hi,

On Wed, Sep 4, 2024 at 4:18 PM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

What about adding a comment instead of this extra check?

Done too in v3.

Thanks!

1 ===

+       /*
+        * Don't check counts.parallelnumscans because counts.numscans includes
+        * counts.parallelnumscans
+        */

"." is missing at the end of the comment.

Fixed in v4.

2 ===

-               if (t > tabentry->lastscan)
+               if (t > tabentry->lastscan && lstats->counts.numscans)

The extra check on lstats->counts.numscans is not needed as it's already
done a few lines before.

Fixed in v4.

3 ===

+ if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)

This one makes sense.

And now I'm wondering if the extra comment added in v3 is really worth it
(and does not sound confusing)? I mean, the parallel check is done once we
pass the initial test on counts.numscans. I think the code is clear enough
without this extra comment, thoughts?

I'm not sure I understand you here. I kinda like the extra comment though.

4 ===

What about adding a few tests? or do you want to wait a bit more to see if
"there's an agreement on this patch" (as you stated at the start of this
thread).

Guess I can start working on that now. It will take some time as I've never
done it before. Good thing I added the patch on the November commit fest :)

Thanks again.

Regards.

--
Guillaume.

Attachments:

v4-0001-Add-parallel-columns-for-pg_stat_all_tables-index.patch (text/x-patch; charset=US-ASCII)
From 6c92e70cd2698f24fe14069f675b7934e2f95bfe Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Wed, 28 Aug 2024 21:35:30 +0200
Subject: [PATCH v4] Add parallel columns for pg_stat_all_tables,indexes

pg_stat_all_tables gets 4 new columns: parallel_seq_scan,
last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan.

pg_stat_all_indexes gets 2 new columns: parallel_idx_scan,
last_parallel_idx_scan.
---
 doc/src/sgml/monitoring.sgml                 | 69 ++++++++++++++++++--
 src/backend/access/brin/brin.c               |  2 +-
 src/backend/access/gin/ginscan.c             |  2 +-
 src/backend/access/gist/gistget.c            |  4 +-
 src/backend/access/hash/hashsearch.c         |  2 +-
 src/backend/access/heap/heapam.c             |  2 +-
 src/backend/access/nbtree/nbtsearch.c        |  2 +-
 src/backend/access/spgist/spgscan.c          |  2 +-
 src/backend/catalog/system_views.sql         |  6 ++
 src/backend/utils/activity/pgstat_relation.c |  8 +++
 src/backend/utils/adt/pgstatfuncs.c          |  6 ++
 src/include/catalog/pg_proc.dat              |  8 +++
 src/include/pgstat.h                         | 17 +++--
 13 files changed, 112 insertions(+), 18 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 933de6fe07..6886094095 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3773,7 +3773,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>seq_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of sequential scans initiated on this table
+       Number of sequential scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3782,7 +3782,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_seq_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last sequential scan on this table, based on the
+       The time of the last sequential scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_seq_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel sequential scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -3801,7 +3820,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this table
+       Number of index scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3810,7 +3829,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last index scan on this table, based on the
+       The time of the last index scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel index scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -4080,7 +4118,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this index
+       Number of index scans (including parallel ones) initiated on this index
       </para></entry>
      </row>
 
@@ -4089,7 +4127,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last scan on this index, based on the
+       The time of the last scan on this index (including parallel ones), based
+       on the most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this index
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel scan on this index, based on the
        most recent transaction stop time
       </para></entry>
      </row>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 6467bed604..4b5557fcf7 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -580,7 +580,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
-	pgstat_count_index_scan(idxRel);
+	pgstat_count_index_scan(idxRel, false);
 
 	/*
 	 * We need to know the size of the table so that we know how long to
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index af24d38544..2926a4caf6 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -435,7 +435,7 @@ ginNewScanKey(IndexScanDesc scan)
 
 	MemoryContextSwitchTo(oldCtx);
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index b35b8a9757..7e89382ce5 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -624,7 +624,7 @@ gistgettuple(IndexScanDesc scan, ScanDirection dir)
 		/* Begin the scan by processing the root page */
 		GISTSearchItem fakeItem;
 
-		pgstat_count_index_scan(scan->indexRelation);
+		pgstat_count_index_scan(scan->indexRelation, false);
 
 		so->firstCall = false;
 		so->curPageData = so->nPageData = 0;
@@ -749,7 +749,7 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	if (!so->qual_ok)
 		return 0;
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 
 	/* Begin the scan by processing the root page */
 	so->curPageData = so->nPageData = 0;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 0d99d6abc8..a63edc8372 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -297,7 +297,7 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	HashPageOpaque opaque;
 	HashScanPosItem *currItem;
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, false);
 
 	/*
 	 * We do not support hash scans with no index qualification, because we
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 91b20147a0..f21cf50e6e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -409,7 +409,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	 * and for sample scans we update stats for tuple fetches).
 	 */
 	if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
-		pgstat_count_heap_scan(scan->rs_base.rs_rd);
+		pgstat_count_heap_scan(scan->rs_base.rs_rd, (scan->rs_base.rs_parallel != NULL));
 }
 
 /*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 2551df8a67..ef50852199 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -896,7 +896,7 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 
 	Assert(!BTScanPosIsValid(so->currPos));
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, (scan->parallel_scan != NULL));
 
 	/*
 	 * Examine the scan keys and eliminate any redundant keys; also mark the
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 03293a7816..a78fa34570 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -422,7 +422,7 @@ spgrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	resetSpGistScanOpaque(so);
 
 	/* count an indexscan for stats */
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7fd5d256a1..54b1cd6b40 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -670,9 +670,13 @@ CREATE VIEW pg_stat_all_tables AS
             C.relname AS relname,
             pg_stat_get_numscans(C.oid) AS seq_scan,
             pg_stat_get_lastscan(C.oid) AS last_seq_scan,
+            pg_stat_get_parallelnumscans(C.oid) AS parallel_seq_scan,
+            pg_stat_get_parallellastscan(C.oid) AS last_parallel_seq_scan,
             pg_stat_get_tuples_returned(C.oid) AS seq_tup_read,
             sum(pg_stat_get_numscans(I.indexrelid))::bigint AS idx_scan,
             max(pg_stat_get_lastscan(I.indexrelid)) AS last_idx_scan,
+            sum(pg_stat_get_parallelnumscans(I.indexrelid))::bigint AS parallel_idx_scan,
+            max(pg_stat_get_parallellastscan(I.indexrelid)) AS last_parallel_idx_scan,
             sum(pg_stat_get_tuples_fetched(I.indexrelid))::bigint +
             pg_stat_get_tuples_fetched(C.oid) AS idx_tup_fetch,
             pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
@@ -792,6 +796,8 @@ CREATE VIEW pg_stat_all_indexes AS
             I.relname AS indexrelname,
             pg_stat_get_numscans(I.oid) AS idx_scan,
             pg_stat_get_lastscan(I.oid) AS last_idx_scan,
+            pg_stat_get_parallelnumscans(I.oid) AS parallel_idx_scan,
+            pg_stat_get_parallellastscan(I.oid) AS last_parallel_idx_scan,
             pg_stat_get_tuples_returned(I.oid) AS idx_tup_read,
             pg_stat_get_tuples_fetched(I.oid) AS idx_tup_fetch
     FROM pg_class C JOIN
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 8a3f7d434c..766c56524e 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -829,12 +829,20 @@ pgstat_relation_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 	tabentry = &shtabstats->stats;
 
 	tabentry->numscans += lstats->counts.numscans;
+	tabentry->parallelnumscans += lstats->counts.parallelnumscans;
+
+	/*
+	 * Don't check counts.parallelnumscans because counts.numscans includes
+	 * counts.parallelnumscans.
+	 */
 	if (lstats->counts.numscans)
 	{
 		TimestampTz t = GetCurrentTransactionStopTimestamp();
 
 		if (t > tabentry->lastscan)
 			tabentry->lastscan = t;
+		if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)
+			tabentry->parallellastscan = t;
 	}
 	tabentry->tuples_returned += lstats->counts.tuples_returned;
 	tabentry->tuples_fetched += lstats->counts.tuples_fetched;
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 97dc09ac0d..30a3849e3d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -82,6 +82,9 @@ PG_STAT_GET_RELENTRY_INT64(mod_since_analyze)
 /* pg_stat_get_numscans */
 PG_STAT_GET_RELENTRY_INT64(numscans)
 
+/* pg_stat_get_parallelnumscans */
+PG_STAT_GET_RELENTRY_INT64(parallelnumscans)
+
 /* pg_stat_get_tuples_deleted */
 PG_STAT_GET_RELENTRY_INT64(tuples_deleted)
 
@@ -140,6 +143,9 @@ PG_STAT_GET_RELENTRY_TIMESTAMPTZ(last_vacuum_time)
 /* pg_stat_get_lastscan */
 PG_STAT_GET_RELENTRY_TIMESTAMPTZ(lastscan)
 
+/* pg_stat_get_parallellastscan */
+PG_STAT_GET_RELENTRY_TIMESTAMPTZ(parallellastscan)
+
 Datum
 pg_stat_get_function_calls(PG_FUNCTION_ARGS)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ff5436acac..1cce03a6d2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5391,6 +5391,14 @@
   proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
   prorettype => 'timestamptz', proargtypes => 'oid',
   prosrc => 'pg_stat_get_lastscan' },
+{ oid => '9000', descr => 'statistics: number of parallel scans done for table/index',
+  proname => 'pg_stat_get_parallelnumscans', provolatile => 's', proparallel => 'r',
+  prorettype => 'int8', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallelnumscans' },
+{ oid => '9001', descr => 'statistics: time of the last parallel scan for table/index',
+  proname => 'pg_stat_get_parallellastscan', provolatile => 's', proparallel => 'r',
+  prorettype => 'timestamptz', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallellastscan' },
 { oid => '1929', descr => 'statistics: number of tuples read by seqscan',
   proname => 'pg_stat_get_tuples_returned', provolatile => 's',
   proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index be2c91168a..ceca8bff7d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -192,6 +192,7 @@ typedef struct PgStat_BackendSubEntry
 typedef struct PgStat_TableCounts
 {
 	PgStat_Counter numscans;
+	PgStat_Counter parallelnumscans;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -433,6 +434,8 @@ typedef struct PgStat_StatTabEntry
 {
 	PgStat_Counter numscans;
 	TimestampTz lastscan;
+	PgStat_Counter parallelnumscans;
+	TimestampTz parallellastscan;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -640,10 +643,13 @@ extern void pgstat_report_analyze(Relation rel,
 
 /* nontransactional event counts are simple enough to inline */
 
-#define pgstat_count_heap_scan(rel)									\
+#define pgstat_count_heap_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_heap_getnext(rel)								\
 	do {															\
@@ -655,10 +661,13 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.tuples_fetched++;			\
 	} while (0)
-#define pgstat_count_index_scan(rel)								\
+#define pgstat_count_index_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_index_tuples(rel, n)							\
 	do {															\
-- 
2.46.0
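
A minimal sketch of what the tests mentioned above could look like (not the
final tests; it relies on the usual trick of zeroing the parallel cost GUCs
to force a parallel plan, and some_table is a placeholder name):

BEGIN;
SET LOCAL parallel_setup_cost TO 0;
SET LOCAL parallel_tuple_cost TO 0;
SET LOCAL min_parallel_table_scan_size TO 0;
SELECT count(*) FROM some_table;
SELECT pg_stat_force_next_flush();
COMMIT;
-- the new counter should have moved
SELECT parallel_seq_scan > 0 AS parallel_seq_scan_counted
  FROM pg_stat_all_tables
 WHERE relid = 'some_table'::regclass;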

#10Guillaume Lelarge
guillaume@lelarge.info
In reply to: Guillaume Lelarge (#9)
1 attachment(s)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hello,

On Thu, Sep 5, 2024 at 8:19 AM, Guillaume Lelarge <guillaume@lelarge.info> wrote:

On Thu, Sep 5, 2024 at 7:36 AM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Hi,

On Wed, Sep 04, 2024 at 04:37:19PM +0200, Guillaume Lelarge wrote:

Hi,

On Wed, Sep 4, 2024 at 4:18 PM, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

What about adding a comment instead of this extra check?

Done too in v3.

Thanks!

1 ===

+       /*
+        * Don't check counts.parallelnumscans because counts.numscans includes
+        * counts.parallelnumscans
+        */

"." is missing at the end of the comment.

Fixed in v4.

2 ===

-               if (t > tabentry->lastscan)
+               if (t > tabentry->lastscan && lstats->counts.numscans)

The extra check on lstats->counts.numscans is not needed as it's already
done a few lines before.

Fixed in v4.

3 ===

+ if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)

This one makes sense.

And now I'm wondering if the extra comment added in v3 is really worth it
(and does not sound confusing)? I mean, the parallel check is done once we
pass the initial test on counts.numscans. I think the code is clear enough
without this extra comment, thoughts?

I'm not sure I understand you here. I kinda like the extra comment though.

4 ===

What about adding a few tests? or do you want to wait a bit more to see if
"there's an agreement on this patch" (as you stated at the start of this
thread).

Guess I can start working on that now. It will take some time as I've
never done it before. Good thing I added the patch on the November commit
fest :)

Finally found some time to work on this. Tests added on v5 patch (attached).

Regards.

--
Guillaume.

Attachments:

v5-0001-Add-parallel-columns-for-pg_stat_all_tables-index.patch (text/x-patch; charset=US-ASCII)
From 92474720b3178f74517958fededcf6797de58552 Mon Sep 17 00:00:00 2001
From: Guillaume Lelarge <guillaume.lelarge@dalibo.com>
Date: Sun, 6 Oct 2024 21:50:17 +0200
Subject: [PATCH v5] Add parallel columns for pg_stat_all_tables,indexes

pg_stat_all_tables gets 4 new columns: parallel_seq_scan,
last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan.

pg_stat_all_indexes gets 2 new columns: parallel_idx_scan,
last_parallel_idx_scan.
---
 doc/src/sgml/monitoring.sgml                 |  69 ++++++-
 src/backend/access/brin/brin.c               |   2 +-
 src/backend/access/gin/ginscan.c             |   2 +-
 src/backend/access/gist/gistget.c            |   4 +-
 src/backend/access/hash/hashsearch.c         |   2 +-
 src/backend/access/heap/heapam.c             |   2 +-
 src/backend/access/nbtree/nbtsearch.c        |  12 +-
 src/backend/access/spgist/spgscan.c          |   2 +-
 src/backend/catalog/system_views.sql         |   6 +
 src/backend/utils/activity/pgstat_relation.c |   8 +
 src/backend/utils/adt/pgstatfuncs.c          |   6 +
 src/include/catalog/pg_proc.dat              |   8 +
 src/include/pgstat.h                         |  17 +-
 src/test/regress/expected/rules.out          |  18 ++
 src/test/regress/expected/stats.out          | 194 +++++++++++++++++++
 src/test/regress/sql/stats.sql               |  92 +++++++++
 16 files changed, 421 insertions(+), 23 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 331315f8d3..aeaabb0ffe 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3803,7 +3803,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>seq_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of sequential scans initiated on this table
+       Number of sequential scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3812,7 +3812,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_seq_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last sequential scan on this table, based on the
+       The time of the last sequential scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_seq_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel sequential scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_seq_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel sequential scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -3831,7 +3850,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this table
+       Number of index scans (including parallel ones) initiated on this table
       </para></entry>
      </row>
 
@@ -3840,7 +3859,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last index scan on this table, based on the
+       The time of the last index scan (including parallel ones) on this table, based on the
+       most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this table
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel index scan on this table, based on the
        most recent transaction stop time
       </para></entry>
      </row>
@@ -4110,7 +4148,7 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of index scans initiated on this index
+       Number of index scans (including parallel ones) initiated on this index
       </para></entry>
      </row>
 
@@ -4119,7 +4157,26 @@ description | Waiting for a newly initialized WAL file to reach durable storage
        <structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
-       The time of the last scan on this index, based on the
+       The time of the last scan on this index (including parallel ones), based
+       on the most recent transaction stop time
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>parallel_idx_scan</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of parallel index scans initiated on this index
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>last_parallel_idx_scan</structfield> <type>timestamp with time zone</type>
+      </para>
+      <para>
+       The time of the last parallel scan on this index, based on the
        most recent transaction stop time
       </para></entry>
      </row>
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index c0b978119a..0fffa67301 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -584,7 +584,7 @@ bringetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 
 	opaque = (BrinOpaque *) scan->opaque;
 	bdesc = opaque->bo_bdesc;
-	pgstat_count_index_scan(idxRel);
+	pgstat_count_index_scan(idxRel, false);
 
 	/*
 	 * We need to know the size of the table so that we know how long to
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index f2fd62afbb..d2001d6054 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -435,7 +435,7 @@ ginNewScanKey(IndexScanDesc scan)
 
 	MemoryContextSwitchTo(oldCtx);
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index b35b8a9757..7e89382ce5 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -624,7 +624,7 @@ gistgettuple(IndexScanDesc scan, ScanDirection dir)
 		/* Begin the scan by processing the root page */
 		GISTSearchItem fakeItem;
 
-		pgstat_count_index_scan(scan->indexRelation);
+		pgstat_count_index_scan(scan->indexRelation, false);
 
 		so->firstCall = false;
 		so->curPageData = so->nPageData = 0;
@@ -749,7 +749,7 @@ gistgetbitmap(IndexScanDesc scan, TIDBitmap *tbm)
 	if (!so->qual_ok)
 		return 0;
 
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 
 	/* Begin the scan by processing the root page */
 	so->curPageData = so->nPageData = 0;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 0d99d6abc8..a63edc8372 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -297,7 +297,7 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	HashPageOpaque opaque;
 	HashScanPosItem *currItem;
 
-	pgstat_count_index_scan(rel);
+	pgstat_count_index_scan(rel, false);
 
 	/*
 	 * We do not support hash scans with no index qualification, because we
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index da5e656a08..099b08242f 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -400,7 +400,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 	 * and for sample scans we update stats for tuple fetches).
 	 */
 	if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
-		pgstat_count_heap_scan(scan->rs_base.rs_rd);
+		pgstat_count_heap_scan(scan->rs_base.rs_rd, (scan->rs_base.rs_parallel != NULL));
 }
 
 /*
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index fff7c89ead..8115b0fa2d 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -912,6 +912,12 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 		return false;
 	}
 
+	/*
+	 * Count an indexscan for stats, now that we know that we'll call
+	 * _bt_search/_bt_endpoint below
+	 */
+	pgstat_count_index_scan(rel, (scan->parallel_scan != NULL));
+
 	/*
 	 * For parallel scans, get the starting page from shared state. If the
 	 * scan has not started, proceed to find out first leaf page in the usual
@@ -958,12 +964,6 @@ _bt_first(IndexScanDesc scan, ScanDirection dir)
 		_bt_start_array_keys(scan, dir);
 	}
 
-	/*
-	 * Count an indexscan for stats, now that we know that we'll call
-	 * _bt_search/_bt_endpoint below
-	 */
-	pgstat_count_index_scan(rel);
-
 	/*----------
 	 * Examine the scan keys to discover where we need to start the scan.
 	 *
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 3017861859..fe3c9979df 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -420,7 +420,7 @@ spgrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 	resetSpGistScanOpaque(so);
 
 	/* count an indexscan for stats */
-	pgstat_count_index_scan(scan->indexRelation);
+	pgstat_count_index_scan(scan->indexRelation, false);
 }
 
 void
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3456b821bc..b062af32fb 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -670,9 +670,13 @@ CREATE VIEW pg_stat_all_tables AS
             C.relname AS relname,
             pg_stat_get_numscans(C.oid) AS seq_scan,
             pg_stat_get_lastscan(C.oid) AS last_seq_scan,
+            pg_stat_get_parallelnumscans(C.oid) AS parallel_seq_scan,
+            pg_stat_get_parallellastscan(C.oid) AS last_parallel_seq_scan,
             pg_stat_get_tuples_returned(C.oid) AS seq_tup_read,
             sum(pg_stat_get_numscans(I.indexrelid))::bigint AS idx_scan,
             max(pg_stat_get_lastscan(I.indexrelid)) AS last_idx_scan,
+            sum(pg_stat_get_parallelnumscans(I.indexrelid))::bigint AS parallel_idx_scan,
+            max(pg_stat_get_parallellastscan(I.indexrelid)) AS last_parallel_idx_scan,
             sum(pg_stat_get_tuples_fetched(I.indexrelid))::bigint +
             pg_stat_get_tuples_fetched(C.oid) AS idx_tup_fetch,
             pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
@@ -792,6 +796,8 @@ CREATE VIEW pg_stat_all_indexes AS
             I.relname AS indexrelname,
             pg_stat_get_numscans(I.oid) AS idx_scan,
             pg_stat_get_lastscan(I.oid) AS last_idx_scan,
+            pg_stat_get_parallelnumscans(I.oid) AS parallel_idx_scan,
+            pg_stat_get_parallellastscan(I.oid) AS last_parallel_idx_scan,
             pg_stat_get_tuples_returned(I.oid) AS idx_tup_read,
             pg_stat_get_tuples_fetched(I.oid) AS idx_tup_fetch
     FROM pg_class C JOIN
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 8a3f7d434c..766c56524e 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -829,12 +829,20 @@ pgstat_relation_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)
 	tabentry = &shtabstats->stats;
 
 	tabentry->numscans += lstats->counts.numscans;
+	tabentry->parallelnumscans += lstats->counts.parallelnumscans;
+
+	/*
+	 * Don't check counts.parallelnumscans because counts.numscans includes
+	 * counts.parallelnumscans.
+	 */
 	if (lstats->counts.numscans)
 	{
 		TimestampTz t = GetCurrentTransactionStopTimestamp();
 
 		if (t > tabentry->lastscan)
 			tabentry->lastscan = t;
+		if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)
+			tabentry->parallellastscan = t;
 	}
 	tabentry->tuples_returned += lstats->counts.tuples_returned;
 	tabentry->tuples_fetched += lstats->counts.tuples_fetched;
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index f7b50e0b5a..2bb2e7bdbc 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -82,6 +82,9 @@ PG_STAT_GET_RELENTRY_INT64(mod_since_analyze)
 /* pg_stat_get_numscans */
 PG_STAT_GET_RELENTRY_INT64(numscans)
 
+/* pg_stat_get_parallelnumscans */
+PG_STAT_GET_RELENTRY_INT64(parallelnumscans)
+
 /* pg_stat_get_tuples_deleted */
 PG_STAT_GET_RELENTRY_INT64(tuples_deleted)
 
@@ -140,6 +143,9 @@ PG_STAT_GET_RELENTRY_TIMESTAMPTZ(last_vacuum_time)
 /* pg_stat_get_lastscan */
 PG_STAT_GET_RELENTRY_TIMESTAMPTZ(lastscan)
 
+/* pg_stat_get_parallellastscan */
+PG_STAT_GET_RELENTRY_TIMESTAMPTZ(parallellastscan)
+
 Datum
 pg_stat_get_function_calls(PG_FUNCTION_ARGS)
 {
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 77f54a79e6..e92a924dd2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5443,6 +5443,14 @@
   proname => 'pg_stat_get_lastscan', provolatile => 's', proparallel => 'r',
   prorettype => 'timestamptz', proargtypes => 'oid',
   prosrc => 'pg_stat_get_lastscan' },
+{ oid => '9000', descr => 'statistics: number of parallel scans done for table/index',
+  proname => 'pg_stat_get_parallelnumscans', provolatile => 's', proparallel => 'r',
+  prorettype => 'int8', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallelnumscans' },
+{ oid => '9001', descr => 'statistics: time of the last parallel scan for table/index',
+  proname => 'pg_stat_get_parallellastscan', provolatile => 's', proparallel => 'r',
+  prorettype => 'timestamptz', proargtypes => 'oid',
+  prosrc => 'pg_stat_get_parallellastscan' },
 { oid => '1929', descr => 'statistics: number of tuples read by seqscan',
   proname => 'pg_stat_get_tuples_returned', provolatile => 's',
   proparallel => 'r', prorettype => 'int8', proargtypes => 'oid',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index df53fa2d4f..55dbeefd24 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -192,6 +192,7 @@ typedef struct PgStat_BackendSubEntry
 typedef struct PgStat_TableCounts
 {
 	PgStat_Counter numscans;
+	PgStat_Counter parallelnumscans;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -435,6 +436,8 @@ typedef struct PgStat_StatTabEntry
 {
 	PgStat_Counter numscans;
 	TimestampTz lastscan;
+	PgStat_Counter parallelnumscans;
+	TimestampTz parallellastscan;
 
 	PgStat_Counter tuples_returned;
 	PgStat_Counter tuples_fetched;
@@ -642,10 +645,13 @@ extern void pgstat_report_analyze(Relation rel,
 
 /* nontransactional event counts are simple enough to inline */
 
-#define pgstat_count_heap_scan(rel)									\
+#define pgstat_count_heap_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_heap_getnext(rel)								\
 	do {															\
@@ -657,10 +663,13 @@ extern void pgstat_report_analyze(Relation rel,
 		if (pgstat_should_count_relation(rel))						\
 			(rel)->pgstat_info->counts.tuples_fetched++;			\
 	} while (0)
-#define pgstat_count_index_scan(rel)								\
+#define pgstat_count_index_scan(rel, parallel)						\
 	do {															\
-		if (pgstat_should_count_relation(rel))						\
+		if (pgstat_should_count_relation(rel)) {					\
 			(rel)->pgstat_info->counts.numscans++;					\
+			if (parallel)											\
+				(rel)->pgstat_info->counts.parallelnumscans++;		\
+		}															\
 	} while (0)
 #define pgstat_count_index_tuples(rel, n)							\
 	do {															\
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2b47013f11..a8c390fd89 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1772,6 +1772,8 @@ pg_stat_all_indexes| SELECT c.oid AS relid,
     i.relname AS indexrelname,
     pg_stat_get_numscans(i.oid) AS idx_scan,
     pg_stat_get_lastscan(i.oid) AS last_idx_scan,
+    pg_stat_get_parallelnumscans(i.oid) AS parallel_idx_scan,
+    pg_stat_get_parallellastscan(i.oid) AS last_parallel_idx_scan,
     pg_stat_get_tuples_returned(i.oid) AS idx_tup_read,
     pg_stat_get_tuples_fetched(i.oid) AS idx_tup_fetch
    FROM (((pg_class c
@@ -1784,9 +1786,13 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     c.relname,
     pg_stat_get_numscans(c.oid) AS seq_scan,
     pg_stat_get_lastscan(c.oid) AS last_seq_scan,
+    pg_stat_get_parallelnumscans(c.oid) AS parallel_seq_scan,
+    pg_stat_get_parallellastscan(c.oid) AS last_parallel_seq_scan,
     pg_stat_get_tuples_returned(c.oid) AS seq_tup_read,
     (sum(pg_stat_get_numscans(i.indexrelid)))::bigint AS idx_scan,
     max(pg_stat_get_lastscan(i.indexrelid)) AS last_idx_scan,
+    (sum(pg_stat_get_parallelnumscans(i.indexrelid)))::bigint AS parallel_idx_scan,
+    max(pg_stat_get_parallellastscan(i.indexrelid)) AS last_parallel_idx_scan,
     ((sum(pg_stat_get_tuples_fetched(i.indexrelid)))::bigint + pg_stat_get_tuples_fetched(c.oid)) AS idx_tup_fetch,
     pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
@@ -2157,6 +2163,8 @@ pg_stat_sys_indexes| SELECT relid,
     indexrelname,
     idx_scan,
     last_idx_scan,
+    parallel_idx_scan,
+    last_parallel_idx_scan,
     idx_tup_read,
     idx_tup_fetch
    FROM pg_stat_all_indexes
@@ -2166,9 +2174,13 @@ pg_stat_sys_tables| SELECT relid,
     relname,
     seq_scan,
     last_seq_scan,
+    parallel_seq_scan,
+    last_parallel_seq_scan,
     seq_tup_read,
     idx_scan,
     last_idx_scan,
+    parallel_idx_scan,
+    last_parallel_idx_scan,
     idx_tup_fetch,
     n_tup_ins,
     n_tup_upd,
@@ -2205,6 +2217,8 @@ pg_stat_user_indexes| SELECT relid,
     indexrelname,
     idx_scan,
     last_idx_scan,
+    parallel_idx_scan,
+    last_parallel_idx_scan,
     idx_tup_read,
     idx_tup_fetch
    FROM pg_stat_all_indexes
@@ -2214,9 +2228,13 @@ pg_stat_user_tables| SELECT relid,
     relname,
     seq_scan,
     last_seq_scan,
+    parallel_seq_scan,
+    last_parallel_seq_scan,
     seq_tup_read,
     idx_scan,
     last_idx_scan,
+    parallel_idx_scan,
+    last_parallel_idx_scan,
     idx_tup_fetch,
     n_tup_ins,
     n_tup_upd,
diff --git a/src/test/regress/expected/stats.out b/src/test/regress/expected/stats.out
index 56771f83ed..2fe229dd5a 100644
--- a/src/test/regress/expected/stats.out
+++ b/src/test/regress/expected/stats.out
@@ -764,6 +764,200 @@ FROM pg_stat_all_tables WHERE relid = 'test_last_scan'::regclass;
         2 | t      |        3 | t
 (1 row)
 
+-----
+-- Test that parallel_seq_scan, last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan are correctly maintained
+--
+-- We can't use a temporary table for parallel test. So, perform test using a permanent table,
+-- but disable autovacuum on it. That way autovacuum etc won't
+-- interfere. To be able to check that timestamps increase, we sleep for 100ms
+-- between tests, assuming that there aren't systems with a coarser timestamp
+-- granularity.
+-----
+BEGIN;
+CREATE TABLE test_parallel_scan(idx_col int primary key, noidx_col int) WITH (autovacuum_enabled = false);
+INSERT INTO test_parallel_scan(idx_col, noidx_col) SELECT i, i FROM generate_series(1, 10_000) i;
+COMMIT;
+VACUUM ANALYZE test_parallel_scan;
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush 
+--------------------------
+ 
+(1 row)
+
+SELECT parallel_seq_scan, last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+ parallel_seq_scan | last_parallel_seq_scan | parallel_idx_scan | last_parallel_idx_scan 
+-------------------+------------------------+-------------------+------------------------
+                 0 |                        |                 0 | 
+(1 row)
+
+-- ensure we start out with exactly one parallel index and parallel sequential scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO on;
+SET LOCAL enable_indexscan TO on;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+                     QUERY PLAN                      
+-----------------------------------------------------
+ Aggregate
+   ->  Gather
+         Workers Planned: 2
+         ->  Parallel Seq Scan on test_parallel_scan
+               Filter: (noidx_col = 1)
+(5 rows)
+
+SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+ count 
+-------
+     1
+(1 row)
+
+SET LOCAL enable_seqscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Finalize Aggregate
+   ->  Gather
+         Workers Planned: 1
+         ->  Partial Aggregate
+               ->  Parallel Index Scan using test_parallel_scan_pkey on test_parallel_scan
+                     Index Cond: ((idx_col >= 1) AND (idx_col <= 6000))
+(6 rows)
+
+SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+ count 
+-------
+  6000
+(1 row)
+
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush 
+--------------------------
+ 
+(1 row)
+
+COMMIT;
+-- fetch timestamps from before the next test
+SELECT parallel_seq_scan, parallel_idx_scan
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+ parallel_seq_scan | parallel_idx_scan 
+-------------------+-------------------
+                 3 |                 2
+(1 row)
+
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1); -- assume a minimum timestamp granularity of 100ms
+ pg_sleep 
+----------
+ 
+(1 row)
+
+-- cause one parallel sequential scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO on;
+SET LOCAL enable_indexscan TO off;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+                     QUERY PLAN                      
+-----------------------------------------------------
+ Aggregate
+   ->  Gather
+         Workers Planned: 2
+         ->  Parallel Seq Scan on test_parallel_scan
+               Filter: (noidx_col = 1)
+(5 rows)
+
+SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+ count 
+-------
+     1
+(1 row)
+
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush 
+--------------------------
+ 
+(1 row)
+
+COMMIT;
+-- check that just sequential scan stats were incremented
+SELECT parallel_seq_scan, :'test_last_seq' < last_parallel_seq_scan AS seq_ok,
+       parallel_idx_scan, :'test_last_idx' = last_parallel_idx_scan AS idx_ok
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+ parallel_seq_scan | seq_ok | parallel_idx_scan | idx_ok 
+-------------------+--------+-------------------+--------
+                 6 | t      |                 2 | t
+(1 row)
+
+-- fetch timestamps from before the next test
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1);
+ pg_sleep 
+----------
+ 
+(1 row)
+
+-- cause one parallel index scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO off;
+SET LOCAL enable_indexscan TO on;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Finalize Aggregate
+   ->  Gather
+         Workers Planned: 1
+         ->  Partial Aggregate
+               ->  Parallel Index Scan using test_parallel_scan_pkey on test_parallel_scan
+                     Index Cond: ((idx_col >= 1) AND (idx_col <= 6000))
+(6 rows)
+
+SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+ count 
+-------
+  6000
+(1 row)
+
+SELECT pg_stat_force_next_flush();
+ pg_stat_force_next_flush 
+--------------------------
+ 
+(1 row)
+
+COMMIT;
+-- check that just index scan stats were incremented
+SELECT parallel_seq_scan, :'test_last_seq' = last_parallel_seq_scan AS seq_ok,
+       parallel_idx_scan, :'test_last_idx' < last_parallel_idx_scan AS idx_ok
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+ parallel_seq_scan | seq_ok | parallel_idx_scan | idx_ok 
+-------------------+--------+-------------------+--------
+                 6 | t      |                 4 | t
+(1 row)
+
+-- fetch timestamps from before the next test
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1);
+ pg_sleep 
+----------
+ 
+(1 row)
+
 -----
 -- Test reset of some stats for shared table
 -----
diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql
index 7147cc2f89..5eb92fbf55 100644
--- a/src/test/regress/sql/stats.sql
+++ b/src/test/regress/sql/stats.sql
@@ -376,6 +376,98 @@ COMMIT;
 SELECT seq_scan, :'test_last_seq' = last_seq_scan AS seq_ok, idx_scan, :'test_last_idx' < last_idx_scan AS idx_ok
 FROM pg_stat_all_tables WHERE relid = 'test_last_scan'::regclass;
 
+-----
+-- Test that parallel_seq_scan, last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan are correctly maintained
+--
+-- We can't use a temporary table for parallel test. So, perform test using a permanent table,
+-- but disable autovacuum on it. That way autovacuum etc won't
+-- interfere. To be able to check that timestamps increase, we sleep for 100ms
+-- between tests, assuming that there aren't systems with a coarser timestamp
+-- granularity.
+-----
+
+BEGIN;
+CREATE TABLE test_parallel_scan(idx_col int primary key, noidx_col int) WITH (autovacuum_enabled = false);
+INSERT INTO test_parallel_scan(idx_col, noidx_col) SELECT i, i FROM generate_series(1, 10_000) i;
+COMMIT;
+
+VACUUM ANALYZE test_parallel_scan;
+
+SELECT pg_stat_force_next_flush();
+
+SELECT parallel_seq_scan, last_parallel_seq_scan, parallel_idx_scan, last_parallel_idx_scan
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+
+-- ensure we start out with exactly one parallel index and parallel sequential scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO on;
+SET LOCAL enable_indexscan TO on;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+SET LOCAL enable_seqscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+SELECT pg_stat_force_next_flush();
+COMMIT;
+
+-- fetch timestamps from before the next test
+SELECT parallel_seq_scan, parallel_idx_scan
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1); -- assume a minimum timestamp granularity of 100ms
+
+-- cause one parallel sequential scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO on;
+SET LOCAL enable_indexscan TO off;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+SELECT count(*) FROM test_parallel_scan WHERE noidx_col = 1;
+SELECT pg_stat_force_next_flush();
+COMMIT;
+-- check that just sequential scan stats were incremented
+SELECT parallel_seq_scan, :'test_last_seq' < last_parallel_seq_scan AS seq_ok,
+       parallel_idx_scan, :'test_last_idx' = last_parallel_idx_scan AS idx_ok
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+
+-- fetch timestamps from before the next test
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1);
+
+-- cause one parallel index scan
+BEGIN;
+SET LOCAL parallel_setup_cost TO 0;
+SET LOCAL min_parallel_table_scan_size TO '100kB';
+SET LOCAL min_parallel_index_scan_size TO '100kB';
+SET LOCAL enable_seqscan TO off;
+SET LOCAL enable_indexscan TO on;
+SET LOCAL enable_bitmapscan TO off;
+SET LOCAL enable_indexonlyscan TO off;
+EXPLAIN (COSTS off) SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+SELECT count(*) FROM test_parallel_scan WHERE idx_col BETWEEN 1 AND 6000;
+SELECT pg_stat_force_next_flush();
+COMMIT;
+-- check that just the parallel index scan stats were incremented
+SELECT parallel_seq_scan, :'test_last_seq' = last_parallel_seq_scan AS seq_ok,
+       parallel_idx_scan, :'test_last_idx' < last_parallel_idx_scan AS idx_ok
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass;
+
+-- fetch timestamps from before the next test
+SELECT last_parallel_seq_scan AS test_last_seq, last_parallel_idx_scan AS test_last_idx
+FROM pg_stat_all_tables WHERE relid = 'test_parallel_scan'::regclass \gset
+SELECT pg_sleep(0.1);
+
 -----
 -- Test reset of some stats for shared table
 -----
-- 
2.46.2

#11Alena Rybakina
a.rybakina@postgrespro.ru
In reply to: Guillaume Lelarge (#10)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi!

Finally found some time to work on this. Tests added on v5 patch
(attached).

Maybe I'm not aware of the whole context of the thread and maybe my
questions will seem a bit stupid, but honestly it's not entirely clear
to me how these statistics will help to adjust the number of parallel
workers. We may have situations where a cardinality overestimation
during query optimization causes more parallel workers to be launched
than justified, and vice versa, where due to a high workload only a few
workers can be launched. How do we identify such cases, so that we
don't increase or decrease the number of parallel workers when it is
not necessary?

--
Regards,
Alena Rybakina
Postgres Professional

#12Michael Paquier
michael@paquier.xyz
In reply to: Alena Rybakina (#11)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Mon, Oct 07, 2024 at 12:43:18AM +0300, Alena Rybakina wrote:

Maybe I'm not aware of the whole context of the thread and maybe my
questions will seem a bit stupid, but honestly it's not entirely clear
to me how these statistics will help to adjust the number of parallel
workers. We may have situations where a cardinality overestimation
during query optimization causes more parallel workers to be launched
than justified, and vice versa, where due to a high workload only a few
workers can be launched. How do we identify such cases, so that we
don't increase or decrease the number of parallel workers when it is
not necessary?

Well. For spiky workloads, these numbers alone are not going to help.
They become useful if you can map them to the number of times queries
related to these tables have been called, which is something
pg_stat_statements can show more information about.
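
Something like this rough sketch could be a starting point (untested; it
just matches the normalized query text against the table name, which is
only approximate):

-- statements touching t1, with how often they ran (approximate text match)
SELECT calls, total_exec_time, query
FROM pg_stat_statements
WHERE query ILIKE '%t1%'
ORDER BY calls DESC
LIMIT 10;

-- the table-level counters proposed here, for comparison
SELECT seq_scan, parallel_seq_scan, idx_scan, parallel_idx_scan
FROM pg_stat_all_tables
WHERE relid = 't1'::regclass;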

FWIW, I have doubts that these numbers attached to this portion of the
system are always useful. For OLTP workloads, parallel workers are
unlikely to be spawned because, even with JOINs, we won't work with a
high enough number of tuples to require them. This could be interesting
for analytics, but complex query sequences mean that we'd still need
to look at all the plans involving the relations where there is an
imbalance between planned and spawned workers, because these can
usually involve quite a few Gather nodes. At the end of the day, it
seems to me that we would still need statement-level data to track
down specific plans that are starving. If your application does not
have that many statements, looking at individual plans is OK, but if
you have hundreds of them to dig into, this is time-consuming, and
stats at table/index level don't show what stands out and needs
adjustment.

And that is without the argument of bloating the stats entries further
for each table, even if that matters less now that these stats live in
shared memory.
--
Michael

#13Guillaume Lelarge
guillaume@lelarge.info
In reply to: Michael Paquier (#12)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Mon, Oct 7, 2024 at 02:41, Michael Paquier <michael@paquier.xyz> wrote:

On Mon, Oct 07, 2024 at 12:43:18AM +0300, Alena Rybakina wrote:

Maybe I'm not aware of the whole context of the thread and maybe my
questions will seem a bit stupid, but honestly it's not entirely clear
to me how these statistics will help to adjust the number of parallel
workers. We may have situations where a cardinality overestimation
during query optimization causes more parallel workers to be launched
than justified, and vice versa, where due to a high workload only a few
workers can be launched. How do we identify such cases, so that we
don't increase or decrease the number of parallel workers when it is
not necessary?

Well. For spiky workloads, these numbers alone are not going to help.
They become useful if you can map them to the number of times queries
related to these tables have been called, which is something
pg_stat_statements can show more information about.

FWIW, I have doubts that these numbers attached to this portion of the
system are always useful. For OLTP workloads, parallel workers are
unlikely to be spawned because, even with JOINs, we won't work with a
high enough number of tuples to require them. This could be interesting
for analytics, but complex query sequences mean that we'd still need
to look at all the plans involving the relations where there is an
imbalance between planned and spawned workers, because these can
usually involve quite a few Gather nodes. At the end of the day, it
seems to me that we would still need statement-level data to track
down specific plans that are starving. If your application does not
have that many statements, looking at individual plans is OK, but if
you have hundreds of them to dig into, this is time-consuming, and
stats at table/index level don't show what stands out and needs
adjustment.

And that is without the argument of bloating the stats entries further
for each table, even if that matters less now that these stats live in
shared memory.

We need granularity because we have granularity in the config. There is
pg_stat_database because it gives the whole picture and it is easier to
monitor. And then, there is pg_stat_statements to analyze problematic
statements. And finally there is pg_stat_all* because you can set
parallel_workers on a specific table.
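
For reference, acting on that granularity would look something like this
(illustrative only; "events" is a made-up table name):

-- a table whose parallel_seq_scan keeps growing might deserve more workers
ALTER TABLE events SET (parallel_workers = 4);

-- and a table that never benefits from parallelism can opt out entirely
ALTER TABLE events SET (parallel_workers = 0);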

Anyway, offering various ways of getting the same information is not
unheard of. Pretty much like temp_files/temp_bytes in pg_stat_database,
temp_blks_read/temp_blks_written in pg_stat_statements and log_temp_files
in log files if you ask me :)

--
Guillaume.

#14Alena Rybakina
a.rybakina@postgrespro.ru
In reply to: Michael Paquier (#12)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On 07.10.2024 03:41, Michael Paquier wrote:

On Mon, Oct 07, 2024 at 12:43:18AM +0300, Alena Rybakina wrote:

Maybe I'm not aware of the whole context of the thread and maybe my
questions will seem a bit stupid, but honestly it's not entirely clear
to me how these statistics will help to adjust the number of parallel
workers. We may have situations where a cardinality overestimation
during query optimization causes more parallel workers to be launched
than justified, and vice versa, where due to a high workload only a few
workers can be launched. How do we identify such cases, so that we
don't increase or decrease the number of parallel workers when it is
not necessary?

Well. For spiky workloads, these numbers alone are not going to help.
They become useful if you can map them to the number of times queries
related to these tables have been called, which is something
pg_stat_statements can show more information about.

FWIW, I have doubts that these numbers attached to this portion of the
system are always useful. For OLTP workloads, parallel workers are
unlikely to be spawned because, even with JOINs, we won't work with a
high enough number of tuples to require them. This could be interesting
for analytics, but complex query sequences mean that we'd still need
to look at all the plans involving the relations where there is an
imbalance between planned and spawned workers, because these can
usually involve quite a few Gather nodes. At the end of the day, it
seems to me that we would still need statement-level data to track
down specific plans that are starving. If your application does not
have that many statements, looking at individual plans is OK, but if
you have hundreds of them to dig into, this is time-consuming, and
stats at table/index level don't show what stands out and needs
adjustment.

And that is without the argument of bloating the stats entries further
for each table, even if that matters less now that these stats live in
shared memory.

To be honest, it's not entirely clear to me how these statistics will
help in tuning parallel workers.

As I understand it, we need additional analytics, which are available
in pg_stat_statements, but how would that work in practice? Could you
demonstrate it?

--
Regards,
Alena Rybakina
Postgres Professional

#15Alena Rybakina
a.rybakina@postgrespro.ru
In reply to: Guillaume Lelarge (#13)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On 07.10.2024 09:34, Guillaume Lelarge wrote:

We need granularity because we have granularity in the config. There
is pg_stat_database because it gives the whole picture and it is
easier to monitor. And then, there is pg_stat_statements to analyze
problematic statements. And finally there is pg_stat_all* because you
can set parallel_workers on a specific table.

Yes, I agree with you. Even when I experimented with vacuum settings for
a database and used my vacuum statistics patch [0]https://commitfest.postgresql.org/50/5012/ for the analysis, I
first looked at the change in the number of blocks or deleted rows at
the database level, and only then analyzed each table and index.

[0]: https://commitfest.postgresql.org/50/5012/

--
Regards,
Alena Rybakina
Postgres Professional

#16Michael Paquier
michael@paquier.xyz
In reply to: Alena Rybakina (#15)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Tue, Oct 08, 2024 at 06:24:54PM +0300, Alena Rybakina wrote:

Yes, I agree with you. Even when I experimented with vacuum settings for
a database and used my vacuum statistics patch [0] for the analysis, I
first looked at the change in the number of blocks or deleted rows at
the database level, and only then analyzed each table and index.

[0] https://commitfest.postgresql.org/50/5012/

As hinted on other related threads like around [1]/messages/by-id/Zywxw7vqPLBfVfXN@paquier.xyz, I am so-so about
the proposal of these numbers at table and index level now that we
have e7a9496de906 and 5d4298e75f25.
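
With those two commits, spotting worker starvation could look roughly
like this (a sketch, assuming the parallel_workers_to_launch and
parallel_workers_launched columns they added to pg_stat_statements and
pg_stat_database):

-- database-wide ratio of launched vs planned parallel workers
SELECT datname, parallel_workers_to_launch, parallel_workers_launched
FROM pg_stat_database
WHERE datname = current_database();

-- statements that asked for more workers than they actually got
SELECT query, parallel_workers_to_launch, parallel_workers_launched
FROM pg_stat_statements
WHERE parallel_workers_launched < parallel_workers_to_launch
ORDER BY parallel_workers_to_launch - parallel_workers_launched DESC
LIMIT 10;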

In such cases, I apply the concept that I call the "Mention Bien" (or
when you get a baccalaureat diploma with honors and with a 14~16/20 in
France). What we have is not perfect, still it's good enough to get
a 14/20 IMO, making hopefully 70~80% of users happy with these new
metrics. Perhaps I'm wrong, but I'd be curious to know if this
thread's proposal is required at all at the end.

I have not looked at the logging proposal yet.

[1]: /messages/by-id/Zywxw7vqPLBfVfXN@paquier.xyz
--
Michael

#17Guillaume Lelarge
guillaume@lelarge.info
In reply to: Michael Paquier (#16)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Mon, Nov 11, 2024 at 03:05, Michael Paquier <michael@paquier.xyz> wrote:

On Tue, Oct 08, 2024 at 06:24:54PM +0300, Alena Rybakina wrote:

Yes, I agree with you. Even when I experimented with vacuum settings for
a database and used my vacuum statistics patch [0] for the analysis, I
first looked at the change in the number of blocks or deleted rows at
the database level, and only then analyzed each table and index.

[0] https://commitfest.postgresql.org/50/5012/

As hinted on other related threads like around [1], I am so-so about
the proposal of these numbers at table and index level now that we
have e7a9496de906 and 5d4298e75f25.

In such cases, I apply the concept that I call the "Mention Bien" (or
when you get a baccalaureat diploma with honors and with a 14~16/20 in
France). What we have is not perfect, still it's good enough to get
a 14/20 IMO, making hopefully 70~80% of users happy with these new
metrics. Perhaps I'm wrong, but I'd be curious to know if this
thread's proposal is required at all at the end.

I agree with you. We'll see if we need more, but it's already good to have
these metrics committed.

I have not looked at the logging proposal yet.

I hope you'll have time to look at it. It seems to me very important to get
that kind of info in the logs.

Thanks again.

[1]: /messages/by-id/Zywxw7vqPLBfVfXN@paquier.xyz
--
Michael

--
Guillaume.

#18Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#16)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Sun, Nov 10, 2024 at 9:05 PM Michael Paquier <michael@paquier.xyz> wrote:

As hinted on other related threads like around [1], I am so-so about
the proposal of these numbers at table and index level now that we
have e7a9496de906 and 5d4298e75f25.

I think the question to which we don't have a clear answer is: for
what purpose would you want to be able to distinguish parallel and
non-parallel scans on a per-table basis?

I think it's fairly clear why the existing counters exist at a table
level. If an index isn't getting very many index scans, perhaps it's
useless -- or worse than useless if it interferes with HOT -- and
should be dropped. On the other hand if a table is getting a lot of
sequential scans even though it happens to be quite large, perhaps new
indexes are needed. Or if the indexes that we expect to get used are
not the same as those actually getting used, perhaps we want to add or
drop indexes or adjust the queries.
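
The sort of checks people typically run on the existing counters look
something like this (illustrative queries, not from the thread):

-- indexes that are never scanned, largest first (candidates for dropping)
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

-- large tables read mostly through sequential scans (candidates for indexing)
SELECT relname, seq_scan, idx_scan,
       pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
WHERE seq_scan > 10 * coalesce(idx_scan, 0)
ORDER BY pg_relation_size(relid) DESC
LIMIT 10;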

But it is unclear to me what sort of tuning we would do based on
knowing how many of the scans on a certain table or a certain index
were parallel vs non-parallel. I have not fully reviewed the threads
linked in the original post; but I did look at them briefly and did
not immediately see discussion of the specific counters proposed here.
I also don't see anything in this thread that clearly explains why we
should want this exact thing. I don't want to make it sound like I
know that this is useless; I'm sure that Guillaume probably has lots
of hands-on tuning experience with this stuff that I lack. But the
reasons aren't clearly spelled out as far as I can see, and I'm having
some trouble imagining what they are.

Compare the parallel worker draught stuff. It's really clear how that
is intended to be used. If we're routinely failing to launch workers,
then either max_parallel_workers_per_gather is too high or
max_parallel_workers is too low. Now, I will admit that I have a few
doubts about whether that feature will get much real-world use but it
seems hard to doubt that it HAS a use. In this case, that seems less
clear.
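
In other words, the intended workflow there is roughly this kind of
hypothetical session (the values are placeholders):

-- see the current per-gather request and the global worker budget
SHOW max_parallel_workers_per_gather;
SHOW max_parallel_workers;

-- if workers routinely fail to launch, either raise the global budget ...
ALTER SYSTEM SET max_parallel_workers = 16;
SELECT pg_reload_conf();

-- ... or lower the per-gather request
ALTER SYSTEM SET max_parallel_workers_per_gather = 2;
SELECT pg_reload_conf();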

--
Robert Haas
EDB: http://www.enterprisedb.com

#19Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#18)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

On Mon, Nov 11, 2024 at 11:06:43AM -0500, Robert Haas wrote:

But it is unclear to me what sort of tuning we would do based on
knowing how many of the scans on a certain table or a certain index
were parallel vs non-parallel. I have not fully reviewed the threads
linked in the original post; but I did look at them briefly and did
not immediately see discussion of the specific counters proposed here.
I also don't see anything in this thread that clearly explains why we
should want this exact thing. I don't want to make it sound like I
know that this is useless; I'm sure that Guillaume probably has lots
of hands-on tuning experience with this stuff that I lack. But the
reasons aren't clearly spelled out as far as I can see, and I'm having
some trouble imagining what they are.

Thanks for the summary. My main worry is that these counters are hard
to act on for tuning when aggregated at relation level (Guillaume,
feel free to counter-argue!). The main point that comes to mind is
that single table scans would mostly be involved in OLTP workloads
or simple joins, where parallel workers are of little use. This could
be much more interesting for analytical-ish workloads with more
complex plan patterns where one or more Gather or GatherMerge nodes
are involved. Still, even in this case I suspect that most users will
end up looking at plan patterns, and that these counters added for
indexes or tables would have a limited impact in the end.
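
Looking at plan patterns usually goes through something like
auto_explain rather than per-relation counters, for example
(illustrative settings):

-- log the plans of slow queries in this session, including parallel nodes
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';
SET auto_explain.log_analyze = on;
-- Gather / Gather Merge nodes then show up in the logs with
-- "Workers Planned" and "Workers Launched"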
--
Michael

#20Bertrand Drouvot
bertranddrouvot.pg@gmail.com
In reply to: Michael Paquier (#19)
Re: Add parallel columns for seq scan and index scan on pg_stat_all_tables and _indexes

Hi,

On Tue, Nov 12, 2024 at 12:41:19PM +0900, Michael Paquier wrote:

On Mon, Nov 11, 2024 at 11:06:43AM -0500, Robert Haas wrote:

But it is unclear to me what sort of tuning we would do based on
knowing how many of the scans on a certain table or a certain index
were parallel vs non-parallel. I have not fully reviewed the threads
linked in the original post; but I did look at them briefly and did
not immediately see discussion of the specific counters proposed here.
I also don't see anything in this thread that clearly explains why we
should want this exact thing. I don't want to make it sound like I
know that this is useless; I'm sure that Guillaume probably has lots
of hands-on tuning experience with this stuff that I lack. But the
reasons aren't clearly spelled out as far as I can see, and I'm having
some trouble imagining what they are.

Thanks for the summary. My main worry is that these counters are hard
to act on for tuning when aggregated at relation level (Guillaume,
feel free to counter-argue!). The main point that comes to mind is
that single table scans would mostly be involved in OLTP workloads
or simple joins, where parallel workers are of little use. This could
be much more interesting for analytical-ish workloads with more
complex plan patterns where one or more Gather or GatherMerge nodes
are involved. Still, even in this case I suspect that most users will
end up looking at plan patterns, and that these counters added for
indexes or tables would have a limited impact in the end.

While working on flushing stats outside of transaction boundaries (patch not
shared yet but linked to [1]/messages/by-id/aVvgJu0BhnmzBWZ1@ip-10-97-1-34.eu-west-3.compute.internal), I realized that parallel workers could lead to
incomplete and misleading statistics. Indeed, they update "their" relation
stats during their shutdown regardless of the "main" transaction status.

It means that, for example, stats like seq_scan, last_seq_scan and seq_tup_read
are updated by the parallel workers during their shutdown while the main
transaction has not finished. The stats are then somewhat incomplete, because
the leader has not updated its own stats yet, and that can be misleading. A
patch like this one could help to address that: for example, parallel workers
could update the dedicated parallel_* stats and leave updating the
non-parallel_* stats to the leader when the transaction finishes. That would
make the non-parallel_* stats consistent whether parallel workers are used
or not.

Thoughts?

[1]: /messages/by-id/aVvgJu0BhnmzBWZ1@ip-10-97-1-34.eu-west-3.compute.internal

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com