Re: Add index scan progress to pg_stat_progress_vacuum

Started by Imseih (AWS), Sami over 3 years ago · 1 message
#1 Imseih (AWS), Sami
simseih@amazon.com
1 attachment(s)

One idea would be to add a flag, say report_parallel_vacuum_progress,
to the IndexVacuumInfo struct and expect the index AM to check it and
update the parallel index vacuum progress, say, every 1GB of blocks
processed. The flag is true only when the leader process is vacuuming an index.
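For illustration, here is a minimal, self-contained C sketch of that suggestion. The struct, field placement, and reporting call are hypothetical stand-ins for IndexVacuumInfo and pgstat_progress_update_param(), not the actual PostgreSQL definitions; only the flag name and the 1GB granularity come from the suggestion itself:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical stand-ins for IndexVacuumInfo and the progress-update
 * call; not the actual PostgreSQL definitions.
 */
#define REPORT_GRANULARITY 131072	/* blocks per 1GB at an 8kB block size */

typedef struct IndexVacuumInfoSketch
{
	bool		report_parallel_vacuum_progress;	/* true only in the leader */
} IndexVacuumInfoSketch;

static uint32_t last_reported_blkno;	/* stand-in for a progress counter */

/*
 * Scan nblocks index blocks, reporting progress every
 * REPORT_GRANULARITY blocks when the flag is set.  Returns the
 * number of progress reports made.
 */
static int
scan_index(const IndexVacuumInfoSketch *info, uint32_t nblocks)
{
	int			nreports = 0;

	for (uint32_t blkno = 1; blkno <= nblocks; blkno++)
	{
		/* ... vacuum one index block here ... */
		if (info->report_parallel_vacuum_progress &&
			blkno % REPORT_GRANULARITY == 0)
		{
			/* would be a pgstat_progress_update_param() call */
			last_reported_blkno = blkno;
			nreports++;
		}
	}
	return nreports;
}
```

Because the flag is false in workers, only the leader pays the (tiny) cost of the modulus check actually leading to a report.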

Sorry for the long delay on this. I have taken the approach as suggested
by Sawada-san and Robert and attached is v12.

1. The patch introduces a new counter in the shared memory already
used by the parallel leader and workers to keep track of the number
of indexes completed. This way there is no reason to loop through
the index status every time we want the number of indexes completed.
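As a rough sketch of that design (plain C11 atomics standing in for PostgreSQL's pg_atomic_uint32 API, and illustrative names rather than the patch's actual identifiers): each process bumps one shared counter when it finishes an index, so the leader can read the completed count in O(1) instead of scanning the per-index status array.

```c
#include <stdatomic.h>

/*
 * Illustrative analogue of the counter added to the shared parallel
 * vacuum state; plain C11 atomics stand in for pg_atomic_uint32.
 */
typedef struct SharedVacStateSketch
{
	atomic_uint idx_completed;	/* number of indexes fully processed */
} SharedVacStateSketch;

/* A worker (or the leader) calls this after finishing one index. */
static void
finished_one_index(SharedVacStateSketch *shared)
{
	atomic_fetch_add(&shared->idx_completed, 1);
}

/*
 * The leader reads the counter directly; no need to walk the
 * per-index status array to count completed entries.
 */
static unsigned
read_indexes_completed(SharedVacStateSketch *shared)
{
	return atomic_load(&shared->idx_completed);
}
```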

2. A new function in vacuumparallel.c will be used to update
the progress of indexes completed by reading from the
counter created in point #1.

3. The function is called from vacuum_delay_point as a
matter of convenience, since that is called in all major vacuum
loops. The function will only do anything if the caller
sets a boolean to report progress. Doing so also ensures
progress is reported in case the parallel workers complete
before the leader.

4. Rather than adding any complexity to WaitForParallelWorkersToFinish
and introducing a new callback, vacuumparallel.c will wait until
the number of active vacuum workers is 0 and then proceed to call
WaitForParallelWorkersToFinish as it does now.
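A self-contained sketch of that wait loop, using pthreads and C11 atomics in place of PostgreSQL's parallel-worker and pg_atomic machinery (all names here are illustrative): the leader spins on the active-worker count, refreshing the reported progress on each pass, and only then performs the full join, as WaitForParallelWorkersToFinish still must.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Illustrative analogues of the shared parallel-vacuum state. */
static atomic_uint active_nworkers;
static atomic_uint indexes_completed;
static unsigned last_reported;	/* stand-in for the progress view */

static void *
worker_main(void *arg)
{
	(void) arg;
	atomic_fetch_add(&indexes_completed, 1);	/* "vacuum" one index */
	atomic_fetch_sub(&active_nworkers, 1);		/* mark ourselves idle */
	return NULL;
}

/*
 * Leader side of point #4: spin until no workers are active,
 * refreshing the progress report on each pass, then join every
 * worker anyway (the real code must still call
 * WaitForParallelWorkersToFinish).  Returns the final count.
 */
static unsigned
leader_wait_and_report(pthread_t *workers, int nworkers)
{
	while (atomic_load(&active_nworkers) > 0)
		last_reported = atomic_load(&indexes_completed);

	for (int i = 0; i < nworkers; i++)
		pthread_join(workers[i], NULL);

	last_reported = atomic_load(&indexes_completed);
	return last_reported;
}
```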

5. Went back to the idea of adding a new view called pg_stat_progress_vacuum_index,
which is accomplished by adding a new type called VACUUM_PARALLEL in progress.h.

Thanks,

Sami Imseih
Amazon Web Services (AWS)

Attachments:

v12-0001--Show-progress-for-index-vacuums.patch (application/octet-stream)
From fd394f0bf01406f850206a6c4a81ff187a685a69 Mon Sep 17 00:00:00 2001
From: "Imseih (AWS)" <simseih@88665a22795f.ant.amazon.com>
Date: Mon, 10 Oct 2022 11:22:25 -0500
Subject: [PATCH v12 1/1] Add 2 new columns to pg_stat_progress_vacuum. The
 columns are indexes_total as the total indexes to be vacuumed or cleaned and
 indexes_processed as the number of indexes vacuumed or cleaned up so far.

Also, introduce a new view called pg_stat_progress_vacuum_index that
exposes the current index being vacuumed.

Author: Sami Imseih, based on suggestions by Nathan Bossart, Peter Geoghegan and Masahiko Sawada
Reviewed by: Nathan Bossart, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/5478DFCD-2333-401A-B2F0-0D186AB09228@amazon.com
---
 doc/src/sgml/monitoring.sgml          |  92 ++++++++++++++++++++++
 doc/src/sgml/ref/vacuum.sgml          |   8 +-
 src/backend/access/heap/vacuumlazy.c  |  44 ++++++++++-
 src/backend/catalog/system_views.sql  |  20 ++++-
 src/backend/commands/vacuum.c         |   6 ++
 src/backend/commands/vacuumparallel.c | 106 +++++++++++++++++++++++++-
 src/backend/utils/adt/pgstatfuncs.c   |   2 +
 src/include/commands/progress.h       |   4 +
 src/include/commands/vacuum.h         |   2 +
 src/include/utils/backend_progress.h  |   7 +-
 src/test/regress/expected/rules.out   |  17 ++++-
 11 files changed, 301 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 342b20ebeb..473c76e6e8 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -392,6 +392,15 @@ postgres   27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: ser
       </entry>
      </row>
 
+     <row>
+      <entry><structname>pg_stat_progress_vacuum_index</structname><indexterm><primary>pg_stat_progress_vacuum_index</primary>
+       </indexterm>
+      </entry>
+      <entry>One row for each backend (including autovacuum worker processes) performing the <literal>vacuuming indexes</literal>
+       or <literal>cleaning up indexes</literal> phase of a <command>VACUUM</command>.
+      </entry>
+     </row>
+
      <row>
       <entry><structname>pg_stat_progress_cluster</structname><indexterm><primary>pg_stat_progress_cluster</primary></indexterm></entry>
       <entry>One row for each backend running
@@ -6414,6 +6423,89 @@ FROM pg_stat_get_backend_idset() AS backendid;
        Number of dead tuples collected since the last index vacuum cycle.
       </para></entry>
      </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>indexes_total</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of indexes that will be vacuumed. This value will be
+       <literal>0</literal> if there are no indexes to vacuum or
+       vacuum failsafe is triggered. See <xref linkend="guc-vacuum-failsafe-age"/>
+       for more on vacuum failsafe.
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>indexes_completed</structfield> <type>bigint</type>
+      </para>
+      <para>
+       Number of indexes vacuumed in the current vacuum cycle.
+      </para></entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <indexterm>
+   <primary>pg_stat_progress_vacuum_index</primary>
+  </indexterm>
+
+  <para>
+   Whenever <command>VACUUM</command> is running, the
+   <structname>pg_stat_progress_vacuum_index</structname> view will contain
+   one row for each backend (including autovacuum worker processes) performing
+   the <literal>vacuuming indexes</literal> or <literal>cleaning up indexes</literal>
+   phase of a <command>VACUUM</command>.
+  </para>
+
+  <table id="pg-stat-progress-vacuum-view_index" xreflabel="pg_stat_progress_vacuum_index">
+   <title><structname>pg_stat_progress_vacuum_index</structname> View</title>
+   <tgroup cols="1">
+    <thead>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       Column Type
+      </para>
+      <para>
+       Description
+      </para></entry>
+     </row>
+    </thead>
+
+    <tbody>
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pid</structfield> <type>integer</type>
+      </para>
+      <para>
+       Process ID of backend.
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>leader_pid</structfield> <type>integer</type>
+      </para>
+      <para>
+       Process ID of the leader backend in a parallel <command>VACUUM</command>. This value
+       will match the <structfield>pid</structfield> value whenever the leader
+       is processing an index or the <command>VACUUM</command> is not using parallel workers.
+       This field can be joined to the <structfield>pid</structfield> column
+       of <structname>pg_stat_progress_vacuum</structname> to get more details about
+       the <command>VACUUM</command>.
+      </para></entry>
+     </row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>indexrelid</structfield> <type>integer</type>
+      </para>
+      <para>
+       OID of the index being processed in the current vacuum phase.
+      </para></entry>
+     </row>
     </tbody>
    </tgroup>
   </table>
diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index c582021d29..0c08d9ac6d 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -411,7 +411,11 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
    <para>
     Each backend running <command>VACUUM</command> without the
     <literal>FULL</literal> option will report its progress in the
-    <structname>pg_stat_progress_vacuum</structname> view. Backends running
+    <structname>pg_stat_progress_vacuum</structname> view. Whenever a
+    <command>VACUUM</command> is in the <literal>vacuuming indexes</literal>
+    or <literal>cleaning up indexes</literal> phase,
+    the current index being processed is reported in
+    <structname>pg_stat_progress_vacuum_index</structname>. Backends running
     <command>VACUUM FULL</command> will instead report their progress in the
     <structname>pg_stat_progress_cluster</structname> view. See
     <xref linkend="vacuum-progress-reporting"/> and
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index dfbe37472f..b057d18dd9 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -350,8 +350,10 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 		}
 	}
 
+	/* start the vacuum progress command and report the leader pid. */
 	pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM,
 								  RelationGetRelid(rel));
+	pgstat_progress_update_param(PROGRESS_VACUUM_LEADER_PID, MyProcPid);
 
 	/*
 	 * Get OldestXmin cutoff, which is used to determine which deleted tuples
@@ -420,6 +422,10 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
 	vacrel->rel = rel;
 	vac_open_indexes(vacrel->rel, RowExclusiveLock, &vacrel->nindexes,
 					 &vacrel->indrels);
+
+	/* report number of indexes to vacuum */
+	pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_TOTAL, vacrel->nindexes);
+
 	if (instrument && vacrel->nindexes > 0)
 	{
 		/* Copy index names used by instrumentation (not error reporting) */
@@ -2337,10 +2343,21 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 			Relation	indrel = vacrel->indrels[idx];
 			IndexBulkDeleteResult *istat = vacrel->indstats[idx];
 
+			/* report the index relid being vacuumed */
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, RelationGetRelid(indrel));
+
 			vacrel->indstats[idx] =
 				lazy_vacuum_one_index(indrel, istat, vacrel->old_live_tuples,
 									  vacrel);
 
+			/*
+			 * Done vacuuming an index.
+			 * Increment the count of indexes completed and reset the index relid to 0.
+			 */
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,
+										 idx + 1);
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, 0);
+
 			if (lazy_check_wraparound_failsafe(vacrel))
 			{
 				/* Wraparound emergency -- end current index scan */
@@ -2384,6 +2401,13 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	pgstat_progress_update_param(PROGRESS_VACUUM_NUM_INDEX_VACUUMS,
 								 vacrel->num_index_scans);
 
+	/*
+	 * Reset the indexes-completed counter at this point.
+	 * If we end up in another index vacuum cycle, we will
+	 * start counting again from zero.
+	 */
+	pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED, 0);
+
 	return allindexes;
 }
 
@@ -2633,10 +2657,17 @@ lazy_check_wraparound_failsafe(LVRelState *vacrel)
 	{
 		vacrel->failsafe_active = true;
 
-		/* Disable index vacuuming, index cleanup, and heap rel truncation */
+		/*
+		 * Disable index vacuuming, index cleanup, and heap rel truncation
+		 *
+		 * Also, reset the progress counters since we are no longer
+		 * tracking index vacuum/cleanup.
+		 */
 		vacrel->do_index_vacuuming = false;
 		vacrel->do_index_cleanup = false;
 		vacrel->do_rel_truncate = false;
+		pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_TOTAL, 0);
+		pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED, 0);
 
 		ereport(WARNING,
 				(errmsg("bypassing nonessential maintenance of table \"%s.%s.%s\" as a failsafe after %d index scans",
@@ -2681,9 +2712,20 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 			Relation	indrel = vacrel->indrels[idx];
 			IndexBulkDeleteResult *istat = vacrel->indstats[idx];
 
+			/* report the index relid being cleaned */
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, RelationGetRelid(indrel));
+
 			vacrel->indstats[idx] =
 				lazy_cleanup_one_index(indrel, istat, reltuples,
 									   estimated_count, vacrel);
+
+			/*
+			 * Done cleaning an index.
+			 * Increment the count of indexes completed and reset the index relid to 0.
+			 */
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,
+										 idx + 1);
+			pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, 0);
 		}
 	}
 	else
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 55f7ec79e0..5b5e7b4080 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1162,10 +1162,28 @@ CREATE VIEW pg_stat_progress_vacuum AS
                       END AS phase,
         S.param2 AS heap_blks_total, S.param3 AS heap_blks_scanned,
         S.param4 AS heap_blks_vacuumed, S.param5 AS index_vacuum_count,
-        S.param6 AS max_dead_tuples, S.param7 AS num_dead_tuples
+        S.param6 AS max_dead_tuples, S.param7 AS num_dead_tuples,
+        S.param10 AS indexes_total, S.param11 AS indexes_completed
     FROM pg_stat_get_progress_info('VACUUM') AS S
         LEFT JOIN pg_database D ON S.datid = D.oid;
 
+CREATE VIEW pg_stat_progress_vacuum_index AS
+    SELECT
+        S.pid AS pid,
+        S.param9 AS leader_pid,
+        S.param8 AS indexrelid
+    FROM pg_stat_get_progress_info('VACUUM') AS S
+        LEFT JOIN pg_database D ON S.datid = D.oid
+    WHERE S.param1 in (2, 4) AND S.param8 > 0
+    UNION ALL
+    SELECT
+        S.pid AS pid,
+        S.param9 AS leader_pid,
+        S.param8 AS indexrelid
+    FROM pg_stat_get_progress_info('VACUUM_PARALLEL') AS S
+        LEFT JOIN pg_database D ON S.datid = D.oid
+    WHERE S.param1 in (2, 4) AND S.param8 > 0;
+
 CREATE VIEW pg_stat_progress_cluster AS
     SELECT
         S.pid AS pid,
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 7ccde07de9..19a2692704 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2173,12 +2173,18 @@ vac_close_indexes(int nindexes, Relation *Irel, LOCKMODE lockmode)
  *
  * This should be called in each major loop of VACUUM processing,
  * typically once per page processed.
+ *
+ * NOTE: For convenience, parallel_vacuum_progress_report() is called
+ * here so the leader can report the number of indexes vacuumed
+ * while inside all the major VACUUM loops.
  */
 void
 vacuum_delay_point(void)
 {
 	double		msec = 0;
 
+	parallel_vacuum_progress_report();
+
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index f26d796e52..65a71cbac9 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -30,6 +30,7 @@
 #include "access/table.h"
 #include "access/xact.h"
 #include "catalog/index.h"
+#include "commands/progress.h"
 #include "commands/vacuum.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
@@ -103,6 +104,20 @@ typedef struct PVShared
 
 	/* Counter for vacuuming and cleanup */
 	pg_atomic_uint32 idx;
+
+	/*
+	 * Counter for vacuuming and cleanup progress reporting.
+	 * This value is used to report index vacuum/cleanup progress
+	 * in parallel_vacuum_progress_report. We keep this
+	 * counter to avoid having to loop through
+	 * ParallelVacuumState->indstats to determine the number
+	 * of indexes completed.
+	 */
+	pg_atomic_uint32 idx_completed_progress;
+
+	/* track the leader pid of a parallel vacuum */
+	int leader_pid;
+
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -214,6 +229,9 @@ static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_inde
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
 
+static pg_atomic_uint32 *index_vacuum_completed = NULL;
+static bool report_parallel_vacuum_progress = false;
+
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
  * shared memory state.
@@ -364,6 +382,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->cost_balance), 0);
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
+	pg_atomic_init_u32(&(shared->idx_completed_progress), 0);
+
+	shared->leader_pid = MyProcPid;
 
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
@@ -618,8 +639,9 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 													vacuum));
 	}
 
-	/* Reset the parallel index processing counter */
+	/* Reset the parallel index processing and progress counters */
 	pg_atomic_write_u32(&(pvs->shared->idx), 0);
+	pg_atomic_write_u32(&(pvs->shared->idx_completed_progress), 0);
 
 	/* Setup the shared cost-based vacuum delay and launch workers */
 	if (nworkers > 0)
@@ -657,6 +679,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/*
+			 * If we are launching a parallel vacuum/cleanup,
+		 * set up progress tracking.
+			 */
+			report_parallel_vacuum_progress = true;
+			index_vacuum_completed = &(pvs->shared->idx_completed_progress);
 		}
 
 		if (vacuum)
@@ -682,13 +711,36 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	parallel_vacuum_process_safe_indexes(pvs);
 
+	/*
+	 * In case the leader completes vacuuming all
+	 * of its indexes before the parallel workers
+	 * do, it spins here waiting for the number of
+	 * active workers to drop to zero, while still
+	 * reporting the progress of the indexes being
+	 * vacuumed by the workers.
+	 */
+	if (VacuumActiveNWorkers)
+	{
+		while (pg_atomic_read_u32(VacuumActiveNWorkers) > 0)
+		{
+			parallel_vacuum_progress_report();
+		}
+	}
+
 	/*
 	 * Next, accumulate buffer and WAL usage.  (This must wait for the workers
 	 * to finish, or we might get incomplete data.)
 	 */
 	if (nworkers > 0)
 	{
-		/* Wait for all vacuum workers to finish */
+		/*
+		 * Wait for all vacuum workers to finish
+		 *
+		 * We must do this even if we know we don't
+		 * have any more active workers. See the
+		 * WaitForParallelWorkersToFinish commentary
+		 * as to why.
+		 */
 		WaitForParallelWorkersToFinish(pvs->pcxt);
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
@@ -719,6 +771,14 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 		VacuumSharedCostBalance = NULL;
 		VacuumActiveNWorkers = NULL;
 	}
+
+	/*
+	 * Disable index vacuum progress reporting.
+	 * If we have another index vacuum cycle, the
+	 * progress reporting will be re-enabled then.
+	 */
+	index_vacuum_completed = NULL;
+	report_parallel_vacuum_progress = false;
 }
 
 /*
@@ -823,6 +883,21 @@ parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
 	IndexBulkDeleteResult *istat = NULL;
 	IndexBulkDeleteResult *istat_res;
 	IndexVacuumInfo ivinfo;
+	Oid indrelid = RelationGetRelid(indrel);
+
+	/*
+	 * If we are a parallel worker, start a PROGRESS_COMMAND_VACUUM_PARALLEL
+	 * command for progress reporting, and set the leader pid.
+	 */
+	if (IsParallelWorker())
+	{
+		pgstat_progress_start_command(PROGRESS_COMMAND_VACUUM_PARALLEL,
+									  pvs->shared->relid);
+		pgstat_progress_update_param(PROGRESS_VACUUM_LEADER_PID, pvs->shared->leader_pid);
+	}
+
+	/* report the index being vacuumed or cleaned up */
+	pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, indrelid);
 
 	/*
 	 * Update the pointer to the corresponding bulk-deletion result if someone
@@ -846,9 +921,11 @@ parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
 	switch (indstats->status)
 	{
 		case PARALLEL_INDVAC_STATUS_NEED_BULKDELETE:
+			pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, PROGRESS_VACUUM_PHASE_VACUUM_INDEX);
 			istat_res = vac_bulkdel_one_index(&ivinfo, istat, pvs->dead_items);
 			break;
 		case PARALLEL_INDVAC_STATUS_NEED_CLEANUP:
+			pgstat_progress_update_param(PROGRESS_VACUUM_PHASE, PROGRESS_VACUUM_PHASE_INDEX_CLEANUP);
 			istat_res = vac_cleanup_one_index(&ivinfo, istat);
 			break;
 		default:
@@ -888,6 +965,17 @@ parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
 	pvs->status = PARALLEL_INDVAC_STATUS_COMPLETED;
 	pfree(pvs->indname);
 	pvs->indname = NULL;
+
+	/*
+	 * Reset the index relid being vacuumed and increment the
+	 * index vacuum counter.
+	 */
+	pgstat_progress_update_param(PROGRESS_VACUUM_INDRELID, 0);
+	pg_atomic_add_fetch_u32(&(pvs->shared->idx_completed_progress), 1);
+
+	/* if we are a parallel worker, end the command */
+	if (IsParallelWorker())
+		pgstat_progress_end_command();
 }
 
 /*
@@ -1072,3 +1160,17 @@ parallel_vacuum_error_callback(void *arg)
 			return;
 	}
 }
+
+/*
+ * Read the number of indexes vacuumed from the shared counter
+ * and report it via pgstat_progress_update_param().
+ */
+void
+parallel_vacuum_progress_report(void)
+{
+	if (IsParallelWorker() || !report_parallel_vacuum_progress)
+		return;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,
+								 pg_atomic_read_u32(index_vacuum_completed));
+}
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index eadd8464ff..1584a9bd7d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -484,6 +484,8 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)
 		cmdtype = PROGRESS_COMMAND_BASEBACKUP;
 	else if (pg_strcasecmp(cmd, "COPY") == 0)
 		cmdtype = PROGRESS_COMMAND_COPY;
+	else if (pg_strcasecmp(cmd, "VACUUM_PARALLEL") == 0)
+		cmdtype = PROGRESS_COMMAND_VACUUM_PARALLEL;
 	else
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index a28938caf4..410eec217b 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,10 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_INDRELID				7
+#define PROGRESS_VACUUM_LEADER_PID				8
+#define PROGRESS_VACUUM_INDEX_TOTAL             9
+#define PROGRESS_VACUUM_INDEX_COMPLETED         10
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 5d816ba7f4..96901d7234 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -338,4 +338,6 @@ extern double anl_random_fract(void);
 extern double anl_init_selection_state(int n);
 extern double anl_get_next_S(double t, int n, double *stateptr);
 
+extern void parallel_vacuum_progress_report(void);
+
 #endif							/* VACUUM_H */
diff --git a/src/include/utils/backend_progress.h b/src/include/utils/backend_progress.h
index 47bf8029b0..74d9dfb4c7 100644
--- a/src/include/utils/backend_progress.h
+++ b/src/include/utils/backend_progress.h
@@ -17,6 +17,10 @@
 
 /* ----------
  * Command type for progress reporting purposes
+ *
+ * Note: PROGRESS_COMMAND_VACUUM_PARALLEL is not
+ * a command per se, but this type is used to track
+ * parallel vacuum workers' progress.
  * ----------
  */
 typedef enum ProgressCommandType
@@ -27,7 +31,8 @@ typedef enum ProgressCommandType
 	PROGRESS_COMMAND_CLUSTER,
 	PROGRESS_COMMAND_CREATE_INDEX,
 	PROGRESS_COMMAND_BASEBACKUP,
-	PROGRESS_COMMAND_COPY
+	PROGRESS_COMMAND_COPY,
+	PROGRESS_COMMAND_VACUUM_PARALLEL
 } ProgressCommandType;
 
 #define PGSTAT_NUM_PROGRESS_PARAM	20
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 9dd137415e..d7d6148d52 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2013,9 +2013,24 @@ pg_stat_progress_vacuum| SELECT s.pid,
     s.param4 AS heap_blks_vacuumed,
     s.param5 AS index_vacuum_count,
     s.param6 AS max_dead_tuples,
-    s.param7 AS num_dead_tuples
+    s.param7 AS num_dead_tuples,
+    s.param10 AS indexes_total,
+    s.param11 AS indexes_completed
    FROM (pg_stat_get_progress_info('VACUUM'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20)
      LEFT JOIN pg_database d ON ((s.datid = d.oid)));
+pg_stat_progress_vacuum_index| SELECT s.pid,
+    s.param9 AS leader_pid,
+    s.param8 AS indexrelid
+   FROM (pg_stat_get_progress_info('VACUUM'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20)
+     LEFT JOIN pg_database d ON ((s.datid = d.oid)))
+  WHERE ((s.param1 = ANY (ARRAY[(2)::bigint, (4)::bigint])) AND (s.param8 > 0))
+UNION ALL
+ SELECT s.pid,
+    s.param9 AS leader_pid,
+    s.param8 AS indexrelid
+   FROM (pg_stat_get_progress_info('VACUUM_PARALLEL'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10, param11, param12, param13, param14, param15, param16, param17, param18, param19, param20)
+     LEFT JOIN pg_database d ON ((s.datid = d.oid)))
+  WHERE ((s.param1 = ANY (ARRAY[(2)::bigint, (4)::bigint])) AND (s.param8 > 0));
 pg_stat_recovery_prefetch| SELECT s.stats_reset,
     s.prefetch,
     s.hit,
-- 
2.32.1 (Apple Git-133)