maintenance_work_mem = 64kB doesn't work for vacuum

Started by Masahiko Sawada · 10 months ago · 13 messages
#1 Masahiko Sawada
sawada.mshk@gmail.com

Hi,

Commit bbf668d66fbf6 (back-patched to v17) lowered the minimum
maintenance_work_mem to 64kB, but it doesn't work for parallel vacuum
cases since the minimum DSA segment size (DSA_MIN_SEGMENT_SIZE) is
256kB. As soon as the radix tree allocates its control object and the
root node, the memory usage exceeds the maintenance_work_mem limit and
vacuum ends up failing the following assertion:

TRAP: failed Assert("vacrel->lpdead_item_pages > 0"), File:
"vacuumlazy.c", Line: 2447, PID: 3208575

On a build without --enable-cassert, vacuum never ends.

I've tried to lower DSA_MIN_SEGMENT_SIZE to 64kB, but no luck.
Investigating it further, DSA creates a 64kB superblock for each size
class, and when creating a new shared radix tree we need to create two
superblocks: one for the radix tree control data (64 bytes) and
another one for the root node (40 bytes). Also, each superblock
requires span data, which uses one page (4096 bytes). Therefore, we
need at least 136kB for a shared radix tree even when it's empty.

A simple fix is to bump the minimum maintenance_work_mem to 256kB. We
would break compatibility on a back-branch (i.e. v17), but I guess
it's unlikely that existing v17 users are running with less than 1MB of
maintenance_work_mem (the release notes don't mention the fact that
we lowered the minimum value).

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#2 David Rowley
dgrowleyml@gmail.com
In reply to: Masahiko Sawada (#1)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, 10 Mar 2025 at 07:46, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

A simple fix is to bump the minimum maintenance_work_mem to 256kB. We
would break compatibility on a back-branch (i.e. v17), but I guess
it's unlikely that existing v17 users are running with less than 1MB of
maintenance_work_mem (the release notes don't mention the fact that
we lowered the minimum value).

Could you do something similar to what's in hash_agg_check_limits(),
where we check that we've got at least 1 item before bailing once we've
used up all the prescribed memory? That seems like a safer coding
practice, as the bug would come back again if, in the future, the
minimum usage for a DSM segment goes above 256KB.

David

#3 John Naylor
johncnaylorls@gmail.com
In reply to: Masahiko Sawada (#1)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, Mar 10, 2025 at 1:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Commit bbf668d66fbf6 (back-patched to v17) lowered the minimum
maintenance_work_mem to 64kB, but it doesn't work for parallel vacuum

That was done in the first place to make a regression test for a bug
fix easier, but that test never got committed. In any case I found it
worked back in July:

/messages/by-id/CANWCAZZb7wd403wHQQUJZjkF+RWKAAa+WARP0Rj0EyMcfcdN9Q@mail.gmail.com

--
John Naylor
Amazon Web Services

#4 Melanie Plageman
melanieplageman@gmail.com
In reply to: John Naylor (#3)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Sun, Mar 9, 2025 at 9:24 PM John Naylor <johncnaylorls@gmail.com> wrote:

On Mon, Mar 10, 2025 at 1:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Commit bbf668d66fbf6 (back-patched to v17) lowered the minimum
maintenance_work_mem to 64kB, but it doesn't work for parallel vacuum

That was done in the first place to make a regression test for a bug
fix easier, but that test never got committed. In any case I found it
worked back in July:

Yes, I would like to keep the lower minimum. I really do have every
intention of committing that test. Apologies for taking so long.
Raising the limit to 256kB might make the test take too long. And I
think it's nice to have that coverage (not just of the vacuum bug but
of a vacuum with multiple index vacuum passes in a natural setting [as
opposed to the tidstore test module]). I don't recall if we have that
elsewhere.

- Melanie

#5 David Rowley
dgrowleyml@gmail.com
In reply to: David Rowley (#2)
1 attachment(s)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, 10 Mar 2025 at 10:30, David Rowley <dgrowleyml@gmail.com> wrote:

Could you do something similar to what's in hash_agg_check_limits(),
where we check that we've got at least 1 item before bailing once we've
used up all the prescribed memory? That seems like a safer coding
practice, as the bug would come back again if, in the future, the
minimum usage for a DSM segment goes above 256KB.

FWIW, I had something like the attached in mind.

David

Attachments:

ensure_weve_at_least_one_page_before_vacuum_index_cleanup.patch (application/octet-stream)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3b91d02605a..05a8a0fd595 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1263,9 +1263,12 @@ lazy_scan_heap(LVRelState *vacrel)
 		 * Consider if we definitely have enough space to process TIDs on page
 		 * already.  If we are close to overrunning the available space for
 		 * dead_items TIDs, pause and do a cycle of vacuuming before we tackle
-		 * this page.
+		 * this page.  However, let's force at least one page-worth of tuples to be
+		 * stored as to ensure we do at least some work when the memory configured
+		 * is so low that we run out before storing anything.
 		 */
-		if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
+		if (vacrel->lpdead_item_pages > 0 &&
+			TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
 		{
 			/*
 			 * Before beginning index vacuuming, we release any pin we may
#6 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: David Rowley (#5)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Sun, Mar 9, 2025 at 7:03 PM David Rowley <dgrowleyml@gmail.com> wrote:

On Mon, 10 Mar 2025 at 10:30, David Rowley <dgrowleyml@gmail.com> wrote:

Could you do something similar to what's in hash_agg_check_limits(),
where we check that we've got at least 1 item before bailing once we've
used up all the prescribed memory? That seems like a safer coding
practice, as the bug would come back again if, in the future, the
minimum usage for a DSM segment goes above 256KB.

FWIW, I had something like the attached in mind.

Thank you for the patch! I like your idea. This means that even if we
set maintenance_work_mem to 64kB the memory usage would not actually
be limited to 64kB, but that's probably fine as it's primarily for
testing purposes.

Regarding that patch, we need to note that lpdead_items is a
counter that is not reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one lpdead item,
we perform a cycle of index vacuuming and heap vacuuming for every
subsequent block even if it doesn't have any lpdead items. I think we
should use vacrel->dead_items_info->num_items instead.
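
In other words, the check in lazy_scan_heap() would become something
along these lines:

    if (vacrel->dead_items_info->num_items > 0 &&
        TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)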

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#7 David Rowley
dgrowleyml@gmail.com
In reply to: Masahiko Sawada (#6)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, 10 Mar 2025 at 17:22, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Regarding that patch, we need to note that lpdead_items is a
counter that is not reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one lpdead item,
we perform a cycle of index vacuuming and heap vacuuming for every
subsequent block even if it doesn't have any lpdead items. I think we
should use vacrel->dead_items_info->num_items instead.

OK, I didn't study the code enough to realise that. My patch was only
intended as an indication of what I thought. Please feel free to
proceed with your own patch using the correct field.

When playing with parallel vacuum, I also wondered if there should be
some heuristic that avoids parallel vacuum unless the user
specifically asked for it in the command when maintenance_work_mem is
set to something far too low.

Take the following case as an example:
set maintenance_work_mem=64;
create table aa(a int primary key, b int unique);
insert into aa select a,a from generate_Series(1,1000000) a;
delete from aa;

-- try a vacuum with no parallelism
vacuum (verbose, parallel 0) aa;

system usage: CPU: user: 0.53 s, system: 0.00 s, elapsed: 0.57 s

If I did the following instead:

vacuum (verbose) aa;

The vacuum goes parallel and it takes a very long time due to
launching a parallel worker to do 1 page worth of tuples. I see the
following message 4425 times

INFO: launched 1 parallel vacuum worker for index vacuuming (planned: 1)

and takes about 30 seconds to complete: system usage: CPU: user: 14.00
s, system: 0.81 s, elapsed: 30.86 s

Shouldn't the code in parallel_vacuum_compute_workers() try and pick a
good value for the workers based on the available memory and table
size when the user does not explicitly specify how many workers they
want?

David

#8 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: David Rowley (#7)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, Mar 10, 2025 at 2:53 AM David Rowley <dgrowleyml@gmail.com> wrote:

On Mon, 10 Mar 2025 at 17:22, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Regarding that patch, we need to note that lpdead_items is a
counter that is not reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one lpdead item,
we perform a cycle of index vacuuming and heap vacuuming for every
subsequent block even if it doesn't have any lpdead items. I think we
should use vacrel->dead_items_info->num_items instead.

OK, I didn't study the code enough to realise that. My patch was only
intended as an indication of what I thought. Please feel free to
proceed with your own patch using the correct field.

When playing with parallel vacuum, I also wondered if there should be
some heuristic that avoids parallel vacuum unless the user
specifically asked for it in the command when maintenance_work_mem is
set to something far too low.

Take the following case as an example:
set maintenance_work_mem=64;
create table aa(a int primary key, b int unique);
insert into aa select a,a from generate_Series(1,1000000) a;
delete from aa;

-- try a vacuum with no parallelism
vacuum (verbose, parallel 0) aa;

system usage: CPU: user: 0.53 s, system: 0.00 s, elapsed: 0.57 s

If I did the following instead:

vacuum (verbose) aa;

The vacuum goes parallel and it takes a very long time due to
launching a parallel worker to do 1 page worth of tuples. I see the
following message 4425 times

INFO: launched 1 parallel vacuum worker for index vacuuming (planned: 1)

and takes about 30 seconds to complete: system usage: CPU: user: 14.00
s, system: 0.81 s, elapsed: 30.86 s

Shouldn't the code in parallel_vacuum_compute_workers() try and pick a
good value for the workers based on the available memory and table
size when the user does not explicitly specify how many workers they
want?

I think in your case the threshold of min_parallel_index_scan_size
didn't work well. Given that one worker is assigned to one index and
index vacuum time mostly depends on the index size, the index size
would be a good criterion for deciding the parallel degree. For
example, even if the table has only one dead item, index vacuuming can
take a long time if the indexes are large, since in common cases
(e.g. btree indexes) we scan the whole index; in such cases we would
like to use parallel index vacuuming. Conversely, even if the table
has many dead items, if its indexes are small (e.g., expression
indexes) it would be better not to use parallel index vacuuming.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#9 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: David Rowley (#7)
1 attachment(s)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, Mar 10, 2025 at 2:53 AM David Rowley <dgrowleyml@gmail.com> wrote:

On Mon, 10 Mar 2025 at 17:22, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Regarding that patch, we need to note that lpdead_items is a
counter that is not reset during the entire vacuum. Therefore, with
maintenance_work_mem = 64kB, once we collect at least one lpdead item,
we perform a cycle of index vacuuming and heap vacuuming for every
subsequent block even if it doesn't have any lpdead items. I think we
should use vacrel->dead_items_info->num_items instead.

OK, I didn't study the code enough to realise that. My patch was only
intended as an indication of what I thought. Please feel free to
proceed with your own patch using the correct field.

I've attached the patch. I added minimal regression tests for that.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

Attachments:

0001-Fix-assertion-failure-in-parallel-vacuum-with-minima.patch (application/octet-stream)
From cea57780afa350da755c66c8ebb0558fb0913f1f Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Mon, 17 Mar 2025 09:20:47 -0700
Subject: [PATCH] Fix assertion failure in parallel vacuum with minimal
 maintenance_work_mem setting.

bbf668d66fbf lowered the minimum value of maintenance_work_mem to
64kB. However, in parallel vacuum cases, since the initial underlying
DSA size is 256kB, it attempts to perform a cycle of index vacuuming and
table vacuuming with an empty TID store, resulting in an assertion
failure.

This commit ensures that at least one page is processed before
index vacuuming and table vacuuming begins.

Backpatched to 17, where the minimum maintenance_work_mem value was
lowered.

Reviewed-by:
Discussion: https://postgr.es/m/CAD21AoCEAmbkkXSKbj4dB+5pJDRL4ZHxrCiLBgES_g_g8mVi1Q@mail.gmail.com
Backpatch-through: 17
---
 src/backend/access/heap/vacuumlazy.c |  7 +++++--
 src/test/regress/expected/vacuum.out | 11 +++++++++++
 src/test/regress/sql/vacuum.sql      | 12 ++++++++++++
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3b91d02605a..e0e0213f046 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1263,9 +1263,12 @@ lazy_scan_heap(LVRelState *vacrel)
 		 * Consider if we definitely have enough space to process TIDs on page
 		 * already.  If we are close to overrunning the available space for
 		 * dead_items TIDs, pause and do a cycle of vacuuming before we tackle
-		 * this page.
+		 * this page. However, let's force at least one page-worth of tuples
+		 * to be stored as to ensure we do at least some work when the memory
+		 * configured is so low that we run out before storing anything.
 		 */
-		if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
+		if (vacrel->dead_items_info->num_items > 0 &&
+			TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
 		{
 			/*
 			 * Before beginning index vacuuming, we release any pin we may
diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out
index 1a07dbf67d6..d514be145cc 100644
--- a/src/test/regress/expected/vacuum.out
+++ b/src/test/regress/expected/vacuum.out
@@ -148,6 +148,10 @@ CREATE INDEX brin_pvactst ON pvactst USING brin (i);
 CREATE INDEX gin_pvactst ON pvactst USING gin (a);
 CREATE INDEX gist_pvactst ON pvactst USING gist (p);
 CREATE INDEX spgist_pvactst ON pvactst USING spgist (p);
+CREATE TABLE pvactst2 (i INT) WITH (autovacuum_enabled = off);
+INSERT INTO pvactst2 SELECT generate_series(1, 1000);
+CREATE INDEX ON pvactst2 (i);
+CREATE INDEX ON pvactst2 (i);
 -- VACUUM invokes parallel index cleanup
 SET min_parallel_index_scan_size to 0;
 VACUUM (PARALLEL 2) pvactst;
@@ -167,6 +171,12 @@ VACUUM (PARALLEL) pvactst; -- error, cannot use PARALLEL option without parallel
 ERROR:  parallel option requires a value between 0 and 1024
 LINE 1: VACUUM (PARALLEL) pvactst;
                 ^
+-- Test parallel vacuum with the minimum maintenance_work_mem with and without
+-- dead tuples.
+SET maintenance_work_mem TO 64;
+VACUUM (PARALLEL 2) pvactst;
+UPDATE pvactst SET i = i WHERE i < 1000;
+VACUUM (PARALLEL 2) pvactst;
 -- Test different combinations of parallel and full options for temporary tables
 CREATE TEMPORARY TABLE tmp (a int PRIMARY KEY);
 CREATE INDEX tmp_idx1 ON tmp (a);
@@ -174,6 +184,7 @@ VACUUM (PARALLEL 1, FULL FALSE) tmp; -- parallel vacuum disabled for temp tables
 WARNING:  disabling parallel option of vacuum on "tmp" --- cannot vacuum temporary tables in parallel
 VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)
 RESET min_parallel_index_scan_size;
+RESET maintenance_work_mem;
 DROP TABLE pvactst;
 -- INDEX_CLEANUP option
 CREATE TABLE no_index_cleanup (i INT PRIMARY KEY, t TEXT);
diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql
index 5e55079e718..2cc973ff2da 100644
--- a/src/test/regress/sql/vacuum.sql
+++ b/src/test/regress/sql/vacuum.sql
@@ -113,6 +113,10 @@ CREATE INDEX brin_pvactst ON pvactst USING brin (i);
 CREATE INDEX gin_pvactst ON pvactst USING gin (a);
 CREATE INDEX gist_pvactst ON pvactst USING gist (p);
 CREATE INDEX spgist_pvactst ON pvactst USING spgist (p);
+CREATE TABLE pvactst2 (i INT) WITH (autovacuum_enabled = off);
+INSERT INTO pvactst2 SELECT generate_series(1, 1000);
+CREATE INDEX ON pvactst2 (i);
+CREATE INDEX ON pvactst2 (i);
 
 -- VACUUM invokes parallel index cleanup
 SET min_parallel_index_scan_size to 0;
@@ -130,12 +134,20 @@ VACUUM (PARALLEL 2, INDEX_CLEANUP FALSE) pvactst;
 VACUUM (PARALLEL 2, FULL TRUE) pvactst; -- error, cannot use both PARALLEL and FULL
 VACUUM (PARALLEL) pvactst; -- error, cannot use PARALLEL option without parallel degree
 
+-- Test parallel vacuum with the minimum maintenance_work_mem with and without
+-- dead tuples.
+SET maintenance_work_mem TO 64;
+VACUUM (PARALLEL 2) pvactst;
+UPDATE pvactst SET i = i WHERE i < 1000;
+VACUUM (PARALLEL 2) pvactst;
+
 -- Test different combinations of parallel and full options for temporary tables
 CREATE TEMPORARY TABLE tmp (a int PRIMARY KEY);
 CREATE INDEX tmp_idx1 ON tmp (a);
 VACUUM (PARALLEL 1, FULL FALSE) tmp; -- parallel vacuum disabled for temp tables
 VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)
 RESET min_parallel_index_scan_size;
+RESET maintenance_work_mem;
 DROP TABLE pvactst;
 
 -- INDEX_CLEANUP option
-- 
2.43.5

#10 David Rowley
dgrowleyml@gmail.com
In reply to: Masahiko Sawada (#9)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Tue, 18 Mar 2025 at 05:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I've attached the patch. I added minimal regression tests for that.

I think the change to vacuumlazy.c is ok. The new test you've added
creates a table called pvactst2 but then adds a test that uses the
pvactst table.

Did you mean to skip the DROP TABLE pvactst2;?

Is there a reason to keep the maintenance_work_mem=64 for the
subsequent existing test?

David

#11 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: David Rowley (#10)
1 attachment(s)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, Mar 17, 2025 at 7:06 PM David Rowley <dgrowleyml@gmail.com> wrote:

On Tue, 18 Mar 2025 at 05:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I've attached the patch. I added minimal regression tests for that.

I think the change to vacuumlazy.c is ok. The new test you've added
creates a table called pvactst2 but then adds a test that uses the
pvactst table.

Fixed.

Did you mean to skip the DROP TABLE pvactst2;?

Yes, added DROP TABLE pvactst2.

Is there a reason to keep the maintenance_work_mem=64 for the
subsequent existing test?

No, I reset it immediately after the tests for pvactst2.

I've attached the updated patch.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

Attachments:

v2-0001-Fix-assertion-failure-in-parallel-vacuum-with-min.patch (application/octet-stream)
From 156e2b7098fd9d11ab7287f50bac4a1ea71ad028 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Mon, 17 Mar 2025 09:20:47 -0700
Subject: [PATCH v2] Fix assertion failure in parallel vacuum with minimal
 maintenance_work_mem setting.

bbf668d66fbf lowered the minimum value of maintenance_work_mem to
64kB. However, in parallel vacuum cases, since the initial underlying
DSA size is 256kB, it attempts to perform a cycle of index vacuuming and
table vacuuming with an empty TID store, resulting in an assertion
failure.

This commit ensures that at least one page is processed before
index vacuuming and table vacuuming begins.

Backpatched to 17, where the minimum maintenance_work_mem value was
lowered.

Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAD21AoCEAmbkkXSKbj4dB+5pJDRL4ZHxrCiLBgES_g_g8mVi1Q@mail.gmail.com
Backpatch-through: 17
---
 src/backend/access/heap/vacuumlazy.c |  7 +++++--
 src/test/regress/expected/vacuum.out | 12 ++++++++++++
 src/test/regress/sql/vacuum.sql      | 13 +++++++++++++
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 3b91d02605a..e0e0213f046 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -1263,9 +1263,12 @@ lazy_scan_heap(LVRelState *vacrel)
 		 * Consider if we definitely have enough space to process TIDs on page
 		 * already.  If we are close to overrunning the available space for
 		 * dead_items TIDs, pause and do a cycle of vacuuming before we tackle
-		 * this page.
+		 * this page. However, let's force at least one page-worth of tuples
+		 * to be stored as to ensure we do at least some work when the memory
+		 * configured is so low that we run out before storing anything.
 		 */
-		if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
+		if (vacrel->dead_items_info->num_items > 0 &&
+			TidStoreMemoryUsage(vacrel->dead_items) > vacrel->dead_items_info->max_bytes)
 		{
 			/*
 			 * Before beginning index vacuuming, we release any pin we may
diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out
index 1a07dbf67d6..3f91b69b324 100644
--- a/src/test/regress/expected/vacuum.out
+++ b/src/test/regress/expected/vacuum.out
@@ -148,6 +148,10 @@ CREATE INDEX brin_pvactst ON pvactst USING brin (i);
 CREATE INDEX gin_pvactst ON pvactst USING gin (a);
 CREATE INDEX gist_pvactst ON pvactst USING gist (p);
 CREATE INDEX spgist_pvactst ON pvactst USING spgist (p);
+CREATE TABLE pvactst2 (i INT) WITH (autovacuum_enabled = off);
+INSERT INTO pvactst2 SELECT generate_series(1, 1000);
+CREATE INDEX ON pvactst2 (i);
+CREATE INDEX ON pvactst2 (i);
 -- VACUUM invokes parallel index cleanup
 SET min_parallel_index_scan_size to 0;
 VACUUM (PARALLEL 2) pvactst;
@@ -167,6 +171,13 @@ VACUUM (PARALLEL) pvactst; -- error, cannot use PARALLEL option without parallel
 ERROR:  parallel option requires a value between 0 and 1024
 LINE 1: VACUUM (PARALLEL) pvactst;
                 ^
+-- Test parallel vacuum using the minimum maintenance_work_mem with and without
+-- dead tuples.
+SET maintenance_work_mem TO 64;
+VACUUM (PARALLEL 2) pvactst2;
+DELETE FROM pvactst2 WHERE i < 1000;
+VACUUM (PARALLEL 2) pvactst2;
+RESET maintenance_work_mem;
 -- Test different combinations of parallel and full options for temporary tables
 CREATE TEMPORARY TABLE tmp (a int PRIMARY KEY);
 CREATE INDEX tmp_idx1 ON tmp (a);
@@ -175,6 +186,7 @@ WARNING:  disabling parallel option of vacuum on "tmp" --- cannot vacuum tempora
 VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)
 RESET min_parallel_index_scan_size;
 DROP TABLE pvactst;
+DROP TABLE pvactst2;
 -- INDEX_CLEANUP option
 CREATE TABLE no_index_cleanup (i INT PRIMARY KEY, t TEXT);
 -- Use uncompressed data stored in toast.
diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql
index 5e55079e718..058add027f1 100644
--- a/src/test/regress/sql/vacuum.sql
+++ b/src/test/regress/sql/vacuum.sql
@@ -113,6 +113,10 @@ CREATE INDEX brin_pvactst ON pvactst USING brin (i);
 CREATE INDEX gin_pvactst ON pvactst USING gin (a);
 CREATE INDEX gist_pvactst ON pvactst USING gist (p);
 CREATE INDEX spgist_pvactst ON pvactst USING spgist (p);
+CREATE TABLE pvactst2 (i INT) WITH (autovacuum_enabled = off);
+INSERT INTO pvactst2 SELECT generate_series(1, 1000);
+CREATE INDEX ON pvactst2 (i);
+CREATE INDEX ON pvactst2 (i);
 
 -- VACUUM invokes parallel index cleanup
 SET min_parallel_index_scan_size to 0;
@@ -130,6 +134,14 @@ VACUUM (PARALLEL 2, INDEX_CLEANUP FALSE) pvactst;
 VACUUM (PARALLEL 2, FULL TRUE) pvactst; -- error, cannot use both PARALLEL and FULL
 VACUUM (PARALLEL) pvactst; -- error, cannot use PARALLEL option without parallel degree
 
+-- Test parallel vacuum using the minimum maintenance_work_mem with and without
+-- dead tuples.
+SET maintenance_work_mem TO 64;
+VACUUM (PARALLEL 2) pvactst2;
+DELETE FROM pvactst2 WHERE i < 1000;
+VACUUM (PARALLEL 2) pvactst2;
+RESET maintenance_work_mem;
+
 -- Test different combinations of parallel and full options for temporary tables
 CREATE TEMPORARY TABLE tmp (a int PRIMARY KEY);
 CREATE INDEX tmp_idx1 ON tmp (a);
@@ -137,6 +149,7 @@ VACUUM (PARALLEL 1, FULL FALSE) tmp; -- parallel vacuum disabled for temp tables
 VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)
 RESET min_parallel_index_scan_size;
 DROP TABLE pvactst;
+DROP TABLE pvactst2;
 
 -- INDEX_CLEANUP option
 CREATE TABLE no_index_cleanup (i INT PRIMARY KEY, t TEXT);
-- 
2.43.5

#12 David Rowley
dgrowleyml@gmail.com
In reply to: Masahiko Sawada (#11)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Tue, 18 Mar 2025 at 19:34, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I've attached the updated patch.

Looks good to me.

David

#13 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: David Rowley (#12)
Re: maintenance_work_mem = 64kB doesn't work for vacuum

On Mon, Mar 17, 2025 at 11:54 PM David Rowley <dgrowleyml@gmail.com> wrote:

On Tue, 18 Mar 2025 at 19:34, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I've attached the updated patch.

Looks good to me.

Thank you for reviewing the patch. Pushed (backpatched to v17).

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com