[bug?] Missed parallel safety checks, and wrong parallel safety
Hello,
I think we've found a few existing problems with handling the parallel safety of functions while doing an experiment. Could I hear your opinions on what we should do? I'd be willing to create and submit a patch to fix them.
The experiment is to add a parallel safety check in FunctionCallInvoke() and run the regression test with force_parallel_mode=regress. The added check errors out with ereport(ERROR) when the about-to-be-called function is parallel unsafe and the process is currently in parallel mode. 6 test cases failed because the following parallel-unsafe functions were called:
dsnowball_init
balkifnull
int44out
text_w_default_out
widget_out
The first function is created in src/backend/snowball/snowball_create.sql for full text search. The remaining functions are created during the regression test run.
The relevant issues follow.
(1)
Judging from their implementations, all the above functions are actually parallel safe. It seems that their CREATE FUNCTION statements are just missing PARALLEL SAFE specifications, so I think I'll add them. dsnowball_lexize() may also be parallel safe.
(2)
I'm afraid the above phenomenon reveals that postgres overlooks parallel safety checks in some places. Specifically, we noticed the following:
* User-defined aggregate
CREATE AGGREGATE allows specifying the parallel safety of the aggregate itself, and the planner checks it, but the aggregate's support functions are not checked. OTOH, the document clearly says:
https://www.postgresql.org/docs/devel/xaggr.html
"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted."
https://www.postgresql.org/docs/devel/sql-createaggregate.html
"An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself."
Can we check the parallel safety of aggregate support functions during statement execution and error out? Is there any reason not to do so?
* User-defined data type
The input, output, send, receive, and other functions of a UDT are not checked for parallel safety. Is there any good reason not to check them, other than concerns about performance?
* Functions for full text search
Should CREATE TEXT SEARCH TEMPLATE ensure that the functions are parallel safe? (Those functions could be changed to parallel unsafe later with ALTER FUNCTION, though.)
(3) Built-in UDFs are not checked for parallel safety
The functions defined in fmgr_builtins[], which is derived from pg_proc.dat, are not checked. Most of them are marked parallel safe, but some are parallel unsafe or parallel restricted.
Besides, changing their parallel safety with ALTER FUNCTION PARALLEL does not affect query plan selection. This is because fmgr_builtins[] does not have a member for parallel safety.
Should we add a member for parallel safety in fmgr_builtins[], and disallow ALTER FUNCTION to change the parallel safety of builtin UDFs?
Regards
Takayuki Tsunakawa
On Tue, Apr 20, 2021 at 2:23 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
(2)
I'm afraid the above phenomenon reveals that postgres overlooks parallel safety checks in some places. Specifically, we noticed the following:
* User-defined aggregate
CREATE AGGREGATE allows specifying the parallel safety of the aggregate itself, and the planner checks it, but the aggregate's support functions are not checked. OTOH, the document clearly says:
https://www.postgresql.org/docs/devel/xaggr.html
"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted."
https://www.postgresql.org/docs/devel/sql-createaggregate.html
"An aggregate will not be considered for parallelization if it is marked PARALLEL UNSAFE (which is the default!) or PARALLEL RESTRICTED. Note that the parallel-safety markings of the aggregate's support functions are not consulted by the planner, only the marking of the aggregate itself."
IMO, the reason for not checking the parallel safety of the support
functions is that the functions themselves can have whole lot of other
functions (can be nested as well) which might be quite hard to check
at the planning time. That is why the job of marking an aggregate as
parallel safe is best left to the user. They have to mark the aggregate
parallel unsafe if at least one support function is parallel unsafe,
and parallel safe otherwise.
Can we check the parallel safety of aggregate support functions during statement execution and error out? Is there any reason not to do so?
And if we were to do the above, within the function execution API we would
need to know where the function got called from(?). It is best left to the
user to decide whether a function/aggregate is parallel safe or not.
This is the main reason we have declarative constructs like parallel
safe/unsafe/restricted.
For core functions, we definitely should properly mark parallel
safe/restricted/unsafe tags wherever possible.
Please correct me if I missed something.
With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:
On Tue, Apr 20, 2021 at 2:23 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
https://www.postgresql.org/docs/devel/xaggr.html
"Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE. The parallel-safety markings on its support functions are not consulted."
IMO, the reason for not checking the parallel safety of the support
functions is that the functions themselves can have whole lot of other
functions (can be nested as well) which might be quite hard to check
at the planning time. That is why the job of marking an aggregate as
parallel safe is best left to the user.
Yes. I think the documentation is perfectly clear that this is
intentional; I don't see a need to change it.
Should we add a member for parallel safety in fmgr_builtins[], and disallow ALTER FUNCTION to change the parallel safety of builtin UDFs?
No. You'd have to be superuser anyway to do that, and we're not in the
habit of trying to put training wheels on superusers.
Don't have an opinion about the other points yet.
regards, tom lane
From: Tom Lane <tgl@sss.pgh.pa.us>
Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:
IMO, the reason for not checking the parallel safety of the support
functions is that the functions themselves can have whole lot of other
functions (can be nested as well) which might be quite hard to check
at the planning time. That is why the job of marking an aggregate as
parallel safe is best left to the user.
Yes. I think the documentation is perfectly clear that this is
intentional; I don't see a need to change it.
OK, that's what I expected. I understood from this that Postgres's stance toward parallel safety is: Postgres makes a best effort to check parallel safety (as long as it doesn't hurt performance much, and perhaps doesn't make the core code very complex), and the user is responsible for the actual parallel safety of the ancillary objects (in this case, the aggregate's support functions) of the target object that he/she marked as parallel safe.
Should we add a member for parallel safety in fmgr_builtins[], and disallow
ALTER FUNCTION to change the parallel safety of builtin UDFs?
No. You'd have to be superuser anyway to do that, and we're not in the
habit of trying to put training wheels on superusers.
Understood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. I'd appreciate your comment on this if you see any concern.
Don't have an opinion about the other points yet.
I'd like to have your comments on them, too. But I understand you must be so busy at least until the beta release of PG 14.
Regards
Takayuki Tsunakawa
"tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com> writes:
From: Tom Lane <tgl@sss.pgh.pa.us>
No. You'd have to be superuser anyway to do that, and we're not in the
habit of trying to put training wheels on superusers.
Understood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. I'd appreciate your comment on this if you see any concern.
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea. Not least because there's no spare
room there; you'd have to incur a substantial enlargement of the
array to add another flag. But also, that would indeed lock down
the value of the parallel-safety flag, and that seems like a fairly
bad idea.
regards, tom lane
From: Tom Lane <tgl@sss.pgh.pa.us>
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea. Not least because there's no spare
room there; you'd have to incur a substantial enlargement of the
array to add another flag. But also, that would indeed lock down
the value of the parallel-safety flag, and that seems like a fairly
bad idea.
You're right, FmgrBuiltin is already fully packed (24 bytes on 64-bit machines). Enlarging the frequently accessed fmgr_builtins array may have an unexpectedly large adverse effect on performance.
I wanted to check the parallel safety of functions, which various objects (data types, indexes, triggers, etc.) come down to, in FunctionCallInvoke() and a few other places. But maybe we should skip the check for built-in functions. That's a matter of where we draw the line between where we check and where we don't.
Regards
Takayuki Tsunakawa
I think we've found a few existing problems with handling the parallel safety of
functions while doing an experiment. Could I hear your opinions on what we
should do? I'd be willing to create and submit a patch to fix them.
The experiment is to add a parallel safety check in FunctionCallInvoke() and run
the regression test with force_parallel_mode=regress. The added check
errors out with ereport(ERROR) when the about-to-be-called function is
parallel unsafe and the process is currently in parallel mode. 6 test cases failed
because the following parallel-unsafe functions were called:
dsnowball_init
balkifnull
int44out
text_w_default_out
widget_out
The first function is created in src/backend/snowball/snowball_create.sql for
full text search. The remaining functions are created during the regression
test run.
(1)
Judging from their implementations, all the above functions are actually
parallel safe. It seems that their CREATE FUNCTION statements are just
missing PARALLEL SAFE specifications, so I think I'll add them.
dsnowball_lexize() may also be parallel safe.
I agree that it's better to mark these functions with the correct parallel safety label,
especially the above functions, which will be executed in parallel mode.
That will be friendly to developers and users working on anything related to parallel tests.
So, I attached a patch to mark the above functions parallel safe.
Best regards,
houzj
Attachments:
0001-fix-testcase-with-wrong-parallel-safety-flag.patch (application/octet-stream)
From 37e56e57ad0593ab30a0e64c44ca7b0bbb64d9c7 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Wed, 21 Apr 2021 15:26:39 +0800
Subject: [PATCH] fix-testcase-with-wrong-parallel-safety-flag
---
src/backend/snowball/snowball_func.sql.in | 4 ++--
src/test/regress/expected/aggregates.out | 1 +
src/test/regress/expected/create_type.out | 4 ++--
src/test/regress/expected/domain.out | 2 +-
src/test/regress/input/create_function_1.source | 4 ++--
src/test/regress/output/create_function_1.source | 4 ++--
src/test/regress/sql/aggregates.sql | 1 +
src/test/regress/sql/create_type.sql | 4 ++--
src/test/regress/sql/domain.sql | 2 +-
9 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/src/backend/snowball/snowball_func.sql.in b/src/backend/snowball/snowball_func.sql.in
index cb1eaca4fb..08bf3397e4 100644
--- a/src/backend/snowball/snowball_func.sql.in
+++ b/src/backend/snowball/snowball_func.sql.in
@@ -21,11 +21,11 @@ SET search_path = pg_catalog;
CREATE FUNCTION dsnowball_init(INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE FUNCTION dsnowball_lexize(INTERNAL, INTERNAL, INTERNAL, INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_lexize'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE TEXT SEARCH TEMPLATE snowball
(INIT = dsnowball_init,
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index ca06d41dd0..2a4a83fab7 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -2386,6 +2386,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/expected/create_type.out b/src/test/regress/expected/create_type.out
index 14394cc95c..eb1c6bdcd2 100644
--- a/src/test/regress/expected/create_type.out
+++ b/src/test/regress/expected/create_type.out
@@ -48,7 +48,7 @@ NOTICE: return type int42 is only a shell
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type int42 is only a shell
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
@@ -58,7 +58,7 @@ NOTICE: return type text_w_default is only a shell
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type text_w_default is only a shell
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..c82d189823 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -1067,7 +1067,7 @@ drop domain di;
-- this has caused issues in the past
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
check (sql_is_distinct_from(value, null));
diff --git a/src/test/regress/input/create_function_1.source b/src/test/regress/input/create_function_1.source
index 6c69b7fe6c..b9a4e7af38 100644
--- a/src/test/regress/input/create_function_1.source
+++ b/src/test/regress/input/create_function_1.source
@@ -10,7 +10,7 @@ CREATE FUNCTION widget_in(cstring)
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -20,7 +20,7 @@ CREATE FUNCTION int44in(cstring)
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/output/create_function_1.source b/src/test/regress/output/create_function_1.source
index c66146db9d..0c1390e8c5 100644
--- a/src/test/regress/output/create_function_1.source
+++ b/src/test/regress/output/create_function_1.source
@@ -10,7 +10,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type widget is only a shell
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -21,7 +21,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type city_budget is only a shell
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index eb80a2fe06..68990b5b5f 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -978,6 +978,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/sql/create_type.sql b/src/test/regress/sql/create_type.sql
index a32a9e6795..285707e532 100644
--- a/src/test/regress/sql/create_type.sql
+++ b/src/test/regress/sql/create_type.sql
@@ -51,7 +51,7 @@ CREATE FUNCTION int42_in(cstring)
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
AS 'textin'
@@ -59,7 +59,7 @@ CREATE FUNCTION text_w_default_in(cstring)
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql
index 549c0b5adf..a022ae4223 100644
--- a/src/test/regress/sql/domain.sql
+++ b/src/test/regress/sql/domain.sql
@@ -724,7 +724,7 @@ drop domain di;
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
--
2.18.4
On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
From: Tom Lane <tgl@sss.pgh.pa.us>
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea. Not least because there's no spare
room there; you'd have to incur a substantial enlargement of the
array to add another flag. But also, that would indeed lock down
the value of the parallel-safety flag, and that seems like a fairly
bad idea.
You're right, FmgrBuiltin is already fully packed (24 bytes on 64-bit machines). Enlarging the frequently accessed fmgr_builtins array may have an unexpectedly large adverse effect on performance.
I wanted to check the parallel safety of functions, which various objects (data types, indexes, triggers, etc.) come down to, in FunctionCallInvoke() and a few other places. But maybe we should skip the check for built-in functions. That's a matter of where we draw the line between where we check and where we don't.
IIUC, the idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. If so, isn't it possible to deal with built-in and
non-built-in functions in the same way?
I think we want to have some safety checks for functions as we have
for transaction id in AssignTransactionId(), command id in
CommandCounterIncrement(), for write operations in
heap_prepare_insert(), etc. Is that correct?
--
With Regards,
Amit Kapila.
Amit Kapila <amit.kapila16@gmail.com> writes:
On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
From: Tom Lane <tgl@sss.pgh.pa.us>
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea.
IIUC, the idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. If so, isn't it possible to deal with built-in and
non-built-in functions in the same way?
Yeah, one of the reasons I doubt this is a great idea is that you'd
still have to fetch the pg_proc row for non-built-in functions.
The obvious place to install such a check is fmgr_info(), which is
fetching said row anyway for other purposes, so it's really hard to
see how adding anything to FmgrBuiltin is going to help.
regards, tom lane
From: Tom Lane <tgl@sss.pgh.pa.us>
Amit Kapila <amit.kapila16@gmail.com> writes:
IIUC, the idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. If so, isn't it possible to deal with built-in and
non-built-in functions in the same way?
Yeah, one of the reasons I doubt this is a great idea is that you'd
still have to fetch the pg_proc row for non-built-in functions.
The obvious place to install such a check is fmgr_info(), which is
fetching said row anyway for other purposes, so it's really hard to
see how adding anything to FmgrBuiltin is going to help.
Thank you, fmgr_info() looks like the best place to do the parallel safety check. Having a quick look at its callers, I didn't find any concerning place (of course, we can't be sure until the regression test succeeds). Also, with fmgr_info(), we don't have to find other places to add the check to deal with function calls in execExpr.c and execExprInterp.c. This is beautiful.
But the current fmgr_info() does not check the parallel safety of builtin functions. It does not have information to do that. There are two options. Which do you think is better? I think 2.
1) fmgr_info() reads pg_proc like for non-builtin functions
This defeats the fast path for builtin functions. I can't imagine how large the adverse impact on performance would be, but I'm worried.
The benefit is that ALTER FUNCTION on builtin functions takes effect. But such operations are nonsensical, so I don't think we want to gain such a benefit.
2) Gen_fmgrtab.pl adds a member for proparallel in FmgrBuiltin
But we don't want to enlarge the FmgrBuiltin struct. So, change the existing bool members strict and retset into one member of type char, and represent the original values with bit flags. Then we use that member for proparallel as well. (As a result, one byte is left for future use.)
I think we'll try 2). I'd be grateful if you could point out anything I need to be careful about.
Regards
Takayuki Tsunakawa
From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>
I agree that it's better to mark these functions with the correct parallel safety label,
especially the above functions, which will be executed in parallel mode.
That will be friendly to developers and users working on anything related to parallel tests.
So, I attached a patch to mark the above functions parallel safe.
Thank you, the patch looks good. Please register it with the next CF if not yet.
Regards
Takayuki Tsunakawa
Thank you, fmgr_info() looks like the best place to do the parallel safety check.
Having a quick look at its callers, I didn't find any concerning place (of course,
we can't be relieved until the regression test succeeds.) Also, with fmgr_info(),
we don't have to find other places to add the check to deal with function calls
in execExpr.c and execExprInterp.c. This is beautiful.
But the current fmgr_info() does not check the parallel safety of builtin
functions. It does not have information to do that. There are two options.
Which do you think is better? I think 2.
1) fmgr_info() reads pg_proc like for non-builtin functions
This defeats the fast path for builtin functions. I can't imagine how large
the adverse impact on performance would be, but I'm worried.
For approach 1): I think it could result in infinite recursion.
For example:
If we first access a built-in function A that has not been cached yet,
we need to access pg_proc. Scanning pg_proc internally calls some other
built-in function B. If B is not cached either, we again need to fetch
B's parallel flag from pg_proc.proparallel, which could result in
infinite recursion.
So, I think we can consider approach 2).
Best regards,
houzj
From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>
For approach 1): I think it could result in infinite recursion.
For example:
If we first access a built-in function A that has not been cached yet,
we need to access pg_proc. Scanning pg_proc internally calls some other
built-in function B. If B is not cached either, we again need to fetch
B's parallel flag from pg_proc.proparallel, which could result in
infinite recursion.
So, I think we can consider approach 2).
Hmm, that makes sense. The problem structure is similar to that of the relcache. So only one choice is left, unless there's another better idea.
Regards
Takayuki Tsunakawa
On Wed, Apr 21, 2021 at 12:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
"tsunakawa.takay@fujitsu.com" <tsunakawa.takay@fujitsu.com> writes:
From: Tom Lane <tgl@sss.pgh.pa.us>
No. You'd have to be superuser anyway to do that, and we're not in the
habit of trying to put training wheels on superusers.
Understood. However, we may add the parallel safety member in fmgr_builtins[] in another thread for parallel INSERT SELECT. I'd appreciate your comment on this if you see any concern.
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea. Not least because there's no spare
room there; you'd have to incur a substantial enlargement of the
array to add another flag. But also, that would indeed lock down
the value of the parallel-safety flag, and that seems like a fairly
bad idea.
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
Regards,
Greg Nancarrow
Fujitsu Australia
On Wed, Apr 21, 2021 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Amit Kapila <amit.kapila16@gmail.com> writes:
On Wed, Apr 21, 2021 at 8:12 AM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
From: Tom Lane <tgl@sss.pgh.pa.us>
[ raised eyebrow... ] I find it very hard to understand why that would
be necessary, or even a good idea.
IIUC, the idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. If so, isn't it possible to deal with built-in and
non-built-in functions in the same way?
Yeah, one of the reasons I doubt this is a great idea is that you'd
still have to fetch the pg_proc row for non-built-in functions.
So, are you suggesting that we should fetch the pg_proc row for
built-in functions as well for this purpose? If not, then how to
identify parallel safety of built-in functions in fmgr_info()?
Another idea could be that we check parallel safety of built-in
functions based on some static information. As we know the func_ids of
non-parallel-safe built-in functions, we can have a function
fmgr_builtin_parallel_safe() which check if the func_id is not one
among the predefined func_ids of non-parallel-safe built-in functions,
it returns true, otherwise, false. Then, we can call this new function
in fmgr_info for built-in functions.
Thoughts?
--
With Regards,
Amit Kapila.
Greg Nancarrow <gregn4422@gmail.com> writes:
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
It does, but that's much more directly a property of the function's
C code than parallel-safety is.
regards, tom lane
On Fri, Apr 23, 2021 at 9:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Greg Nancarrow <gregn4422@gmail.com> writes:
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
It does, but that's much more directly a property of the function's
C code than parallel-safety is.
I'm not sure I agree with that, but I think having the "strict" flag
in FmgrBuiltin isn't that nice either.
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
On Fri, Apr 23, 2021 at 9:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Greg Nancarrow <gregn4422@gmail.com> writes:
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
It does, but that's much more directly a property of the function's
C code than parallel-safety is.
I'm not sure I agree with that, but I think having the "strict" flag
in FmgrBuiltin isn't that nice either.
Yeah, if we could readily do without it, we probably would. But the
function call mechanism itself is responsible for implementing strictness,
so it *has* to have that flag available.
regards, tom lane
On Fri, Apr 23, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Greg Nancarrow <gregn4422@gmail.com> writes:
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?It does, but that's much more directly a property of the function's
C code than parallel-safety is.
Isn't parallel safety also a property of the C code? I mean unless someone
changes the built-in function code, changing that property would be
dangerous. The other thing is even if a user is allowed to change one
function's property, how will they know which other functions are
called by that function and whether they are parallel-safe or not. For
example, say if the user wants to change the parallel safe property of
a built-in function brin_summarize_new_values, unless she changes its
code and the functions called by it like brin_summarize_range, it
would be dangerous. So, isn't it better to disallow changing parallel
safety for built-in functions?
Also, if the strict property of built-in functions is fixed
internally, why do we allow users to change it, and is that of any help?
--
With Regards,
Amit Kapila.
On Sat, Apr 24, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Fri, Apr 23, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Greg Nancarrow <gregn4422@gmail.com> writes:
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
It does, but that's much more directly a property of the function's
C code than parallel-safety is.
Isn't parallel safety also a property of the C code? I mean unless someone
changes the built-in function code, changing that property would be
dangerous. The other thing is even if a user is allowed to change one
function's property, how will they know which other functions are
called by that function and whether they are parallel-safe or not. For
example, say if the user wants to change the parallel safe property of
a built-in function brin_summarize_new_values, unless she changes its
code and the functions called by it like brin_summarize_range, it
would be dangerous. So, isn't it better to disallow changing parallel
safety for built-in functions?
Also, if the strict property of built-in functions is fixed
internally, why do we allow users to change it, and is that of any help?
Yes, I'd like to know too.
I think it would make more sense to disallow changing properties like
strict/parallel-safety on built-in functions.
Also, with sufficient privileges, a built-in function can be
redefined, yet the original function (whose info is cached in
FmgrBuiltins[], from build-time) is always invoked, not the
newly-defined version.
Regards,
Greg Nancarrow
Fujitsu Australia
I'm curious. The FmgrBuiltin struct includes the "strict" flag, so
that would "lock down the value" of the strict flag, wouldn't it?
It does, but that's much more directly a property of the function's C
code than parallel-safety is.
I'm not sure I agree with that, but I think having the "strict" flag
in FmgrBuiltin isn't that nice either.
Yeah, if we could readily do without it, we probably would. But the function
call mechanism itself is responsible for implementing strictness, so it *has* to
have that flag available.
So, if we do not want to lock down the parallel safety of built-in functions,
it seems we can fetch proparallel from pg_proc for built-in functions in
fmgr_info_cxt_security too. To avoid a recursive safety check while fetching
proparallel from pg_proc, we can add a global variable that marks whether we
are in a recursive state, and skip the safety check while that flag is set.
With this approach, parallel safety is not locked down, and there are no new
members in FmgrBuiltin.
Attached is a patch for this approach [0001-approach-1].
Thoughts?
I also attached a patch for another approach [0001-approach-2], which adds
parallel safety to FmgrBuiltin. This approach seems faster, and we can combine
the existing bool members into a bitflag to avoid enlarging the FmgrBuiltin
array, though it does lock down the parallel safety of built-in functions.
Best regards,
houzj
Attachments:
0002-fix-testcase-with-wrong-parallel-safety-flag.patch
From 6c22ae7398198bafeb09cfeb8735b88887e0a922 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Wed, 28 Apr 2021 10:25:14 +0800
Subject: [PATCH] fix-testcase-with-wrong-parallel-safety-flag
---
src/backend/snowball/snowball_func.sql.in | 4 ++--
src/include/catalog/pg_proc.dat | 8 ++++++++
src/pl/plpgsql/src/plpgsql--1.0.sql | 2 +-
src/test/isolation/specs/deadlock-parallel.spec | 4 ++--
src/test/regress/expected/aggregates.out | 1 +
src/test/regress/expected/create_type.out | 4 ++--
src/test/regress/expected/domain.out | 2 +-
src/test/regress/expected/insert.out | 2 +-
src/test/regress/input/create_function_1.source | 4 ++--
src/test/regress/output/create_function_1.source | 4 ++--
src/test/regress/sql/aggregates.sql | 1 +
src/test/regress/sql/create_type.sql | 4 ++--
src/test/regress/sql/domain.sql | 2 +-
src/test/regress/sql/insert.sql | 2 +-
14 files changed, 27 insertions(+), 17 deletions(-)
diff --git a/src/backend/snowball/snowball_func.sql.in b/src/backend/snowball/snowball_func.sql.in
index cb1eaca4fb..08bf3397e4 100644
--- a/src/backend/snowball/snowball_func.sql.in
+++ b/src/backend/snowball/snowball_func.sql.in
@@ -21,11 +21,11 @@ SET search_path = pg_catalog;
CREATE FUNCTION dsnowball_init(INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE FUNCTION dsnowball_lexize(INTERNAL, INTERNAL, INTERNAL, INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_lexize'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE TEXT SEARCH TEMPLATE snowball
(INIT = dsnowball_init,
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index cac5b82cc6..5690c66f07 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -8485,6 +8485,14 @@
{ oid => '2892', descr => 'release all advisory locks',
proname => 'pg_advisory_unlock_all', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => '', prosrc => 'pg_advisory_unlock_all' },
+{ oid => '6123', descr => 'obtain shared advisory lock for testing purpose',
+ proname => 'pg_advisory_test_xact_lock_shared', provolatile => 'v',
+ prorettype => 'void', proargtypes => 'int8',
+ prosrc => 'pg_advisory_xact_lock_shared_int8' },
+{ oid => '6124', descr => 'obtain exclusive advisory lock for testing purpose',
+ proname => 'pg_advisory_test_xact_lock', provolatile => 'v',
+ prorettype => 'void', proargtypes => 'int8',
+ prosrc => 'pg_advisory_xact_lock_int8' },
# XML support
{ oid => '2893', descr => 'I/O',
diff --git a/src/pl/plpgsql/src/plpgsql--1.0.sql b/src/pl/plpgsql/src/plpgsql--1.0.sql
index 6e5b990fcc..165a670aa8 100644
--- a/src/pl/plpgsql/src/plpgsql--1.0.sql
+++ b/src/pl/plpgsql/src/plpgsql--1.0.sql
@@ -1,7 +1,7 @@
/* src/pl/plpgsql/src/plpgsql--1.0.sql */
CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler
- LANGUAGE c AS 'MODULE_PATHNAME';
+ LANGUAGE c PARALLEL SAFE AS 'MODULE_PATHNAME';
CREATE FUNCTION plpgsql_inline_handler(internal) RETURNS void
STRICT LANGUAGE c AS 'MODULE_PATHNAME';
diff --git a/src/test/isolation/specs/deadlock-parallel.spec b/src/test/isolation/specs/deadlock-parallel.spec
index 7ad290c0bd..7beaad46ee 100644
--- a/src/test/isolation/specs/deadlock-parallel.spec
+++ b/src/test/isolation/specs/deadlock-parallel.spec
@@ -37,10 +37,10 @@
setup
{
create function lock_share(int,int) returns int language sql as
- 'select pg_advisory_xact_lock_shared($1); select 1;' parallel safe;
+ 'select pg_advisory_test_xact_lock_shared($1); select 1;' parallel safe;
create function lock_excl(int,int) returns int language sql as
- 'select pg_advisory_xact_lock($1); select 1;' parallel safe;
+ 'select pg_advisory_test_xact_lock($1); select 1;' parallel safe;
create table bigt as select x from generate_series(1, 10000) x;
analyze bigt;
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index ca06d41dd0..2a4a83fab7 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -2386,6 +2386,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/expected/create_type.out b/src/test/regress/expected/create_type.out
index 14394cc95c..eb1c6bdcd2 100644
--- a/src/test/regress/expected/create_type.out
+++ b/src/test/regress/expected/create_type.out
@@ -48,7 +48,7 @@ NOTICE: return type int42 is only a shell
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type int42 is only a shell
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
@@ -58,7 +58,7 @@ NOTICE: return type text_w_default is only a shell
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type text_w_default is only a shell
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..c82d189823 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -1067,7 +1067,7 @@ drop domain di;
-- this has caused issues in the past
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
check (sql_is_distinct_from(value, null));
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..7e7ef24098 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -415,7 +415,7 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p
create or replace function part_hashint4_noop(value int4, seed int8)
returns int8 as $$
select value + seed;
-$$ language sql immutable;
+$$ language sql immutable parallel safe;
create operator class part_test_int4_ops
for type int4
using hash as
diff --git a/src/test/regress/input/create_function_1.source b/src/test/regress/input/create_function_1.source
index 6c69b7fe6c..b9a4e7af38 100644
--- a/src/test/regress/input/create_function_1.source
+++ b/src/test/regress/input/create_function_1.source
@@ -10,7 +10,7 @@ CREATE FUNCTION widget_in(cstring)
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -20,7 +20,7 @@ CREATE FUNCTION int44in(cstring)
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/output/create_function_1.source b/src/test/regress/output/create_function_1.source
index c66146db9d..0c1390e8c5 100644
--- a/src/test/regress/output/create_function_1.source
+++ b/src/test/regress/output/create_function_1.source
@@ -10,7 +10,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type widget is only a shell
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -21,7 +21,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type city_budget is only a shell
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index eb80a2fe06..68990b5b5f 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -978,6 +978,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/sql/create_type.sql b/src/test/regress/sql/create_type.sql
index a32a9e6795..285707e532 100644
--- a/src/test/regress/sql/create_type.sql
+++ b/src/test/regress/sql/create_type.sql
@@ -51,7 +51,7 @@ CREATE FUNCTION int42_in(cstring)
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
AS 'textin'
@@ -59,7 +59,7 @@ CREATE FUNCTION text_w_default_in(cstring)
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql
index 549c0b5adf..a022ae4223 100644
--- a/src/test/regress/sql/domain.sql
+++ b/src/test/regress/sql/domain.sql
@@ -724,7 +724,7 @@ drop domain di;
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql
index bfaa8a3b27..895524bc88 100644
--- a/src/test/regress/sql/insert.sql
+++ b/src/test/regress/sql/insert.sql
@@ -258,7 +258,7 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p
create or replace function part_hashint4_noop(value int4, seed int8)
returns int8 as $$
select value + seed;
-$$ language sql immutable;
+$$ language sql immutable parallel safe;
create operator class part_test_int4_ops
for type int4
--
2.18.4
0001-approach-1-check-parallel-safety-in-fmgr_info_cxt_se.patch
From e84e68de4054ee88f44a65213d1d5d17c6479a37 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Wed, 28 Apr 2021 19:22:12 +0800
Subject: [PATCH] check-parallel-safety-in-fmgr_info_cxt_security
---
src/backend/utils/fmgr/fmgr.c | 62 ++++++++++++++++++++++++++++++++---
1 file changed, 57 insertions(+), 5 deletions(-)
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index 3dfe6e5825..79ed3951b2 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -16,6 +16,8 @@
#include "postgres.h"
#include "access/detoast.h"
+#include "access/parallel.h"
+#include "access/xact.h"
#include "catalog/pg_language.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
@@ -52,6 +54,7 @@ typedef struct
} CFuncHashTabEntry;
static HTAB *CFuncHash = NULL;
+static bool safety_checking = false;
static void fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
@@ -152,6 +155,7 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
Datum prosrcdatum;
bool isnull;
char *prosrc;
+ char parallel_safety;
/*
* fn_oid *must* be filled in last. Some code assumes that if fn_oid is
@@ -163,6 +167,48 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
finfo->fn_mcxt = mcxt;
finfo->fn_expr = NULL; /* caller may set this later */
+ procedureTuple = NULL;
+ procedureStruct = NULL;
+
+ /* Do not need to check parallel safety in Init or Bootstrap mode */
+ if ((!safety_checking && IsNormalProcessingMode()) &&
+ (IsInParallelMode() || IsParallelWorker()))
+ {
+ /*
+ * Looking up functionId in the syscache may itself call built-in
+ * functions to scan pg_proc. Since functions used for systable scans
+ * must be parallel safe, set the safety-check flag here to skip the
+ * recursive safety check.
+ */
+ safety_checking = true;
+
+ PG_TRY();
+ {
+ procedureTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(functionId));
+ }
+ PG_FINALLY();
+ {
+ /* Reset the flag */
+ safety_checking = false;
+ }
+ PG_END_TRY();
+
+ if (!HeapTupleIsValid(procedureTuple))
+ elog(ERROR, "cache lookup failed for function %u", functionId);
+ procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);
+
+ parallel_safety = procedureStruct->proparallel;
+
+ /* Check the function's parallel safety */
+ if (((IsParallelWorker() &&
+ parallel_safety == PROPARALLEL_RESTRICTED) ||
+ parallel_safety == PROPARALLEL_UNSAFE))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+ errmsg("parallel-safety execution violation of function \"%s\" (%c)",
+ get_func_name(functionId), parallel_safety)));
+ }
+
if ((fbp = fmgr_isbuiltin(functionId)) != NULL)
{
/*
@@ -174,14 +220,20 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
finfo->fn_stats = TRACK_FUNC_ALL; /* ie, never track */
finfo->fn_addr = fbp->func;
finfo->fn_oid = functionId;
+
+ if (procedureTuple != NULL)
+ ReleaseSysCache(procedureTuple);
+
return;
}
- /* Otherwise we need the pg_proc entry */
- procedureTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(functionId));
- if (!HeapTupleIsValid(procedureTuple))
- elog(ERROR, "cache lookup failed for function %u", functionId);
- procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);
+ if (procedureStruct == NULL)
+ {
+ procedureTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(functionId));
+ if (!HeapTupleIsValid(procedureTuple))
+ elog(ERROR, "cache lookup failed for function %u", functionId);
+ procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);
+ }
finfo->fn_nargs = procedureStruct->pronargs;
finfo->fn_strict = procedureStruct->proisstrict;
--
2.18.4
0001-approach-2-check-parallel-safety-in-fmgr_info_cxt_se.patch
From 602e175eb0b0d7338948caa8f92f69cd4a8fa0c7 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Thu, 29 Apr 2021 08:53:26 +0800
Subject: [PATCH] approach 2 check-parallel-safety-in-fmgr_info_cxt_security
---
src/backend/utils/Gen_fmgrtab.pl | 33 +++++++++++++++++++-------------
src/backend/utils/fmgr/fmgr.c | 27 ++++++++++++++++++++++++--
src/include/utils/fmgrtab.h | 8 ++++++--
3 files changed, 51 insertions(+), 17 deletions(-)
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index 881568defd..6d95d2e4d4 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -72,15 +72,16 @@ foreach my $row (@{ $catalog_data{pg_proc} })
push @fmgr,
{
- oid => $bki_values{oid},
- name => $bki_values{proname},
- lang => $bki_values{prolang},
- kind => $bki_values{prokind},
- strict => $bki_values{proisstrict},
- retset => $bki_values{proretset},
- nargs => $bki_values{pronargs},
- args => $bki_values{proargtypes},
- prosrc => $bki_values{prosrc},
+ oid => $bki_values{oid},
+ name => $bki_values{proname},
+ lang => $bki_values{prolang},
+ kind => $bki_values{prokind},
+ strict => $bki_values{proisstrict},
+ parallel => $bki_values{proparallel},
+ retset => $bki_values{proretset},
+ nargs => $bki_values{pronargs},
+ args => $bki_values{proargtypes},
+ prosrc => $bki_values{prosrc},
};
# Count so that we can detect overloaded pronames.
@@ -208,9 +209,13 @@ foreach my $s (sort { $a->{oid} <=> $b->{oid} } @fmgr)
# Create the fmgr_builtins table, collect data for fmgr_builtin_oid_index
print $tfh "\nconst FmgrBuiltin fmgr_builtins[] = {\n";
-my %bmap;
-$bmap{'t'} = 'true';
-$bmap{'f'} = 'false';
+my %bmap_strict;
+$bmap_strict{'t'} = 1 << 0;
+$bmap_strict{'f'} = 0;
+my %bmap_retset;
+$bmap_retset{'t'} = 1 << 1;
+$bmap_retset{'f'} = 0;
+
my @fmgr_builtin_oid_index;
my $last_builtin_oid = 0;
my $fmgr_count = 0;
@@ -220,9 +225,11 @@ foreach my $s (sort { $a->{oid} <=> $b->{oid} } @fmgr)
# We do not need entries for aggregate functions
next if $s->{kind} eq 'a';
+ my $bitflag = $bmap_strict{$s->{strict}} | $bmap_retset{$s->{retset}};
print $tfh ",\n" if ($fmgr_count > 0);
print $tfh
- " { $s->{oid}, $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, \"$s->{prosrc}\", $s->{prosrc} }";
+# " { $s->{oid}, $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, \"$s->{prosrc}\", $s->{prosrc} }";
+ " { $s->{oid}, $s->{nargs}, $bitflag, \'$s->{parallel}\', \"$s->{prosrc}\", $s->{prosrc} }";
$fmgr_builtin_oid_index[ $s->{oid} ] = $fmgr_count++;
$last_builtin_oid = $s->{oid};
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index 3dfe6e5825..7f7286cb8b 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -16,6 +16,8 @@
#include "postgres.h"
#include "access/detoast.h"
+#include "access/parallel.h"
+#include "access/xact.h"
#include "catalog/pg_language.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
@@ -56,6 +58,7 @@ static HTAB *CFuncHash = NULL;
static void fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
bool ignore_security);
+static void fmgr_check_parallel_safety(char parallel_safety, Oid functionId);
static void fmgr_info_C_lang(Oid functionId, FmgrInfo *finfo, HeapTuple procedureTuple);
static void fmgr_info_other_lang(Oid functionId, FmgrInfo *finfo, HeapTuple procedureTuple);
static CFuncHashTabEntry *lookup_C_func(HeapTuple procedureTuple);
@@ -165,12 +168,15 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
if ((fbp = fmgr_isbuiltin(functionId)) != NULL)
{
+ /* Check parallel safety for built-in functions */
+ fmgr_check_parallel_safety(fbp->parallel, functionId);
+
/*
* Fast path for builtin functions: don't bother consulting pg_proc
*/
finfo->fn_nargs = fbp->nargs;
- finfo->fn_strict = fbp->strict;
- finfo->fn_retset = fbp->retset;
+ finfo->fn_strict = GETSTRICT(fbp);
+ finfo->fn_retset = GETRETSET(fbp);
finfo->fn_stats = TRACK_FUNC_ALL; /* ie, never track */
finfo->fn_addr = fbp->func;
finfo->fn_oid = functionId;
@@ -183,6 +189,9 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
elog(ERROR, "cache lookup failed for function %u", functionId);
procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);
+ /* Check parallel safety for other functions */
+ fmgr_check_parallel_safety(procedureStruct->proparallel, functionId);
+
finfo->fn_nargs = procedureStruct->pronargs;
finfo->fn_strict = procedureStruct->proisstrict;
finfo->fn_retset = procedureStruct->proretset;
@@ -264,6 +273,20 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
ReleaseSysCache(procedureTuple);
}
+static void
+fmgr_check_parallel_safety(char parallel_safety, Oid functionId)
+{
+ if (IsInParallelMode() &&
+ ((IsParallelWorker() &&
+ parallel_safety == PROPARALLEL_RESTRICTED) ||
+ parallel_safety == PROPARALLEL_UNSAFE))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+ errmsg("parallel-safety execution violation of function \"%s\" (%c)",
+ get_func_name(functionId), parallel_safety)));
+}
+
+
/*
* Return module and C function name providing implementation of functionId.
*
diff --git a/src/include/utils/fmgrtab.h b/src/include/utils/fmgrtab.h
index 21a5f21156..beaf38db63 100644
--- a/src/include/utils/fmgrtab.h
+++ b/src/include/utils/fmgrtab.h
@@ -26,12 +26,16 @@ typedef struct
{
Oid foid; /* OID of the function */
short nargs; /* 0..FUNC_MAX_ARGS, or -1 if variable count */
- bool strict; /* T if function is "strict" */
- bool retset; /* T if function returns a set */
+ char bitflag; /* 1 << 0 if function is "strict"
+ * 1 << 1 if function returns a set */
+ char parallel;
const char *funcName; /* C name of the function */
PGFunction func; /* pointer to compiled function */
} FmgrBuiltin;
+#define GETSTRICT(fbp) ((fbp->bitflag & (1 << 0)) ? true : false)
+#define GETRETSET(fbp) ((fbp->bitflag & (1 << 1)) ? true : false)
+
extern const FmgrBuiltin fmgr_builtins[];
extern const int fmgr_nbuiltins; /* number of entries in table */
--
2.18.4
On Fri, Apr 23, 2021 at 10:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Isn't parallel safety also the C code property?
In my opinion, yes.
So, isn't it better to disallow changing parallel
safety for built-in functions?
Superusers can do a lot of DML operations on the system catalogs that
are manifestly unsafe. I think we should really consider locking that
down somehow, but I doubt it makes sense to treat this case separately
from all the others. What do you think will happen if you change
proargtypes?
Also, if the strict property of built-in functions is fixed
internally, why do we allow users to change it, and is that of any help?
One real application of allowing these sorts of changes is letting
users correct things that were done wrong originally without waiting
for a new major release.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, Apr 28, 2021 at 9:42 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
So, If we do not want to lock down the parallel safety of built-in functions.
It seems we can try to fetch the proparallel from pg_proc for built-in function
in fmgr_info_cxt_security too. To avoid recursive safety check when fetching
proparallel from pg_proc, we can add a Global variable to mark is it in a recursive state.
And we skip safety check in a recursive state, In this approach, parallel safety
will not be locked, and there are no new members in FmgrBuiltin.

Attaching the patch about this approach [0001-approach-1].
Thoughts ?
This seems to be full of complicated if-tests that don't seem
necessary and aren't explained by the comments. Also, introducing a
system cache lookup here seems completely unacceptable from a
reliability point of view, and I bet it's not too good for
performance, either.
I also attached another approach patch [0001-approach-2] about adding
parallel safety in FmgrBuiltin, because this approach seems faster and
we can combine some bool member into a bitflag to avoid enlarging the
FmgrBuiltin array, though this approach will lock down the parallel safety
of built-in function.
This doesn't seem like a good idea either.
I really don't understand what problem any of this is intended to
solve. Bharath's analysis above seems right on point to me. I think if
anybody is writing a patch that requires that this be changed in this
way, that person is probably doing something wrong.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, May 5, 2021 at 5:09 AM Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, Apr 23, 2021 at 10:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Isn't parallel safety also the C code property?
Also, if the strict property of built-in functions is fixed
internally, why do we allow users to change it, and is that of any help?

One real application of allowing these sorts of changes is letting
users correct things that were done wrong originally without waiting
for a new major release.
Problem is, for built-in functions, the changes are allowed, but for
some properties (like strict) the allowed changes don't actually take
effect (this is what Amit was referring to - so why allow those
changes?).
It's because some of the function properties are cached in
FmgrBuiltins[] (for a "fast-path" lookup for built-ins), according to
their state at build time (from pg_proc.dat), but ALTER FUNCTION is
just changing it in the system catalogs. Also, with sufficient
privileges, a built-in function can be redefined, yet the original
function (whose info is cached in FmgrBuiltins[]) is always invoked,
not the newly-defined version.
Regards,
Greg Nancarrow
Fujitsu Australia
On Tue, May 4, 2021 at 11:47 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
Problem is, for built-in functions, the changes are allowed, but for
some properties (like strict) the allowed changes don't actually take
effect (this is what Amit was referring to - so why allow those
changes?).
It's because some of the function properties are cached in
FmgrBuiltins[] (for a "fast-path" lookup for built-ins), according to
their state at build time (from pg_proc.dat), but ALTER FUNCTION is
just changing it in the system catalogs. Also, with sufficient
privileges, a built-in function can be redefined, yet the original
function (whose info is cached in FmgrBuiltins[]) is always invoked,
not the newly-defined version.
I agree. I think that's not ideal. I think we should consider putting
some more restrictions on updating system catalog changes, and I also
think that if we can get out of having strict need to be part of
FmgrBuiltins[] that would be good. But what I don't agree with is the
idea that since strict already has this problem, it's OK to do the
same thing with parallel-safety. That seems to me to be making a bad
situation worse, and I can't see what problem it actually solves.
--
Robert Haas
EDB: http://www.enterprisedb.com
From: Robert Haas <robertmhaas@gmail.com>
On Tue, May 4, 2021 at 11:47 PM Greg Nancarrow <gregn4422@gmail.com>
wrote:

Problem is, for built-in functions, the changes are allowed, but for
some properties (like strict) the allowed changes don't actually take
effect (this is what Amit was referring to - so why allow those
changes?).
It's because some of the function properties are cached in
FmgrBuiltins[] (for a "fast-path" lookup for built-ins), according to
their state at build time (from pg_proc.dat), but ALTER FUNCTION is
just changing it in the system catalogs. Also, with sufficient
privileges, a built-in function can be redefined, yet the original
function (whose info is cached in FmgrBuiltins[]) is always invoked,
not the newly-defined version.

I agree. I think that's not ideal. I think we should consider putting
some more restrictions on updating system catalog changes, and I also
think that if we can get out of having strict need to be part of
FmgrBuiltins[] that would be good. But what I don't agree with is the
idea that since strict already has this problem, it's OK to do the
same thing with parallel-safety. That seems to me to be making a bad
situation worse, and I can't see what problem it actually solves.
Let me divide the points:
(1) Is it better to get hardcoded function properties out of fmgr_builtins[]?
There is little value in doing so or thinking about it. Users have no business changing system objects, in this case system functions.
Also, hardcoding is a worthwhile strategy for performance or other unavoidable reasons. Postgres already uses it, as in the system catalog relcache below:
[relcache.c]
/*
* hardcoded tuple descriptors, contents generated by genbki.pl
*/
static const FormData_pg_attribute Desc_pg_class[Natts_pg_class] = {Schema_pg_class};
static const FormData_pg_attribute Desc_pg_attribute[Natts_pg_attribute] = {Schema_pg_attribute};
...
(2) Should it be disallowed for users to change system function properties with ALTER FUNCTION?
Maybe yes, but it's not an important issue for achieving parallel INSERT SELECT at the moment, so I think this can be discussed in a separate thread.
As a reminder, Postgres has safeguards against modifying system objects, as follows:
test=# drop function pg_wal_replay_pause();
ERROR: cannot drop function pg_wal_replay_pause() because it is required by the database system
test=# drop table pg_largeobject;
ERROR: permission denied: "pg_largeobject" is a system catalog
OTOH, Postgres doesn't disallow changing system table column values directly, such as UPDATE pg_proc SET .... But the manual warns that such operations are dangerous, so we don't have to care about that case.
Chapter 52. System Catalogs
https://www.postgresql.org/docs/devel/catalogs.html
"You can drop and recreate the tables, add columns, insert and update values, and severely mess up your system that way. Normally, one should not change the system catalogs by hand, there are normally SQL commands to do that. (For example, CREATE DATABASE inserts a row into the pg_database catalog — and actually creates the database on disk.) There are some exceptions for particularly esoteric operations, but many of those have been made available as SQL commands over time, and so the need for direct manipulation of the system catalogs is ever decreasing."
(3) Why do we want to have parallel-safety in fmgr_builtins[]?
As proposed in this thread and/or "Parallel INSERT SELECT take 2", we thought of detecting parallel-unsafe function execution during SQL statement execution, instead of imposing heavy overhead by checking parallel safety during query planning. Specifically, we add a parallel safety check in fmgr_info() and/or FunctionCallInvoke().
(Alternatively, we could conclude that parallel-unsafe built-in functions won't be used in parallel DML. In that case, we don't change FmgrBuiltin and simply skip the parallel safety check for built-in functions at call time. Would you buy this?)
Regards
Takayuki Tsunakawa
On Wed, May 5, 2021 at 7:39 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, May 4, 2021 at 11:47 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
Problem is, for built-in functions, the changes are allowed, but for
some properties (like strict) the allowed changes don't actually take
effect (this is what Amit was referring to - so why allow those
changes?).
It's because some of the function properties are cached in
FmgrBuiltins[] (for a "fast-path" lookup for built-ins), according to
their state at build time (from pg_proc.dat), but ALTER FUNCTION is
just changing it in the system catalogs. Also, with sufficient
privileges, a built-in function can be redefined, yet the original
function (whose info is cached in FmgrBuiltins[]) is always invoked,
not the newly-defined version.I agree. I think that's not ideal. I think we should consider putting
some more restrictions on updating system catalog changes, and I also
think that if we can get out of having strict need to be part of
FmgrBuiltins[] that would be good. But what I don't agree with is the
idea that since strict already has this problem, it's OK to do the
same thing with parallel-safety. That seems to me to be making a bad
situation worse, and I can't see what problem it actually solves.
The idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. I think that is a good safety net especially if we can do
it with some simple check. Now, we already have pg_proc information in
fmgr_info_cxt_security for non-built-in functions, so we can check
that and error out if the unsafe function is invoked in parallel mode.
It has been observed that we were calling some unsafe functions in
parallel-mode in the regression tests which is caught by such a check.
I think here the main challenge is to do a similar check for built-in
functions and one of the ideas to do that was to extend FmgrBuiltins
to cache that information. I see why that idea is not good and maybe
we can see if there is some other place where we already fetch pg_proc
for built-in functions and can we have such a check at that place? If
that is not feasible then we can probably have such a check just for
non-built-in functions as that seems straightforward.
--
With Regards,
Amit Kapila.
From: Robert Haas <robertmhaas@gmail.com>
On Wed, Apr 28, 2021 at 9:42 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:

So, if we do not want to lock down the parallel safety of built-in functions,
it seems we can try to fetch proparallel from pg_proc for built-in functions
in fmgr_info_cxt_security too. To avoid a recursive safety check when fetching
proparallel from pg_proc, we can add a global variable to mark whether we are
in a recursive state, and skip the safety check while in that state. With this
approach, parallel safety will not be locked down, and there are no new
members in FmgrBuiltin. Attaching a patch for this approach [0001-approach-1].
Thoughts?

This seems to be full of complicated if-tests that don't seem
necessary and aren't explained by the comments. Also, introducing a
system cache lookup here seems completely unacceptable from a
reliability point of view, and I bet it's not too good for
performance, either.
Agreed. Also, PG_TRY() would be relatively heavyweight here. I'm inclined to avoid this approach.
I also attached another patch [0001-approach-2] that adds parallel safety
to FmgrBuiltin, because this approach seems faster and we can combine some
bool members into a bitflag to avoid enlarging the FmgrBuiltin array,
though this approach will lock down the parallel safety of built-in
functions.

This doesn't seem like a good idea either.
This looks good to me. What makes you think so?
That said, I actually think we want to avoid even this change. That is, I'm wondering if we can skip the parallel safety check for built-in functions.
Can anyone think of the need to check the parallel safety of built-in functions in the context of parallel INSERT SELECT? The planner already checks (or can check) the parallel safety of the SELECT part with max_parallel_hazard(). Regarding the INSERT part, we're trying to rely on the parallel safety of the target table that the user specified with CREATE/ALTER TABLE. I don't see where we need to check the parallel safety of built-in functions.
Regards
Takayuki Tsunakawa
On Wed, May 5, 2021 at 10:54 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
(1) Is it better to get hardcoded function properties out of fmgr_builtins[]?
It's hardly worth doing or thinking about. It's no business of users to change system objects, in this case system functions.
I don't entirely agree with this. Whether or not users have any
business changing system functions, it's better to have one source of
truth than two. Now that being said, this is not a super-important
problem for us to go solve, and hard-coding a certain amount of stuff
is probably necessary to allow the system to bootstrap itself. So for
me it's one of those things that is in a grey area: if someone showed
up with a patch to make it better, I'd be happy. But I probably
wouldn't spend much time on writing such a patch unless it solved some
other problem that I cared about.
(3) Why do we want to have parallel-safety in fmgr_builtins[]?
As proposed in this thread and/or "Parallel INSERT SELECT take 2", we thought of detecting parallel unsafe function execution during SQL statement execution, instead of imposing much overhead to check parallel safety during query planning. Specifically, we add parallel safety check in fmgr_info() and/or FunctionCallInvoke().
I haven't read that thread, but I don't understand how that can work.
The reason we need to detect it at plan time is because we might need
to use a different plan. At execution time it's too late for that.
Also, it seems potentially quite expensive. A query may be planned
once and executed many times. Also, a single query execution may call
the same SQL function many times. I think we don't want to incur the
overhead of an extra syscache lookup every time anyone calls any
function. A very simple expression like a+b+c+d+e involves four
function calls, and + need not be a built-in, if the data type is
user-defined. And that might be happening for every row in a table
with millions of rows.
(Alternatively, I think we can conclude that we assume parallel unsafe built-in functions won't be used in parallel DML. In that case, we don't change FmgrBuiltin and we just skip the parallel safety check for built-in functions when the function is called. Would you buy this?)
I don't really understand this idea. There's no such thing as parallel
DML, is there? There's just DML, which we must decide whether it can
be done in parallel or not based on, among other things, the
parallel-safety markings of the functions it contains. Maybe I am not
understanding you correctly, but it seems like you're suggesting that
in some cases we can just assume that the user hasn't done something
parallel-unsafe without making any attempt to check it. I don't think
I could support that.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, May 6, 2021 at 3:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
The idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. I think that is a good safety net especially if we can do
it with some simple check. Now, we already have pg_proc information in
fmgr_info_cxt_security for non-built-in functions, so we can check
that and error out if the unsafe function is invoked in parallel mode.
It has been observed that we were calling some unsafe functions in
parallel-mode in the regression tests which is caught by such a check.
I see your point, but I am not convinced. As I said to Tsunakawa-san,
doing the check here seems expensive. Also, I had the idea in mind
that parallel-safety should work like volatility. We don't check at
runtime whether a volatile function is being called in a context where
volatile functions are not supposed to be used. If for example you try
to call a volatile function directly from an index expression I
believe you will get an error. But if the index expression calls an
immutable function and then that function internally calls something
volatile, you don't get an error. Now it might not be a good idea: you
could end up with a broken index. But that's your fault for
mislabeling the function you used.
Sometimes this is actually quite useful. You might know that, while
the function is in general volatile, it is immutable in the particular
way that you are using it. Or, perhaps, you are using the volatile
function incidentally and it doesn't affect the output of your
function at all. Or, maybe you actually want to build an index that
might break, and then it's up to you to rebuild the index if and when
that is required. Users do this kind of thing all the time, I think,
and would be unhappy if we started checking it more rigorously than we
do today.
Now, I don't see why the same idea can't or shouldn't apply to
parallel-safety. If you call a parallel-unsafe function in a parallel
context, it's pretty likely that you are going to get an error, and so
you might not want to do it. If the function is written in C, it could
even cause horrible things to happen so that you crash the whole
backend or something, but I tried to set things up so that for
built-in functions you'll just get an error. But on the other hand,
maybe the parallel-unsafe function you are calling is not
parallel-unsafe in all cases. If you want to create a wrapper function
that is labelled parallel-safe and try to make that it only calls the
parallel-unsafe function in the cases where there's no safety problem,
that's up to you!
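That wrapper idea could look like the following hypothetical sketch (the function names are invented for illustration; the user bears the burden of ensuring the guarded branch really never runs where it would be unsafe):

```sql
-- log_unsafe() stands in for some parallel-unsafe function, e.g. one
-- with side effects. The wrapper is labelled PARALLEL SAFE on the
-- user's own assertion that the unsafe branch is never reached for the
-- rows a parallel scan will feed it.
CREATE FUNCTION checked_add(a int, b int) RETURNS int
LANGUAGE plpgsql PARALLEL SAFE AS $$
BEGIN
    IF a < 0 THEN              -- user guarantees a >= 0 in parallel queries
        PERFORM log_unsafe(a); -- parallel-unsafe call, guarded
    END IF;
    RETURN a + b;
END;
$$;
```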
It's possible that I had the wrong idea here, so maybe the question
deserves more thought, but I wanted to explain what my thought process
was.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, May 6, 2021 at 5:26 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:
Can anyone think of the need to check the parallel safety of built-in functions in the context of parallel INSERT SELECT? The planner already checks (or can check) the parallel safety of the SELECT part with max_parallel_hazard(). Regarding the INSERT part, we're trying to rely on the parallel safety of the target table that the user specified with CREATE/ALTER TABLE. I don't see where we need to check the parallel safety of built-in functions.
Yes, I certainly can think of a reason to do this.
The idea, for the approach being discussed, is to allow the user to
declare parallel-safety on a table, but then to catch any possible
violations of this at runtime (as opposed to adding additional
parallel-safety checks at planning time).
So for INSERT with parallel SELECT, for example (which runs in
parallel mode), the execution of index expressions,
column-default expressions, check constraints etc. may end up invoking
functions (built-in or otherwise) that are NOT parallel-safe - so we
could choose to error-out in this case when these violations are
detected.
As far as I can see, this checking of function parallel-safety can be
done with little overhead to the current code - it already gets proc
information from the system cache for non-built-in functions, and for
built-in functions it could store the parallel-safety status in
FmgrBuiltin and simply get it from there (I don't think we should be
allowing changes to built-in function properties - currently it is
allowed, but it doesn't work properly).
The other option is to just blindly trust the parallel-safety
declaration on tables and whatever happens at runtime happens.
Regards,
Greg Nancarrow
Fujitsu Australia
On Thu, May 6, 2021 at 4:35 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, May 6, 2021 at 3:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
The idea here is to check for parallel safety of functions at
someplace in the code during function invocation so that if we execute
any parallel unsafe/restricted function via parallel worker then we
error out. I think that is a good safety net especially if we can do
it with some simple check. Now, we already have pg_proc information in
fmgr_info_cxt_security for non-built-in functions, so we can check
that and error out if the unsafe function is invoked in parallel mode.
It has been observed that we were calling some unsafe functions in
parallel-mode in the regression tests which is caught by such a check.

I see your point, but I am not convinced. As I said to Tsunakawa-san,
doing the check here seems expensive.
If I read your email correctly, then you are saying it is expensive
based on the idea that we need to perform an extra syscache lookup, but
actually, for non-built-in functions, we already have parallel-safety
information, so such a check should not incur a significant cost.
Also, I had the idea in mind
that parallel-safety should work like volatility. We don't check at
runtime whether a volatile function is being called in a context where
volatile functions are not supposed to be used. If for example you try
to call a volatile function directly from an index expression I
believe you will get an error. But if the index expression calls an
immutable function and then that function internally calls something
volatile, you don't get an error. Now it might not be a good idea: you
could end up with a broken index. But that's your fault for
mislabeling the function you used.

Sometimes this is actually quite useful. You might know that, while
the function is in general volatile, it is immutable in the particular
way that you are using it. Or, perhaps, you are using the volatile
function incidentally and it doesn't affect the output of your
function at all. Or, maybe you actually want to build an index that
might break, and then it's up to you to rebuild the index if and when
that is required. Users do this kind of thing all the time, I think,
and would be unhappy if we started checking it more rigorously than we
do today.

Now, I don't see why the same idea can't or shouldn't apply to
parallel-safety. If you call a parallel-unsafe function in a parallel
context, it's pretty likely that you are going to get an error, and so
you might not want to do it. If the function is written in C, it could
even cause horrible things to happen so that you crash the whole
backend or something, but I tried to set things up so that for
built-in functions you'll just get an error. But on the other hand,
maybe the parallel-unsafe function you are calling is not
parallel-unsafe in all cases. If you want to create a wrapper function
that is labelled parallel-safe and try to make that it only calls the
parallel-unsafe function in the cases where there's no safety problem,
that's up to you!
I think it is difficult to say for what purpose a parallel-unsafe
function got called in a parallel context, so if we give an error in
cases where it could otherwise lead to a crash or cause other
horrible things, users will probably appreciate it. OTOH, if the
parallel-safety labeling is wrong (a parallel-safe function is marked
parallel-unsafe) and we gave an error in such a case, the user can
always change the parallel-safety attribute by using ALTER FUNCTION.
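To illustrate that last point, relabeling a mislabeled function needs no redefinition (a hypothetical sketch; the function name and signature are invented):

```sql
-- Hypothetical example: f(int) was created without a PARALLEL
-- specification, so it defaulted to PARALLEL UNSAFE. If its body is in
-- fact safe to run in parallel workers, the owner can simply relabel it:
ALTER FUNCTION f(int) PARALLEL SAFE;
```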
Now, if adding such a check is costly or needs some major redesign,
then it might not be worthwhile, but I don't think that is the case
for non-built-in function invocation.
--
With Regards,
Amit Kapila.
From: Robert Haas <robertmhaas@gmail.com>
On Wed, May 5, 2021 at 10:54 PM tsunakawa.takay@fujitsu.com
<tsunakawa.takay@fujitsu.com> wrote:

As proposed in this thread and/or "Parallel INSERT SELECT take 2", we
thought of detecting parallel unsafe function execution during SQL statement
execution, instead of imposing much overhead to check parallel safety during
query planning. Specifically, we add parallel safety check in fmgr_info()
and/or FunctionCallInvoke().

I haven't read that thread, but I don't understand how that can work.
The reason we need to detect it at plan time is because we might need
to use a different plan. At execution time it's too late for that.
(I forgot to say this in my previous email. Robert-san, thank you very much for taking time to look at this and giving feedback. It was sad that we had to revert our parallel INSERT SELECT for redesign at the very end of the last CF. We need advice and suggestions from knowledgeable and thoughtful people like Tom-san, Andres-san and you in early stages to not repeat the tragedy.)
I'd really like you to have a look at the first mail in [1], and to get your feedback like "this part should be like ... instead" and "this part would probably work, I think." Without feedback from leading developers, I'm somewhat at a loss if and how we can proceed with the proposed approach.
To put it shortly, we found that it can take non-negligible time for the planner to check the parallel safety of the target table of INSERT SELECT when it has many (hundreds or thousands of) partitions. The check also added a lot of complicated code. So, we were inclined to take Tom-san's suggestion -- let the user specify the parallel safety of the target table with CREATE/ALTER TABLE, and have the planner just decide a query plan based on it. Caching the results of parallel safety checks in the relcache or a new shared hash table didn't seem to work well to me, or at least it's beyond my ability.
We may think that it's okay to just believe the user-specified parallel safety. But I thought we could step up and do our best to check the parallel safety during statement execution, if it's not very invasive in terms of performance and code complexity. The aforementioned idea is that if the parallel processes find a called function parallel unsafe, they error out. All ancillary objects of the target table (data types, constraints, indexes, triggers, etc.) come down to some UDF, so it should be enough to check the parallel safety when the UDF is called.
Also, it seems potentially quite expensive. A query may be planned
once and executed many times. Also, a single query execution may call
the same SQL function many times. I think we don't want to incur the
overhead of an extra syscache lookup every time anyone calls any
function. A very simple expression like a+b+c+d+e involves four
function calls, and + need not be a built-in, if the data type is
user-defined. And that might be happening for every row in a table
with millions of rows.
We (optimistically) expect that the overhead won't be serious, because the parallel safety information is already at hand in the FmgrInfo struct when the function is called. We don't have to look up the syscache every time the function is called.
Of course, adding even a single if statement may lead to a disaster in a critical path, so we need to assess the performance. I'd also appreciate if you could suggest some good workload we should experiment in the thread above.
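As one possible micro-benchmark (a sketch only; the table name and row count are arbitrary choices, not from the thread), Robert's a+b+c+d+e expression can be timed over a large table so that any per-call overhead added in fmgr shows up directly:

```sql
-- Four '+' invocations per row, a million rows: compare EXPLAIN ANALYZE
-- timings with and without the added parallel-safety check in fmgr.
CREATE TABLE fncall_bench AS
    SELECT g AS a, g AS b, g AS c, g AS d, g AS e
    FROM generate_series(1, 1000000) g;
ANALYZE fncall_bench;
EXPLAIN ANALYZE SELECT a + b + c + d + e FROM fncall_bench;
```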
[1]: Parallel INSERT SELECT take 2 /messages/by-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com
Regards
Takayuki Tsunakawa
Sometimes this is actually quite useful. You might know that, while
the function is in general volatile, it is immutable in the particular
way that you are using it. Or, perhaps, you are using the volatile
function incidentally and it doesn't affect the output of your
function at all. Or, maybe you actually want to build an index that
might break, and then it's up to you to rebuild the index if and when
that is required. Users do this kind of thing all the time, I think,
and would be unhappy if we started checking it more rigorously than we
do today.

Now, I don't see why the same idea can't or shouldn't apply to
parallel-safety. If you call a parallel-unsafe function in a parallel
context, it's pretty likely that you are going to get an error, and so
you might not want to do it. If the function is written in C, it could
even cause horrible things to happen so that you crash the whole
backend or something, but I tried to set things up so that for
built-in functions you'll just get an error. But on the other hand,
maybe the parallel-unsafe function you are calling is not
parallel-unsafe in all cases. If you want to create a wrapper function
that is labelled parallel-safe and try to make that it only calls the
parallel-unsafe function in the cases where there's no safety problem,
that's up to you!

I think it is difficult to say for what purpose parallel-unsafe function got called in
parallel context so if we give an error in cases where otherwise it could lead to
a crash or caused other horrible things, users will probably appreciate us.
OTOH, if the parallel-safety labeling is wrong (parallel-safe function is marked
parallel-unsafe) and we gave an error in such a case, the user can always change
the parallel-safety attribute by using Alter Function.
Now, if adding such a check is costly or needs some major re-design then
probably it might not be worth whereas I don't think that is the case for
non-built-in function invocation.
For now, just in case someone wants to take a look at the patches for the safety check:
I split the work into 0001 (parallel safety check for user-defined functions), 0003 (parallel safety check for built-in functions),
and the fixes for the test cases.
IMO, with such a check giving an error when a parallel-unsafe function is detected in parallel mode,
it will be easier for users to discover potential hazards (parallel-unsafe functions) in parallel mode.
I think users are likely to invoke a parallel-unsafe function inside a parallel-safe function unintentionally,
and such a check can help them detect the problem more easily.
Admittedly, the strict check limits some usages (intentional wrapper functions), as Robert-san said.
To mitigate that limitation, I was thinking: can we do the safety check conditionally, such as only checking the top-level function invocation, and/or
introduce a GUC option to control whether to do the strict parallel safety check? Thoughts?
Best regards,
houzj
Attachments:
0003-check-built-in-function-parallel-safety-in-fmgr_info.patch
From 642529c99c1a624707cab32750b5c046e2058560 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@cn.fujitsu.com>
Date: Tue, 11 May 2021 08:55:37 +0800
Subject: [PATCH 2/2] check built-in function parallel safety in fmgr_info
---
src/backend/utils/Gen_fmgrtab.pl | 33 ++++++++++++++++++++-------------
src/backend/utils/fmgr/fmgr.c | 7 +++++--
src/include/utils/fmgrtab.h | 8 ++++++--
3 files changed, 31 insertions(+), 17 deletions(-)
diff --git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl
index 881568d..6d95d2e 100644
--- a/src/backend/utils/Gen_fmgrtab.pl
+++ b/src/backend/utils/Gen_fmgrtab.pl
@@ -72,15 +72,16 @@ foreach my $row (@{ $catalog_data{pg_proc} })
push @fmgr,
{
- oid => $bki_values{oid},
- name => $bki_values{proname},
- lang => $bki_values{prolang},
- kind => $bki_values{prokind},
- strict => $bki_values{proisstrict},
- retset => $bki_values{proretset},
- nargs => $bki_values{pronargs},
- args => $bki_values{proargtypes},
- prosrc => $bki_values{prosrc},
+ oid => $bki_values{oid},
+ name => $bki_values{proname},
+ lang => $bki_values{prolang},
+ kind => $bki_values{prokind},
+ strict => $bki_values{proisstrict},
+ parallel => $bki_values{proparallel},
+ retset => $bki_values{proretset},
+ nargs => $bki_values{pronargs},
+ args => $bki_values{proargtypes},
+ prosrc => $bki_values{prosrc},
};
# Count so that we can detect overloaded pronames.
@@ -208,9 +209,13 @@ foreach my $s (sort { $a->{oid} <=> $b->{oid} } @fmgr)
# Create the fmgr_builtins table, collect data for fmgr_builtin_oid_index
print $tfh "\nconst FmgrBuiltin fmgr_builtins[] = {\n";
-my %bmap;
-$bmap{'t'} = 'true';
-$bmap{'f'} = 'false';
+my %bmap_strict;
+$bmap_strict{'t'} = 1 << 0;
+$bmap_strict{'f'} = 0;
+my %bmap_retset;
+$bmap_retset{'t'} = 1 << 1;
+$bmap_retset{'f'} = 0;
+
my @fmgr_builtin_oid_index;
my $last_builtin_oid = 0;
my $fmgr_count = 0;
@@ -220,9 +225,11 @@ foreach my $s (sort { $a->{oid} <=> $b->{oid} } @fmgr)
# We do not need entries for aggregate functions
next if $s->{kind} eq 'a';
+ my $bitflag = $bmap_strict{$s->{strict}} | $bmap_retset{$s->{retset}};
print $tfh ",\n" if ($fmgr_count > 0);
print $tfh
- " { $s->{oid}, $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, \"$s->{prosrc}\", $s->{prosrc} }";
+# " { $s->{oid}, $s->{nargs}, $bmap{$s->{strict}}, $bmap{$s->{retset}}, \"$s->{prosrc}\", $s->{prosrc} }";
+ " { $s->{oid}, $s->{nargs}, $bitflag, \'$s->{parallel}\', \"$s->{prosrc}\", $s->{prosrc} }";
$fmgr_builtin_oid_index[ $s->{oid} ] = $fmgr_count++;
$last_builtin_oid = $s->{oid};
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index a20faf3..7f7286c 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -168,12 +168,15 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
if ((fbp = fmgr_isbuiltin(functionId)) != NULL)
{
+ /* Check parallel safety for built-in functions */
+ fmgr_check_parallel_safety(fbp->parallel, functionId);
+
/*
* Fast path for builtin functions: don't bother consulting pg_proc
*/
finfo->fn_nargs = fbp->nargs;
- finfo->fn_strict = fbp->strict;
- finfo->fn_retset = fbp->retset;
+ finfo->fn_strict = GETSTRICT(fbp);
+ finfo->fn_retset = GETRETSET(fbp);
finfo->fn_stats = TRACK_FUNC_ALL; /* ie, never track */
finfo->fn_addr = fbp->func;
finfo->fn_oid = functionId;
diff --git a/src/include/utils/fmgrtab.h b/src/include/utils/fmgrtab.h
index 21a5f21..beaf38d 100644
--- a/src/include/utils/fmgrtab.h
+++ b/src/include/utils/fmgrtab.h
@@ -26,12 +26,16 @@ typedef struct
{
Oid foid; /* OID of the function */
short nargs; /* 0..FUNC_MAX_ARGS, or -1 if variable count */
- bool strict; /* T if function is "strict" */
- bool retset; /* T if function returns a set */
+ char bitflag; /* 1 << 0 if function is "strict"
+ * 1 << 1 if function returns a set */
+ char parallel;
const char *funcName; /* C name of the function */
PGFunction func; /* pointer to compiled function */
} FmgrBuiltin;
+#define GETSTRICT(fbp) ((fbp->bitflag & (1 << 0)) ? true : false)
+#define GETRETSET(fbp) ((fbp->bitflag & (1 << 1)) ? true : false)
+
extern const FmgrBuiltin fmgr_builtins[];
extern const int fmgr_nbuiltins; /* number of entries in table */
--
2.7.2.windows.1
0004-fix-builtin-parallel-safety-label-in-testcase.patch
From 6a911c40ef29599803c59f9bfe7c02f27dbdcb8e Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Tue, 11 May 2021 10:22:30 +0800
Subject: [PATCH] fix builtin parallel safety label in testcase
---
src/include/catalog/pg_proc.dat | 8 ++++++++
src/test/isolation/specs/deadlock-parallel.spec | 4 ++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 26c3fc0f6b..cf6ee3359c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -8467,6 +8467,14 @@
{ oid => '2892', descr => 'release all advisory locks',
proname => 'pg_advisory_unlock_all', provolatile => 'v', proparallel => 'r',
prorettype => 'void', proargtypes => '', prosrc => 'pg_advisory_unlock_all' },
+{ oid => '6123', descr => 'obtain shared advisory lock for testing purpose',
+ proname => 'pg_advisory_test_xact_lock_shared', provolatile => 'v',
+ prorettype => 'void', proargtypes => 'int8',
+ prosrc => 'pg_advisory_xact_lock_shared_int8' },
+{ oid => '6124', descr => 'obtain exclusive advisory lock for testing purpose',
+ proname => 'pg_advisory_test_xact_lock', provolatile => 'v',
+ prorettype => 'void', proargtypes => 'int8',
+ prosrc => 'pg_advisory_xact_lock_int8' },
# XML support
{ oid => '2893', descr => 'I/O',
diff --git a/src/test/isolation/specs/deadlock-parallel.spec b/src/test/isolation/specs/deadlock-parallel.spec
index 7ad290c0bd..7beaad46ee 100644
--- a/src/test/isolation/specs/deadlock-parallel.spec
+++ b/src/test/isolation/specs/deadlock-parallel.spec
@@ -37,10 +37,10 @@
setup
{
create function lock_share(int,int) returns int language sql as
- 'select pg_advisory_xact_lock_shared($1); select 1;' parallel safe;
+ 'select pg_advisory_test_xact_lock_shared($1); select 1;' parallel safe;
create function lock_excl(int,int) returns int language sql as
- 'select pg_advisory_xact_lock($1); select 1;' parallel safe;
+ 'select pg_advisory_test_xact_lock($1); select 1;' parallel safe;
create table bigt as select x from generate_series(1, 10000) x;
analyze bigt;
--
2.18.4
0001-check-UDF-parallel-safety-in-fmgr_info.patch
From 877e1f22943e4ecbe97ae71a359c142c3163ac86 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@cn.fujitsu.com>
Date: Tue, 11 May 2021 08:54:00 +0800
Subject: [PATCH 1/2] check UDF parallel safety in fmgr_info
---
src/backend/utils/fmgr/fmgr.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index 3dfe6e5..a20faf3 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -16,6 +16,8 @@
#include "postgres.h"
#include "access/detoast.h"
+#include "access/parallel.h"
+#include "access/xact.h"
#include "catalog/pg_language.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
@@ -56,6 +58,7 @@ static HTAB *CFuncHash = NULL;
static void fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
bool ignore_security);
+static void fmgr_check_parallel_safety(char parallel_safety, Oid functionId);
static void fmgr_info_C_lang(Oid functionId, FmgrInfo *finfo, HeapTuple procedureTuple);
static void fmgr_info_other_lang(Oid functionId, FmgrInfo *finfo, HeapTuple procedureTuple);
static CFuncHashTabEntry *lookup_C_func(HeapTuple procedureTuple);
@@ -183,6 +186,9 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
elog(ERROR, "cache lookup failed for function %u", functionId);
procedureStruct = (Form_pg_proc) GETSTRUCT(procedureTuple);
+ /* Check parallel safety for other functions */
+ fmgr_check_parallel_safety(procedureStruct->proparallel, functionId);
+
finfo->fn_nargs = procedureStruct->pronargs;
finfo->fn_strict = procedureStruct->proisstrict;
finfo->fn_retset = procedureStruct->proretset;
@@ -264,6 +270,20 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,
ReleaseSysCache(procedureTuple);
}
+static void
+fmgr_check_parallel_safety(char parallel_safety, Oid functionId)
+{
+ if (IsInParallelMode() &&
+ ((IsParallelWorker() &&
+ parallel_safety == PROPARALLEL_RESTRICTED) ||
+ parallel_safety == PROPARALLEL_UNSAFE))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+ errmsg("parallel-safety execution violation of function \"%s\" (%c)",
+ get_func_name(functionId), parallel_safety)));
+}
+
+
/*
* Return module and C function name providing implementation of functionId.
*
--
2.7.2.windows.1
0002-fix-UDF-parallel-safety-label-in-testcase.patch
From 7afb1b1d79f628b510b1b9b5d417320f0632ef2d Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Tue, 11 May 2021 09:41:51 +0800
Subject: [PATCH] fix UDF parallel safety label in testcase
---
src/backend/snowball/snowball_func.sql.in | 4 ++--
src/pl/plpgsql/src/plpgsql--1.0.sql | 2 +-
src/test/regress/expected/aggregates.out | 1 +
src/test/regress/expected/create_type.out | 4 ++--
src/test/regress/expected/domain.out | 2 +-
src/test/regress/expected/insert.out | 2 +-
src/test/regress/input/create_function_1.source | 4 ++--
src/test/regress/output/create_function_1.source | 4 ++--
src/test/regress/sql/aggregates.sql | 1 +
src/test/regress/sql/create_type.sql | 4 ++--
src/test/regress/sql/domain.sql | 2 +-
src/test/regress/sql/insert.sql | 2 +-
12 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/src/backend/snowball/snowball_func.sql.in b/src/backend/snowball/snowball_func.sql.in
index cb1eaca4fb..08bf3397e4 100644
--- a/src/backend/snowball/snowball_func.sql.in
+++ b/src/backend/snowball/snowball_func.sql.in
@@ -21,11 +21,11 @@ SET search_path = pg_catalog;
CREATE FUNCTION dsnowball_init(INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE FUNCTION dsnowball_lexize(INTERNAL, INTERNAL, INTERNAL, INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_lexize'
-LANGUAGE C STRICT;
+LANGUAGE C STRICT PARALLEL SAFE;
CREATE TEXT SEARCH TEMPLATE snowball
(INIT = dsnowball_init,
diff --git a/src/pl/plpgsql/src/plpgsql--1.0.sql b/src/pl/plpgsql/src/plpgsql--1.0.sql
index 6e5b990fcc..165a670aa8 100644
--- a/src/pl/plpgsql/src/plpgsql--1.0.sql
+++ b/src/pl/plpgsql/src/plpgsql--1.0.sql
@@ -1,7 +1,7 @@
/* src/pl/plpgsql/src/plpgsql--1.0.sql */
CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler
- LANGUAGE c AS 'MODULE_PATHNAME';
+ LANGUAGE c PARALLEL SAFE AS 'MODULE_PATHNAME';
CREATE FUNCTION plpgsql_inline_handler(internal) RETURNS void
STRICT LANGUAGE c AS 'MODULE_PATHNAME';
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index ca06d41dd0..2a4a83fab7 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -2386,6 +2386,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/expected/create_type.out b/src/test/regress/expected/create_type.out
index 14394cc95c..eb1c6bdcd2 100644
--- a/src/test/regress/expected/create_type.out
+++ b/src/test/regress/expected/create_type.out
@@ -48,7 +48,7 @@ NOTICE: return type int42 is only a shell
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type int42 is only a shell
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
@@ -58,7 +58,7 @@ NOTICE: return type text_w_default is only a shell
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type text_w_default is only a shell
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..c82d189823 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -1067,7 +1067,7 @@ drop domain di;
-- this has caused issues in the past
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
check (sql_is_distinct_from(value, null));
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..7e7ef24098 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -415,7 +415,7 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p
create or replace function part_hashint4_noop(value int4, seed int8)
returns int8 as $$
select value + seed;
-$$ language sql immutable;
+$$ language sql immutable parallel safe;
create operator class part_test_int4_ops
for type int4
using hash as
diff --git a/src/test/regress/input/create_function_1.source b/src/test/regress/input/create_function_1.source
index 6c69b7fe6c..b9a4e7af38 100644
--- a/src/test/regress/input/create_function_1.source
+++ b/src/test/regress/input/create_function_1.source
@@ -10,7 +10,7 @@ CREATE FUNCTION widget_in(cstring)
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -20,7 +20,7 @@ CREATE FUNCTION int44in(cstring)
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/output/create_function_1.source b/src/test/regress/output/create_function_1.source
index c66146db9d..0c1390e8c5 100644
--- a/src/test/regress/output/create_function_1.source
+++ b/src/test/regress/output/create_function_1.source
@@ -10,7 +10,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION widget_out(widget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type widget is only a shell
CREATE FUNCTION int44in(cstring)
RETURNS city_budget
@@ -21,7 +21,7 @@ DETAIL: Creating a shell type definition.
CREATE FUNCTION int44out(city_budget)
RETURNS cstring
AS '@libdir@/regress@DLSUFFIX@'
- LANGUAGE C STRICT IMMUTABLE;
+ LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
NOTICE: argument type city_budget is only a shell
CREATE FUNCTION check_primary_key ()
RETURNS trigger
diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
index eb80a2fe06..68990b5b5f 100644
--- a/src/test/regress/sql/aggregates.sql
+++ b/src/test/regress/sql/aggregates.sql
@@ -978,6 +978,7 @@ rollback;
BEGIN;
CREATE FUNCTION balkifnull(int8, int4)
RETURNS int8
+PARALLEL SAFE
STRICT
LANGUAGE plpgsql AS $$
BEGIN
diff --git a/src/test/regress/sql/create_type.sql b/src/test/regress/sql/create_type.sql
index a32a9e6795..285707e532 100644
--- a/src/test/regress/sql/create_type.sql
+++ b/src/test/regress/sql/create_type.sql
@@ -51,7 +51,7 @@ CREATE FUNCTION int42_in(cstring)
CREATE FUNCTION int42_out(int42)
RETURNS cstring
AS 'int4out'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE FUNCTION text_w_default_in(cstring)
RETURNS text_w_default
AS 'textin'
@@ -59,7 +59,7 @@ CREATE FUNCTION text_w_default_in(cstring)
CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring
AS 'textout'
- LANGUAGE internal STRICT IMMUTABLE;
+ LANGUAGE internal STRICT IMMUTABLE PARALLEL SAFE;
CREATE TYPE int42 (
internallength = 4,
diff --git a/src/test/regress/sql/domain.sql b/src/test/regress/sql/domain.sql
index 549c0b5adf..a022ae4223 100644
--- a/src/test/regress/sql/domain.sql
+++ b/src/test/regress/sql/domain.sql
@@ -724,7 +724,7 @@ drop domain di;
--
create function sql_is_distinct_from(anyelement, anyelement)
-returns boolean language sql
+returns boolean language sql parallel safe
as 'select $1 is distinct from $2 limit 1';
create domain inotnull int
diff --git a/src/test/regress/sql/insert.sql b/src/test/regress/sql/insert.sql
index bfaa8a3b27..895524bc88 100644
--- a/src/test/regress/sql/insert.sql
+++ b/src/test/regress/sql/insert.sql
@@ -258,7 +258,7 @@ select tableoid::regclass::text, a, min(b) as min_b, max(b) as max_b from list_p
create or replace function part_hashint4_noop(value int4, seed int8)
returns int8 as $$
select value + seed;
-$$ language sql immutable;
+$$ language sql immutable parallel safe;
create operator class part_test_int4_ops
for type int4
--
2.18.4
On Tue, May 11, 2021 at 12:28 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
Temporarily, just in case someone wants to take a look at the patch for the safety check.
I am not sure there is yet a consensus on which cases exactly need
to be dealt with. Let me try to summarize the discussion and see if
that helps. As per my understanding, the main reasons for this work
are:
a. Ensure parallel-unsafe functions don't get executed in parallel
mode. We do have checks to ensure that we don't select parallel-mode
for most cases where the parallel-unsafe function is used but we don't
have checks for input/output funcs, aggregate funcs, etc. This
proposal is to detect such cases during function invocation and return
an error. I think if, for some cases like aggregates or other types of
functions, we allow selecting parallelism relying on the user, it is
not a bad idea to detect and return an error if some parallel-unsafe
function is executed in parallel mode.
b. Detect wrong parallel-safety markings. Say the user has declared
some function as parallel-safe but it invokes another parallel-unsafe
function.
c. The other motive is that this work can help us to enable
parallelism for inserts (and maybe update/delete in the future). As
being discussed in another thread [1], we are considering allowing
parallel inserts on a table based on user input and then at runtime
detect if the insert is invoking any parallel-unsafe expression. The
idea is that the user will be able to specify whether a write
operation is allowed in parallel on a specified relation and we allow
to select parallelism for such writes based on that and do the checks
for Select as we are doing now. There are other options, like determining
the parallel-safety of writes in the planner and only then allowing
parallelism, but those all seem costly. Now, I think it is not
compulsory to have such checks for this particular reason as we are
relying on user input but it will be good if we have it.
I think the last purpose (c) is still debatable even though we
couldn't come up with anything better till now, but even if we leave that
aside for now, I think the other reasons are good enough to have some
form of checks.
Now, the proposal being discussed is to add a parallel-safety check in
fmgr_info which seems to be invoked during all function executions. We
need to have access to proparallel attribute of the function to check
the parallel-safety and that is readily available in fmgr_info for
non-built-in functions because we already get the pg_proc information
from sys cache. So, I guess there is no harm in checking it when the
information is readily available. However, for built-in functions that
information is not readily available as we get required function
information from FmgrBuiltin (which doesn't have parallel-safety
information). For built-in functions, the following options have been
discussed:
a. Extend FmgrBuiltin without increasing its size to include parallel
information.
b. Query the pg_proc cache to get the information. Accessing this for
each invocation of a builtin could be costly. We can probably incur this
cost only when built-in is invoked in parallel-mode.
c. Don't add checks for builtins.
I think if we can't think of any other better way to have checks for
builtins and don't like any of (a) or (b) then there is no harm in
(c). This will at least allow us to have parallel-safety check for
user-defined functions.
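As background for option (b), the marking in question is the proparallel column of pg_proc ('s' = safe, 'r' = restricted, 'u' = unsafe), which can be inspected directly, for example:

```sql
-- Check the recorded parallel-safety markings of the functions that
-- tripped the experimental check (names from the regression tests):
SELECT proname, proparallel
FROM pg_proc
WHERE proname IN ('dsnowball_init', 'int44out', 'text_w_default_out');
```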
Thoughts?
[1]: /messages/by-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com
--
With Regards,
Amit Kapila.
On Fri, Jun 4, 2021 at 6:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Thoughts?
As far as I can see, trying to error out at function call time if the
function is parallel-unsafe doesn't fix any problem we have, and just
makes the design of this part of the system less consistent with what
we've done elsewhere. For example, if you create a stable function
that internally calls a volatile function, you don't get an error. You
can use your stable function in an index definition if you wish. That
may break, but if so, that's your problem. Also, when it breaks, it
probably won't blow up the entire world; you'll just have a messed-up
index. Currently, the parallel-safety stuff works the same way. If we
notice that something is marked parallel-unsafe, we'll skip
parallelism. But you can lie to us and claim that things are safe when
they're not, and if you do, it may break, but that's your problem.
Most likely your query will just error out, and there will be no
worse consequences than that, though if your parallel-unsafe function
is written in C, it could do horrible things like crash, which is
unavoidable because C code can do anything.
Now, the reason for all of this work, as I understand it, is because
we want to enable parallel inserts, and the problem there is that a
parallel insert could involve a lot of different things: it might need
to compute expressions, or fire triggers, or check constraints, and
any of those things could be parallel-unsafe. If we enable parallelism
and then find out that we need to do one of those things, we have a
problem. Something probably will error out. The thing is, with this
proposal, that issue is not solved. Something will definitely error
out. You'll probably get the error in a different place, but nobody
fires off an INSERT hoping to get one error message rather than
another. What they want is for it to work. So I'm kind of confused how
we ended up going in this direction which seems to me at least to be a
tangent from the real issue, and somewhat at odds with the way the
rest of PostgreSQL is designed.
It seems to me that we could simply add a flag to each relation saying
whether or not we think that INSERT operations - or perhaps DML
operations generally - are believed to be parallel-safe for that
relation. Like the marking on functions, it would be the user's
responsibility to get that marking correct. If they don't, they might
call a parallel-unsafe function in parallel mode, and that will
probably error out. But that's no worse than what we already have in
existing cases, so I don't see why it requires doing what's proposed
here first. Now, it does have the disadvantage of being not very
convenient for users, who, I'm sure, would prefer that the system
figure out for them automatically whether or not parallel inserts are
likely to be safe, rather than making them declare it, especially
since presumably the default declaration would have to be "unsafe," as
it is for functions. But I don't have a better idea right now.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Mon, Jun 7, 2021 at 7:29 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, Jun 4, 2021 at 6:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Thoughts?
As far as I can see, trying to error out at function call time if the
function is parallel-unsafe doesn't fix any problem we have, and just
makes the design of this part of the system less consistent with what
we've done elsewhere. For example, if you create a stable function
that internally calls a volatile function, you don't get an error. You
can use your stable function in an index definition if you wish. That
may break, but if so, that's your problem. Also, when it breaks, it
probably won't blow up the entire world; you'll just have a messed-up
index. Currently, the parallel-safety stuff works the same way. If we
notice that something is marked parallel-unsafe, we'll skip
parallelism.
This is not true in all cases, which is one of the reasons for this
thread. For example, we don't skip parallelism when I/O functions are
parallel-unsafe as is shown in the following case:
postgres=# CREATE FUNCTION text_w_default_in(cstring) RETURNS
text_w_default AS 'textin' LANGUAGE internal STRICT IMMUTABLE;
NOTICE: type "text_w_default" is not yet defined
DETAIL: Creating a shell type definition.
CREATE FUNCTION
postgres=# CREATE FUNCTION text_w_default_out(text_w_default)
RETURNS cstring AS 'textout' LANGUAGE internal STRICT IMMUTABLE;
NOTICE: argument type text_w_default is only a shell
CREATE FUNCTION
postgres=# CREATE TYPE text_w_default ( internallength = variable,
input = text_w_default_in, output = text_w_default_out, alignment
= int4, default = 'zippo');
CREATE TYPE
postgres=# CREATE TABLE default_test (f1 text_w_default, f2 int);
CREATE TABLE
postgres=# INSERT INTO default_test DEFAULT VALUES;
INSERT 0 1
postgres=# SELECT * FROM default_test;
ERROR: parallel-safety execution violation of function "text_w_default_out" (u)
Note the error is raised after applying the patch, without the patch,
the above won't show any error (error message could be improved here).
Such cases can lead to unpredictable behavior without a patch because
we won't be able to detect the execution of parallel-unsafe functions.
There are similar examples from regression tests. Now, one way to deal
with similar cases could be that we document them and say we don't
consider parallel-safety in such cases and the other way is to detect
such cases and error out. Yet another way could be that we somehow try
to check these cases as well before enabling parallelism but I thought
these cases fall in a similar category to aggregates' support
functions.
But you can lie to us and claim that things are safe when
they're not, and if you do, it may break, but that's your problem.
Most likely your query will just error out, and there will be no
worse consequences than that, though if your parallel-unsafe function
is written in C, it could do horrible things like crash, which is
unavoidable because C code can do anything.
That is true but I was worried about cases where users didn't lie to us
but we still allowed those to choose parallelism.
Now, the reason for all of this work, as I understand it, is because
we want to enable parallel inserts, and the problem there is that a
parallel insert could involve a lot of different things: it might need
to compute expressions, or fire triggers, or check constraints, and
any of those things could be parallel-unsafe. If we enable parallelism
and then find out that we need to do one of those things, we have a
problem. Something probably will error out. The thing is, with this
proposal, that issue is not solved. Something will definitely error
out. You'll probably get the error in a different place, but nobody
fires off an INSERT hoping to get one error message rather than
another. What they want is for it to work. So I'm kind of confused how
we ended up going in this direction which seems to me at least to be a
tangent from the real issue, and somewhat at odds with the way the
rest of PostgreSQL is designed.
It seems to me that we could simply add a flag to each relation saying
whether or not we think that INSERT operations - or perhaps DML
operations generally - are believed to be parallel-safe for that
relation.
This is exactly the direction we are trying to pursue. The proposal [1] is:
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparallel column as 'u',
'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.
This might require some bike-shedding to decide how exactly we want to
expose it to the user but I think it is on the lines of what you have
described here.
Like the marking on functions, it would be the user's
responsibility to get that marking correct. If they don't, they might
call a parallel-unsafe function in parallel mode, and that will
probably error out. But that's no worse than what we already have in
existing cases, so I don't see why it requires doing what's proposed
here first.
I agree it is not necessarily required if we give the responsibility
to the user but this might give a better user experience, OTOH,
without this as well, as you said it won't be any worse than current
behavior. But that was not the sole motivation of this proposal as
explained above in the email by giving example.
Now, it does have the disadvantage of being not very
convenient for users, who, I'm sure, would prefer that the system
figure out for them automatically whether or not parallel inserts are
likely to be safe, rather than making them declare it, especially
since presumably the default declaration would have to be "unsafe," as
it is for functions.
To improve the user experience in this regard, the proposal [1]
provides a function pg_get_parallel_safety(oid) using which users can
determine whether it is safe to enable parallelism. Surely, after the
user has checked with that function, one can add some unsafe
constraints to the table by altering the table but it will still be an
aid to enable parallelism on a relation.
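For illustration, usage might look like the following sketch; the function exists only in the proposal [1], not in core, and its exact signature and result format may differ:

```sql
-- Hypothetical: check whether parallel DML appears safe for a relation
-- before marking it, per the proposal in [1]:
SELECT pg_get_parallel_safety('my_table'::regclass);
```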
[1]: /messages/by-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com
--
With Regards,
Amit Kapila.
On Mon, Jun 7, 2021 at 11:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Note the error is raised after applying the patch, without the patch,
the above won't show any error (error message could be improved here).
Such cases can lead to unpredictable behavior without a patch because
we won't be able to detect the execution of parallel-unsafe functions.
There are similar examples from regression tests. Now, one way to deal
with similar cases could be that we document them and say we don't
consider parallel-safety in such cases and the other way is to detect
such cases and error out. Yet another way could be that we somehow try
to check these cases as well before enabling parallelism but I thought
these cases fall in a similar category to aggregates' support
functions.
I'm not very excited about the idea of checking type input and type
output functions. It's hard to imagine someone wanting to do something
parallel-unsafe in such a function, unless they're just trying to
prove a point. So I don't think checking it would be a good investment
of CPU cycles. If we do anything at all, I'd vote for just documenting
that such functions should be parallel-safe and that their
parallel-safety marks are not checked when they are used as type
input/output functions. Perhaps we ought to document the same thing
with regard to opclass support functions, another place where it's
hard to imagine a realistic use case for doing something
parallel-unsafe.
In the case of aggregates, I see the issues slightly differently. I
don't know that it's super-likely that someone would want to create a
parallel-unsafe aggregate function, but I think there should be a way
to do it, just in case. However, if somebody wants that, they can just
mark the aggregate itself unsafe. There's no benefit for the user to
marking the aggregate safe and the support functions unsafe and hoping
that the system figures it out somehow.
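For instance, with the existing CREATE AGGREGATE syntax the user can already mark the aggregate itself, which is the only marking the planner consults (the aggregate name here is made up for illustration):

```sql
CREATE AGGREGATE my_sum (int4) (
    SFUNC = int4pl,     -- built-in int4 addition as the transition function
    STYPE = int4,
    PARALLEL = UNSAFE   -- the aggregate's own marking; UNSAFE is the default
);
```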
In my opinion, you're basically taking too pure a view of this. We're
not trying to create a system that does such a good job checking
parallel safety markings that nobody can possibly find a thing that
isn't checked no matter how hard they poke around the dark corners of
the system. Or at least we shouldn't be trying to do that. We should
be trying to create a system that works well in practice, and gives
people the flexibility to easily avoid parallelism when they have a
query that is parallel-unsafe, while still getting the benefit of
parallelism the rest of the time.
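For example, parallelism can already be switched off for a single problematic query with an existing GUC, no new checks required:

```sql
SET max_parallel_workers_per_gather = 0;  -- disable parallel plans
-- ... run the parallel-unsafe query here ...
RESET max_parallel_workers_per_gather;
```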
I don't know what all the cases you've uncovered are, and maybe
there's something in there that I'd be more excited about changing if
I knew what it was, but the particular problems you're mentioning here
seem more theoretical than real to me.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Tuesday, June 8, 2021 10:51 PM Robert Haas <robertmhaas@gmail.com>
On Mon, Jun 7, 2021 at 11:33 PM Amit Kapila <amit.kapila16@gmail.com>
wrote:
Note the error is raised after applying the patch, without the patch,
the above won't show any error (error message could be improved here).
Such cases can lead to unpredictable behavior without a patch because
we won't be able to detect the execution of parallel-unsafe functions.
There are similar examples from regression tests. Now, one way to deal
with similar cases could be that we document them and say we don't
consider parallel-safety in such cases and the other way is to detect
such cases and error out. Yet another way could be that we somehow try
to check these cases as well before enabling parallelism but I thought
these cases fall in a similar category to aggregates' support
functions.
I'm not very excited about the idea of checking type input and type output
functions. It's hard to imagine someone wanting to do something
parallel-unsafe in such a function, unless they're just trying to prove a point. So
I don't think checking it would be a good investment of CPU cycles. If we do
anything at all, I'd vote for just documenting that such functions should be
parallel-safe and that their parallel-safety marks are not checked when they are
used as type input/output functions. Perhaps we ought to document the same
thing with regard to opclass support functions, another place where it's hard to
imagine a realistic use case for doing something parallel-unsafe.
In the case of aggregates, I see the issues slightly differently. I don't know that
it's super-likely that someone would want to create a parallel-unsafe
aggregate function, but I think there should be a way to do it, just in case.
However, if somebody wants that, they can just mark the aggregate itself
unsafe. There's no benefit for the user to marking the aggregate safe and the
support functions unsafe and hoping that the system figures it out somehow.
In my opinion, you're basically taking too pure a view of this. We're not trying to
create a system that does such a good job checking parallel safety markings
that nobody can possibly find a thing that isn't checked no matter how hard
they poke around the dark corners of the system. Or at least we shouldn't be
trying to do that. We should be trying to create a system that works well in
practice, and gives people the flexibility to easily avoid parallelism when they
have a query that is parallel-unsafe, while still getting the benefit of parallelism
the rest of the time.
I don't know what all the cases you've uncovered are, and maybe there's
something in there that I'd be more excited about changing if I knew what it
was, but the particular problems you're mentioning here seem more
theoretical than real to me.
I think another case where a parallel-unsafe function could be invoked in parallel mode is
the TEXT SEARCH TEMPLATE's init_function or lexize_function, because currently
the planner does not check the safety of these functions. Please see the example below [1].
I am not sure whether users will use a parallel-unsafe function as an init_function or lexize_function,
but if they do, it could cause unexpected results.
Does it make sense to add some checks for init_function and lexize_function,
or to document this together with type input/output and opclass support functions?
[1]: ---------------------------EXAMPLE------------------------------------
CREATE FUNCTION dsnowball_init(INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_init'
LANGUAGE C STRICT;
CREATE FUNCTION dsnowball_lexize(INTERNAL, INTERNAL, INTERNAL, INTERNAL)
RETURNS INTERNAL AS '$libdir/dict_snowball', 'dsnowball_lexize'
LANGUAGE C STRICT;
CREATE TEXT SEARCH TEMPLATE snowball
(INIT = dsnowball_init,
LEXIZE = dsnowball_lexize);
COMMENT ON TEXT SEARCH TEMPLATE snowball IS 'snowball stemmer';
create table pendtest (ts tsvector);
create index pendtest_idx on pendtest using gin(ts);
insert into pendtest select (to_tsvector('Lore ipsum')) from generate_series(1,10000000,1);
analyze;
set enable_bitmapscan = off;
postgres=# explain select * from pendtest where to_tsquery('345&qwerty') @@ ts;
QUERY PLAN
--------------------------------------------------------------------------------
Gather (cost=1000.00..1168292.86 rows=250 width=31)
Workers Planned: 2
-> Parallel Seq Scan on pendtest (cost=0.00..1167267.86 rows=104 width=31)
Filter: (to_tsquery('345&qwerty'::text) @@ ts)
-- In the example above, dsnowball_init() and dsnowball_lexize() will be executed in parallel mode.
----------------------------EXAMPLE------------------------------------
Best regards,
houzj
"houzj.fnst@fujitsu.com" <houzj.fnst@fujitsu.com> writes:
On Tuesday, June 8, 2021 10:51 PM Robert Haas <robertmhaas@gmail.com>
wrote:
In my opinion, you're basically taking too pure a view of this. We're
not trying to create a system that does such a good job checking
parallel safety markings that nobody can possibly find a thing that
isn't checked no matter how hard they poke around the dark corners of
the system. Or at least we shouldn't be trying to do that.
I think another case where a parallel-unsafe function could be invoked in
parallel mode is the TEXT SEARCH TEMPLATE's init_function or
lexize_function.
Another point worth making in this connection is what I cited earlier
today in ba2c6d6ce:
: ... We could imagine prohibiting SCROLL when
: the query contains volatile functions, but that would be
: expensive to enforce. Moreover, it could break applications
: that work just fine, if they have functions that are in fact
: stable but the user neglected to mark them so. So settle for
: documenting the hazard.
If you break an application that used to work, because the
developer was careless about marking a function PARALLEL SAFE
even though it actually is, I do not think you have made any
friends or improved anyone's life. In fact, you could easily
make things far worse, by encouraging people to mark things
PARALLEL SAFE that are not. (We just had a thread about somebody
marking a function immutable because they wanted effect X of that,
and then whining because they also got effect Y.)
There are specific cases where there's a good reason to worry.
For example, if we assume blindly that domain_in() is parallel
safe, we will have cause to regret that. But I don't find that
to be a reason why we need to lock down everything everywhere.
We need to understand the tradeoffs involved in what we check,
and apply checks that are likely to avoid problems, while not
being too nanny-ish.
regards, tom lane
On Wed, Jun 9, 2021 at 2:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
There are specific cases where there's a good reason to worry.
For example, if we assume blindly that domain_in() is parallel
safe, we will have cause to regret that. But I don't find that
to be a reason why we need to lock down everything everywhere.
We need to understand the tradeoffs involved in what we check,
and apply checks that are likely to avoid problems, while not
being too nanny-ish.
Yeah, that's exactly how I feel about it, too.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, Jun 9, 2021 at 9:47 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Jun 9, 2021 at 2:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
There are specific cases where there's a good reason to worry.
For example, if we assume blindly that domain_in() is parallel
safe, we will have cause to regret that. But I don't find that
to be a reason why we need to lock down everything everywhere.
We need to understand the tradeoffs involved in what we check,
and apply checks that are likely to avoid problems, while not
being too nanny-ish.
Yeah, that's exactly how I feel about it, too.
Fair enough. So, I think there is a consensus to drop this patch and
if one wants then we can document these cases. Also, we don't need it
to enable parallelism for Inserts, where we are trying to pursue the
approach of having a flag in pg_class which allows users to specify
whether writes are allowed in parallel on a specified relation.
--
With Regards,
Amit Kapila.
On Thu, Jun 10, 2021 at 12:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Fair enough. So, I think there is a consensus to drop this patch and
if one wants then we can document these cases. Also, we don't need it
to enable parallelism for Inserts, where we are trying to pursue the
approach of having a flag in pg_class which allows users to specify
whether writes are allowed in parallel on a specified relation.
+1. The question that's still on my mind a little bit is whether
there's a reasonable alternative to forcing users to set a flag
manually. It seems less convenient than having to do the same thing
for a function, because most users probably only create functions
occasionally, but creating tables seems like it's likely to be a more
common operation. Plus, a function is basically a program, so it sort
of feels reasonable that you might need to give the system some hints
about what the program does, but that doesn't apply to a table.
Now, if we forget about partitioned tables here for a moment, I don't
really see why we couldn't do this computation based on the relcache
entry, and then just cache the flag there? I think anything that would
change the state for a plain old table would also cause some
invalidation that we could notice. And I don't think that the cost of
walking over triggers, constraints, etc. and computing the value we
need on demand would be exorbitant.
For a partitioned table, things are a lot more difficult. For one
thing, the cost of computation can be a lot higher; there might be a
thousand or more partitions. For another thing, computing the value
could have scary side effects, like opening all the partitions, which
would also mean taking locks on them and building expensive relcache
entries. For a third thing, we'd have no way of knowing whether the
value was still current, because an event that produces an
invalidation for a partition doesn't necessarily produce any
invalidation for the partitioned table.
So one idea is maybe we only need an explicit flag for partitioned
tables, and regular tables we can just work it out automatically.
Another idea is maybe we try to solve the problems somehow so that it
can also work with partitioned tables. I don't really have a great
idea right at the moment, but maybe it's worth devoting some more
thought to the problem.
--
Robert Haas
EDB: http://www.enterprisedb.com
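[Editor's note: the relcache-caching idea above can be sketched as a toy model. This is illustrative only; none of these types or functions exist in PostgreSQL, and the boolean fields stand in for the real walk over triggers, constraints, etc.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model, not PostgreSQL code: a relcache-like entry
 * caches a derived parallel-safety flag, recomputed on demand after
 * an invalidation clears it. */
typedef enum { SAFETY_UNKNOWN, SAFETY_SAFE, SAFETY_UNSAFE } RelSafety;

typedef struct {
    bool has_unsafe_trigger;     /* stand-in for walking the triggers */
    bool has_unsafe_constraint;  /* stand-in for walking constraints */
    RelSafety cached;            /* cached derived property */
} RelCacheEntry;

/* Walk the dependent objects and compute the flag. */
static RelSafety compute_safety(const RelCacheEntry *rel)
{
    if (rel->has_unsafe_trigger || rel->has_unsafe_constraint)
        return SAFETY_UNSAFE;
    return SAFETY_SAFE;
}

/* On-demand lookup: recompute only when the cache was invalidated. */
RelSafety rel_parallel_safety(RelCacheEntry *rel)
{
    if (rel->cached == SAFETY_UNKNOWN)
        rel->cached = compute_safety(rel);
    return rel->cached;
}

/* Invalidation: any change to the relation clears the cached flag. */
void rel_cache_invalidate(RelCacheEntry *rel)
{
    rel->cached = SAFETY_UNKNOWN;
}
```

The model also shows the hazard discussed next in the thread: if a change to a dependent object does not trigger an invalidation, the cached value goes stale silently.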
On Thu, Jun 10, 2021 at 10:59 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jun 10, 2021 at 12:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Fair enough. So, I think there is a consensus to drop this patch and
if one wants then we can document these cases. Also, we don't want it
to enable parallelism for Inserts where we are trying to pursue the
approach to have a flag in pg_class which allows users to specify
whether writes are allowed on a specified relation.
+1. The question that's still on my mind a little bit is whether
there's a reasonable alternative to forcing users to set a flag
manually. It seems less convenient than having to do the same thing
for a function, because most users probably only create functions
occasionally, but creating tables seems like it's likely to be a more
common operation. Plus, a function is basically a program, so it sort
of feels reasonable that you might need to give the system some hints
about what the program does, but that doesn't apply to a table.
Now, if we forget about partitioned tables here for a moment, I don't
really see why we couldn't do this computation based on the relcache
entry, and then just cache the flag there?
Do we invalidate relcache entry if someone changes say trigger or some
index AM function property via Alter Function (in our case from safe
to unsafe or vice-versa)? Tsunakawa-San has mentioned this as the
reason in his email [1] why we can't rely on caching this property in
relcache entry. I also don't see anything in AlterFunction which would
suggest that we invalidate the relation with which the function might
be associated via trigger.
The other idea in this regard was to validate the parallel safety
during DDL instead of relying completely on the user but that also
seems to have similar hazards, as pointed out by Tom in his email [2].
I think it would be good if there is a way we can do this without
asking for user input but if not then we can try to provide
parallel-safety info about relation which will slightly ease the
user's job. Such a function would check relation (and its partitions)
to see if there exists any parallel-unsafe clause and accordingly
return the same to the user. Now, again if the user changes the
parallel-safe property later we won't be able to automatically reflect
the same for rel.
[1]: /messages/by-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com
[2]: /messages/by-id/1030301.1616560249@sss.pgh.pa.us
--
With Regards,
Amit Kapila.
On Fri, Jun 11, 2021 at 12:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Do we invalidate relcache entry if someone changes say trigger or some
index AM function property via Alter Function (in our case from safe
to unsafe or vice-versa)? Tsunakawa-San has mentioned this as the
reason in his email [1] why we can't rely on caching this property in
relcache entry. I also don't see anything in AlterFunction which would
suggest that we invalidate the relation with which the function might
be associated via trigger.
Hmm. I am not sure that index AM functions really need to be checked,
but triggers certainly do. I think you are correct that an ALTER
FUNCTION wouldn't invalidate the relcache entry, which is, I guess,
pretty much the same problem Tom was pointing out in the thread to
which you linked.
But ... thinking out of the box as Tom suggests, what if we came up
with some new kind of invalidation message that is only sent when a
function's parallel-safety marking is changed? And every backend in
the same database then needs to re-evaluate the parallel-safety of
every relation for which it has cached a value. Such recomputations
might be expensive, but they would probably also occur very
infrequently. And you might even be able to make it a bit more
fine-grained if it's worth the effort to worry about that: say that in
addition to caching the parallel-safety of the relation, we also cache
a list of the pg_proc OIDs upon which that determination depends. Then
when we hear that the flag's been changed for OID 123456, we only need
to invalidate the cached value for relations that depended on that
pg_proc entry. There are ways that a relation could become
parallel-unsafe without changing the parallel-safety marking of any
function, but perhaps all of the other ways involve a relcache
invalidation?
Just brainstorming here. I might be off-track.
--
Robert Haas
EDB: http://www.enterprisedb.com
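[Editor's note: the fine-grained variant above, where each relation caches the pg_proc OIDs its determination depends on, can be sketched like this. All names are hypothetical, and a fixed-size array stands in for a real dependency list.]

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;  /* stand-in for PostgreSQL's Oid */

#define MAX_DEPS 8

/* Hypothetical model: alongside the cached safety flag, each
 * relation remembers the pg_proc OIDs the determination depends on. */
typedef struct {
    bool safety_valid;         /* is the cached flag still trusted? */
    bool parallel_safe;        /* the cached flag itself */
    Oid  dep_procs[MAX_DEPS];  /* functions this determination used */
    int  ndeps;
} RelEntry;

/* "This particular function's parallel-safety flag has changed":
 * reset only the relations that actually depend on that OID. */
void proc_safety_changed(RelEntry *rels, int nrels, Oid procoid)
{
    for (int i = 0; i < nrels; i++)
        for (int j = 0; j < rels[i].ndeps; j++)
            if (rels[i].dep_procs[j] == procoid)
            {
                rels[i].safety_valid = false;
                break;
            }
}
```

As the thread notes, this still requires scanning every cached entry on each such message; it only avoids recomputing the flag for unaffected relations.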
On Sat, Jun 12, 2021 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, Jun 11, 2021 at 12:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Do we invalidate relcache entry if someone changes say trigger or some
index AM function property via Alter Function (in our case from safe
to unsafe or vice-versa)? Tsunakawa-San has mentioned this as the
reason in his email [1] why we can't rely on caching this property in
relcache entry. I also don't see anything in AlterFunction which would
suggest that we invalidate the relation with which the function might
be associated via trigger.
Hmm. I am not sure that index AM functions really need to be checked,
but triggers certainly do.
Why do you think we don't need to check index AM functions? Say we
have an index expression that uses a function; if its parallel safety
is changed, then that probably also impacts whether we can do an insert
in parallel. Because otherwise, we will end up executing some
parallel-unsafe function in parallel mode during index insertion.
I think you are correct that an ALTER
FUNCTION wouldn't invalidate the relcache entry, which is, I guess,
pretty much the same problem Tom was pointing out in the thread to
which you linked.
But ... thinking out of the box as Tom suggests, what if we came up
with some new kind of invalidation message that is only sent when a
function's parallel-safety marking is changed? And every backend in
the same database then needs to re-evaluate the parallel-safety of
every relation for which it has cached a value. Such recomputations
might be expensive, but they would probably also occur very
infrequently. And you might even be able to make it a bit more
fine-grained if it's worth the effort to worry about that: say that in
addition to caching the parallel-safety of the relation, we also cache
a list of the pg_proc OIDs upon which that determination depends. Then
when we hear that the flag's been changed for OID 123456, we only need
to invalidate the cached value for relations that depended on that
pg_proc entry.
Yeah, this could be one idea but I think even if we use pg_proc OID,
we still need to check all the rel cache entries to find which one
contains the invalidated OID and that could be expensive. I wonder
can't we directly find the relation involved and register invalidation
for the same? We are able to find the relation to which trigger
function is associated during drop function via findDependentObjects
by scanning pg_depend. Assuming, we are able to find the relation for
trigger function by scanning pg_depend, what kinds of problems do we
envision in registering the invalidation for the same?
I think we probably need to worry about the additional cost to find
dependent objects and if there are any race conditions in doing so as
pointed out by Tom in his email [1]. The concern related to cost could
be addressed by your idea of registering such an invalidation only
when the user changes the parallel safety of the function which we
don't expect to be a frequent operation. Now, here the race condition,
I could think of could be that by the time we change parallel-safety
(say making it unsafe) of a function, some of the other sessions might
have already started processing insert on a relation where that
function is associated via trigger or check constraint in which case
there could be a problem. I think to avoid that we need to acquire an
Exclusive lock on the relation as we are doing in Rename Policy kind
of operations.
There are ways that a relation could become
parallel-unsafe without changing the parallel-safety marking of any
function, but perhaps all of the other ways involve a relcache
invalidation?
Probably, but I guess we can once investigate/test those cases as well
if we find/agree on the solution for the functions stuff.
[1]: /messages/by-id/1030301.1616560249@sss.pgh.pa.us
--
With Regards,
Amit Kapila.
Amit Kapila <amit.kapila16@gmail.com> writes:
Why do you think we don't need to check index AM functions?
Primarily because index AMs and opclasses can only be defined by
superusers, and the superuser is assumed to know what she's doing.
More generally, we've never made any provisions for the properties
of index AMs or opclasses to change on-the-fly. I doubt that doing
so could possibly be justified on a cost-benefit basis.
regards, tom lane
On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Why do you think we don't need to check index AM functions? Say we
have an index expression that uses function and if its parallel safety
is changed then probably that also impacts whether we can do insert in
parallel. Because otherwise, we will end up executing some parallel
unsafe function in parallel mode during index insertion.
I'm not saying that we don't need to check index expressions. I agree
that we need to check those. The index AM functions are things like
btint4cmp(). I don't think that a function like that should ever be
parallel-unsafe.
Yeah, this could be one idea but I think even if we use pg_proc OID,
we still need to check all the rel cache entries to find which one
contains the invalidated OID and that could be expensive. I wonder
can't we directly find the relation involved and register invalidation
for the same? We are able to find the relation to which trigger
function is associated during drop function via findDependentObjects
by scanning pg_depend. Assuming, we are able to find the relation for
trigger function by scanning pg_depend, what kinds of problems do we
envision in registering the invalidation for the same?
I don't think that finding the relation involved and registering an
invalidation for the same will work properly. Suppose there is a
concurrently-running transaction which has created a new table and
attached a trigger function to it. You can't see any of the catalog
entries for that relation yet, so you can't invalidate it, but
invalidation needs to happen. Even if you used some snapshot that can
see those catalog entries before they are committed, I doubt it fixes
the race condition. You can't hold any lock on that relation, because
the creating transaction holds AccessExclusiveLock, but the whole
invalidation mechanism is built around the assumption that the sender
puts messages into the shared queue first and then releases locks,
while the receiver first acquires a conflicting lock and then
processes messages from the queue. Without locks, that synchronization
algorithm can't work reliably. As a consequence of all that, I believe
that, not just in this particular case but in general, the
invalidation message needs to describe the thing that has actually
changed, rather than any derived property. We can make invalidations
that say "some function's parallel-safety flag has changed" or "this
particular function's parallel-safety flag has changed" or "this
particular function has changed in some way" (this one, we have
already), but anything that involves trying to figure out what the
consequences of such a change might be and saying "hey, you, please
update XYZ because I changed something somewhere that could affect
that" is not going to be correct.
I think we probably need to worry about the additional cost to find
dependent objects and if there are any race conditions in doing so as
pointed out by Tom in his email [1]. The concern related to cost could
be addressed by your idea of registering such an invalidation only
when the user changes the parallel safety of the function which we
don't expect to be a frequent operation. Now, here the race condition,
I could think of could be that by the time we change parallel-safety
(say making it unsafe) of a function, some of the other sessions might
have already started processing insert on a relation where that
function is associated via trigger or check constraint in which case
there could be a problem. I think to avoid that we need to acquire an
Exclusive lock on the relation as we are doing in Rename Policy kind
of operations.
Well, the big issue here is that we don't actually lock functions
while they are in use. So there's absolutely nothing that prevents a
function from being altered in any arbitrary way, or even dropped,
while code that uses it is running. I don't really know what happens
in practice if you do that sort of thing: can you get the same query
to run with one function definition for the first part of execution
and some other definition for the rest of execution? I tend to doubt
it, because I suspect we cache the function definition at some point.
If that's the case, caching the parallel-safety marking at the same
point seems OK too, or at least no worse than what we're doing
already. But on the other hand if it is possible for a query's notion
of the function definition to shift while the query is in flight, then
this is just another example of that and no worse than any other.
Instead of changing the parallel-safety flag, somebody could redefine
the function so that it divides by zero or produces a syntax error and
kaboom, running queries break. Either way, I don't see what the big
deal is. As long as we make the handling of parallel-safety consistent
with other ways the function could be concurrently redefined, it won't
suck any more than the current system already does, or in any
fundamentally new ways.
Even if this line of thinking is correct, there's a big issue for
partitioning hierarchies because there you need to know stuff about
relations that you don't have any other reason to open. I'm just
arguing that if there's no partitioning, the problem seems reasonably
solvable. Either you changed something about the relation, in which
case you've got to lock it and issue invalidations, or you've changed
something about the function, which could be handled via a new type of
invalidation. I don't really see why the cost would be particularly
bad. Suppose that for every relation, you have a flag which is either
PARALLEL_DML_SAFE, PARALLEL_DML_RESTRICTED, PARALLEL_DML_UNSAFE, or
PARALLEL_DML_SAFETY_UNKNOWN. When someone sends a message saying "some
existing function's parallel-safety changed!" you reset that flag for
every relation in the relcache to PARALLEL_DML_SAFETY_UNKNOWN. Then if
somebody does DML on that relation and we want to consider
parallelism, it's got to recompute that flag. None of that sounds
horribly expensive.
I mean, it could be somewhat annoying if you have 100k relations open
and sit around all day flipping parallel-safety markings on and off
and then doing a single-row insert after each flip, but if that's the
only scenario where we incur significant extra overhead from this kind
of design, it seems clearly better than forcing users to set a flag
manually. Maybe it isn't, but I don't really see what the other
problem would be right now. Except, of course, for partitioning, which
I'm not quite sure what to do about.
--
Robert Haas
EDB: http://www.enterprisedb.com
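[Editor's note: the four-state flag proposed above can be modeled as follows. This is a sketch of the proposal only; none of these identifiers exist in PostgreSQL today, and a single boolean stands in for the real recomputation over triggers, constraints, and expressions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the proposed four-state per-relation flag. */
typedef enum {
    PARALLEL_DML_SAFE,
    PARALLEL_DML_RESTRICTED,
    PARALLEL_DML_UNSAFE,
    PARALLEL_DML_SAFETY_UNKNOWN
} ParallelDmlSafety;

typedef struct {
    ParallelDmlSafety flag;
    bool has_unsafe_trigger;  /* stand-in for the real recomputation */
} Rel;

/* "Some existing function's parallel-safety changed!": reset every
 * relation's cached flag to UNKNOWN. */
void on_function_safety_changed(Rel *rels, int n)
{
    for (int i = 0; i < n; i++)
        rels[i].flag = PARALLEL_DML_SAFETY_UNKNOWN;
}

/* Before considering parallelism for DML, recompute lazily. */
ParallelDmlSafety rel_dml_safety(Rel *rel)
{
    if (rel->flag == PARALLEL_DML_SAFETY_UNKNOWN)
        rel->flag = rel->has_unsafe_trigger ? PARALLEL_DML_UNSAFE
                                            : PARALLEL_DML_SAFE;
    return rel->flag;
}
```

The reset is O(open relations) but only happens when someone changes a function's parallel-safety marking, which is expected to be rare; the cost of recomputation is deferred to the next DML that actually considers parallelism.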
On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Why do you think we don't need to check index AM functions? Say we
have an index expression that uses function and if its parallel safety
is changed then probably that also impacts whether we can do insert in
parallel. Because otherwise, we will end up executing some parallel
unsafe function in parallel mode during index insertion.
I'm not saying that we don't need to check index expressions. I agree
that we need to check those. The index AM functions are things like
btint4cmp(). I don't think that a function like that should ever be
parallel-unsafe.
Okay, but if we go with your suggested model where, whenever
there is a change in the parallel-safety of any function, we send
the new invalidation, then I think it won't matter whether the function
is associated with an index expression, a check constraint on the table,
or is used in any other way.
Even if this line of thinking is correct, there's a big issue for
partitioning hierarchies because there you need to know stuff about
relations that you don't have any other reason to open. I'm just
arguing that if there's no partitioning, the problem seems reasonably
solvable. Either you changed something about the relation, in which
case you've got to lock it and issue invalidations, or you've changed
something about the function, which could be handled via a new type of
invalidation. I don't really see why the cost would be particularly
bad. Suppose that for every relation, you have a flag which is either
PARALLEL_DML_SAFE, PARALLEL_DML_RESTRICTED, PARALLEL_DML_UNSAFE, or
PARALLEL_DML_SAFETY_UNKNOWN. When someone sends a message saying "some
existing function's parallel-safety changed!" you reset that flag for
every relation in the relcache to PARALLEL_DML_SAFETY_UNKNOWN. Then if
somebody does DML on that relation and we want to consider
parallelism, it's got to recompute that flag. None of that sounds
horribly expensive.
Sounds reasonable. I will think more on this and see if anything else
comes to mind apart from what you have mentioned.
I mean, it could be somewhat annoying if you have 100k relations open
and sit around all day flipping parallel-safety markings on and off
and then doing a single-row insert after each flip, but if that's the
only scenario where we incur significant extra overhead from this kind
of design, it seems clearly better than forcing users to set a flag
manually. Maybe it isn't, but I don't really see what the other
problem would be right now. Except, of course, for partitioning, which
I'm not quite sure what to do about.
Yeah, dealing with partitioned tables is tricky. I think if we don't
want to check upfront the parallel safety of all the partitions then
the other option as discussed could be to ask the user to specify the
parallel safety of partitioned tables. We can additionally check the
parallel safety of partitions when we are trying to insert into a
particular partition and error out if we detect any parallel-unsafe
clause and we are in parallel-mode. So, in this case, we won't be
completely relying on the users. Users can either change the parallel
safe option of the table or remove/change the parallel-unsafe clause
after error. The new invalidation message as we are discussing would
invalidate the parallel-safety for individual partitions but not the
root partition (partitioned table itself). For root partition, we will
rely on information specified by the user.
I am not sure if we have a simple way to check the parallel safety of
partitioned tables. In some way, we need to rely on user either (a) by
providing an option to specify whether parallel Inserts (and/or other
DMLs) can be performed, or (b) by providing a guc and/or rel option
which indicate that we can check the parallel-safety of all the
partitions. Yet another option that I don't like could be to
parallelize inserts on non-partitioned tables.
--
With Regards,
Amit Kapila.
On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Okay, but I think if we go with your suggested model where whenever
there is a change in parallel-safety of any function, we need to send
the new invalidation then I think it won't matter whether the function
is associated with index expression, check constraint in the table, or
is used in any other way.
No, it will still matter, because I'm proposing that the
parallel-safety of functions that we only access via operator classes
will not even be checked. Also, if we decided to make the system more
fine-grained - e.g. by invalidating on the specific OID of the
function that was changed rather than doing something that is
database-wide or global - then it matters even more.
Yeah, dealing with partitioned tables is tricky. I think if we don't
want to check upfront the parallel safety of all the partitions then
the other option as discussed could be to ask the user to specify the
parallel safety of partitioned tables.
Just to be clear here, I don't think it really matters what we *want*
to do. I don't think it's reasonably *possible* to check all the
partitions, because we don't hold locks on them. When we're assessing
a bunch of stuff related to an individual relation, we have a lock on
it. I think - though we should double-check tablecmds.c - that this is
enough to prevent all of the dependent objects - triggers,
constraints, etc. - from changing. So the stuff we care about is
stable. But the situation with a partitioned table is different. In
that case, we can't even examine that stuff without locking all the
partitions. And even if we do lock all the partitions, the stuff could
change immediately afterward and we wouldn't know. So I think it would
be difficult to make it correct.
Now, maybe it could be done, and I think that's worth a little more
thought. For example, perhaps whenever we invalidate a relation, we
could also somehow send some new, special kind of invalidation for its
parent saying, essentially, "hey, one of your children has changed in
a way you might care about." But that's not good enough, because it
only goes up one level. The grandparent would still be unaware that a
change it potentially cares about has occurred someplace down in the
partitioning hierarchy. That seems hard to patch up, again because of
the locking rules. The child can know the OID of its parent without
locking the parent, but it can't know the OID of its grandparent
without locking its parent. Walking up the whole partitioning
hierarchy might be an issue for a number of reasons, including
possible deadlocks, and possible race conditions where we don't emit
all of the right invalidations in the face of concurrent changes. So I
don't quite see a way around this part of the problem, but I may well
be missing something. In fact I hope I am missing something, because
solving this problem would be really nice.
We can additionally check the
parallel safety of partitions when we are trying to insert into a
particular partition and error out if we detect any parallel-unsafe
clause and we are in parallel-mode. So, in this case, we won't be
completely relying on the users. Users can either change the parallel
safe option of the table or remove/change the parallel-unsafe clause
after error. The new invalidation message as we are discussing would
invalidate the parallel-safety for individual partitions but not the
root partition (partitioned table itself). For root partition, we will
rely on information specified by the user.
Yeah, that may be the best we can do. Just to be clear, I think we
would want to check whether the relation is still parallel-safe at the
start of the operation, but not have a run-time check at each function
call.
I am not sure if we have a simple way to check the parallel safety of
partitioned tables. In some way, we need to rely on user either (a) by
providing an option to specify whether parallel Inserts (and/or other
DMLs) can be performed, or (b) by providing a guc and/or rel option
which indicate that we can check the parallel-safety of all the
partitions. Yet another option that I don't like could be to
parallelize inserts on non-partitioned tables.
If we figure out a way to check the partitions automatically that
actually works, we don't need a switch for it; we can (and should)
just do it that way all the time. But if we can't come up with a
correct algorithm for that, then we'll need to add some kind of option
where the user declares whether it's OK.
--
Robert Haas
EDB: http://www.enterprisedb.com
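[Editor's note: the "one level up" problem described above can be made concrete with a toy model. All names are hypothetical; array indices stand in for OIDs, and each entry knows only its direct parent, mirroring the locking constraint that a child can learn its parent's OID but not its grandparent's.]

```c
#include <assert.h>
#include <stdbool.h>

#define NO_PARENT (-1)

/* Hypothetical model of a partitioning hierarchy: each relation
 * knows only its direct parent. */
typedef struct {
    int  parent;        /* index of parent table, or NO_PARENT */
    bool safety_valid;  /* is the cached parallel-safety still trusted? */
} PartRel;

/* Invalidate a partition and forward one level up, as the proposed
 * "special kind of invalidation for its parent" would do. */
void invalidate_with_parent_notify(PartRel *rels, int child)
{
    rels[child].safety_valid = false;
    if (rels[child].parent != NO_PARENT)
        rels[rels[child].parent].safety_valid = false;
}
```

Running this on a three-level hierarchy shows exactly the gap described: the direct parent hears about the change, but the grandparent's cached state is left stale.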
On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Yeah, this could be one idea but I think even if we use pg_proc OID,
we still need to check all the rel cache entries to find which one
contains the invalidated OID and that could be expensive. I wonder
can't we directly find the relation involved and register invalidation
for the same? We are able to find the relation to which trigger
function is associated during drop function via findDependentObjects
by scanning pg_depend. Assuming, we are able to find the relation for
trigger function by scanning pg_depend, what kinds of problems do we
envision in registering the invalidation for the same?
I don't think that finding the relation involved and registering an
invalidation for the same will work properly. Suppose there is a
concurrently-running transaction which has created a new table and
attached a trigger function to it. You can't see any of the catalog
entries for that relation yet, so you can't invalidate it, but
invalidation needs to happen. Even if you used some snapshot that can
see those catalog entries before they are committed, I doubt it fixes
the race condition. You can't hold any lock on that relation, because
the creating transaction holds AccessExclusiveLock, but the whole
invalidation mechanism is built around the assumption that the sender
puts messages into the shared queue first and then releases locks,
while the receiver first acquires a conflicting lock and then
processes messages from the queue.
Won't such messages be processed at start transaction time
(AtStart_Cache->AcceptInvalidationMessages)?
Without locks, that synchronization
algorithm can't work reliably. As a consequence of all that, I believe
that, not just in this particular case but in general, the
invalidation message needs to describe the thing that has actually
changed, rather than any derived property. We can make invalidations
that say "some function's parallel-safety flag has changed" or "this
particular function's parallel-safety flag has changed" or "this
particular function has changed in some way" (this one, we have
already), but anything that involves trying to figure out what the
consequences of such a change might be and saying "hey, you, please
update XYZ because I changed something somewhere that could affect
that" is not going to be correct.
I think we probably need to worry
dependent objects and if there are any race conditions in doing so as
pointed out by Tom in his email [1]. The concern related to cost could
be addressed by your idea of registering such an invalidation only
when the user changes the parallel safety of the function which we
don't expect to be a frequent operation. Now, here the race condition,
I could think of could be that by the time we change parallel-safety
(say making it unsafe) of a function, some of the other sessions might
have already started processing insert on a relation where that
function is associated via trigger or check constraint in which case
there could be a problem. I think to avoid that we need to acquire an
Exclusive lock on the relation as we are doing in Rename Policy kind
of operations.
Well, the big issue here is that we don't actually lock functions
while they are in use. So there's absolutely nothing that prevents a
function from being altered in any arbitrary way, or even dropped,
while code that uses it is running. I don't really know what happens
in practice if you do that sort of thing: can you get the same query
to run with one function definition for the first part of execution
and some other definition for the rest of execution? I tend to doubt
it, because I suspect we cache the function definition at some point.
It is possible that in the same statement execution a different
function definition can be executed. Say, in session-1 we are
inserting three rows, on first-row execution definition-1 of function
in index expression gets executed. Now, from session-2, we change the
definition of the function to definition-2. Now, in session-1, on
second-row insertion, while executing definition-1 of function, we
insert in another table that will accept the invalidation message
registered in session-2. Now, on third-row insertion, the new
definition (definition-2) of function will be executed.
If that's the case, caching the parallel-safety marking at the same
point seems OK too, or at least no worse than what we're doing
already. But on the other hand if it is possible for a query's notion
of the function definition to shift while the query is in flight, then
this is just another example of that and no worse than any other.
Instead of changing the parallel-safety flag, somebody could redefine
the function so that it divides by zero or produces a syntax error and
kaboom, running queries break. Either way, I don't see what the big
deal is. As long as we make the handling of parallel-safety consistent
with other ways the function could be concurrently redefined, it won't
suck any more than the current system already does, or in any
fundamentally new ways.
Okay, so, in this scheme, we have allowed changing the function
definition during statement execution but even though the rel's
parallel-safe property gets modified (say to parallel-unsafe), we will
still proceed in parallel-mode as if it's not changed. I guess this
may not be a big deal as we can anyway allow breaking the running
statement by changing its definition and users may be okay if the
parallel statement errors out or behave in an unpredictable way in
such corner cases.
--
With Regards,
Amit Kapila.
On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Yeah, dealing with partitioned tables is tricky. I think if we don't
want to check upfront the parallel safety of all the partitions then
the other option as discussed could be to ask the user to specify the
parallel safety of partitioned tables.
Just to be clear here, I don't think it really matters what we *want* to do. I don't
think it's reasonably *possible* to check all the partitions, because we don't
hold locks on them. When we're assessing a bunch of stuff related to an
individual relation, we have a lock on it. I think - though we should
double-check tablecmds.c - that this is enough to prevent all of the dependent
objects - triggers, constraints, etc. - from changing. So the stuff we care about
is stable. But the situation with a partitioned table is different. In that case, we
can't even examine that stuff without locking all the partitions. And even if we
do lock all the partitions, the stuff could change immediately afterward and we
wouldn't know. So I think it would be difficult to make it correct.
Now, maybe it could be done, and I think that's worth a little more thought. For
example, perhaps whenever we invalidate a relation, we could also somehow
send some new, special kind of invalidation for its parent saying, essentially,
"hey, one of your children has changed in a way you might care about." But
that's not good enough, because it only goes up one level. The grandparent
would still be unaware that a change it potentially cares about has occurred
someplace down in the partitioning hierarchy. That seems hard to patch up,
again because of the locking rules. The child can know the OID of its parent
without locking the parent, but it can't know the OID of its grandparent without
locking its parent. Walking up the whole partitioning hierarchy might be an
issue for a number of reasons, including possible deadlocks, and possible race
conditions where we don't emit all of the right invalidations in the face of
concurrent changes. So I don't quite see a way around this part of the problem,
but I may well be missing something. In fact I hope I am missing something,
because solving this problem would be really nice.
I think the check for partitions could be even more complicated if we need to
check the parallel safety of partition key expressions. If a user directly inserts into
a partition, then we need to invoke ExecPartitionCheck, which will execute all of its
parents' and grandparents' partition key expressions. It means that if we change a
parent table's partition key expression (by 1) changing a function in the expression or 2) attaching
the parent table as a partition of another parent table), then we need to invalidate
the relcache of all its children.
BTW, currently, if a user attaches a partitioned table 'A' as a partition of another
partitioned table 'B', the children of 'A' will not be invalidated.
Best regards,
houzj
On Tue, Jun 15, 2021 at 7:31 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Okay, but I think if we go with your suggested model where whenever
there is a change in parallel-safety of any function, we need to send
the new invalidation then I think it won't matter whether the function
is associated with index expression, check constraint in the table, or
is used in any other way.

No, it will still matter, because I'm proposing that the
parallel-safety of functions that we only access via operator classes
will not even be checked.
I am not very clear on what exactly you have in your mind in this
regard. I understand that while computing parallel-safety for a rel we
don't need to consider functions that we only access via operator
class but how do we distinguish such functions during Alter Function?
Is there a simple way to deduce that this is an operator class
function so don't register invalidation for it? Shall we check it via
pg_depend?
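For what it's worth, here is a rough, untested sketch (my assumption about the flow, not committed code) of what such a pg_depend lookup from the function side might look like, modeled on the dependency scan in findDependentObjects(). Note that for triggers and constraints the dependent object recorded in pg_depend is the trigger/constraint itself rather than the table, so an extra hop (e.g. via pg_trigger.tgrelid) would be needed:

```c
/*
 * Untested sketch: find objects that reference a given function via
 * pg_depend's reference index and register relcache invalidations.
 * The overall flow is an assumption; only direct relation dependents
 * are handled here.
 */
Relation    depRel;
ScanKeyData key[2];
SysScanDesc scan;
HeapTuple   tup;

depRel = table_open(DependRelationId, AccessShareLock);

ScanKeyInit(&key[0],
            Anum_pg_depend_refclassid,
            BTEqualStrategyNumber, F_OIDEQ,
            ObjectIdGetDatum(ProcedureRelationId));
ScanKeyInit(&key[1],
            Anum_pg_depend_refobjid,
            BTEqualStrategyNumber, F_OIDEQ,
            ObjectIdGetDatum(funcoid));

scan = systable_beginscan(depRel, DependReferenceIndexId, true,
                          NULL, 2, key);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
    Form_pg_depend dep = (Form_pg_depend) GETSTRUCT(tup);

    /* Only dependent objects that are themselves relations. */
    if (dep->classid == RelationRelationId)
        CacheInvalidateRelcacheByRelid(dep->objid);
}
systable_endscan(scan);
table_close(depRel, AccessShareLock);
```

Distinguishing operator class functions could then perhaps be done by the referencing classid, but that part is unverified.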
We can additionally check the
parallel safety of partitions when we are trying to insert into a
particular partition and error out if we detect any parallel-unsafe
clause and we are in parallel-mode. So, in this case, we won't be
completely relying on the users. Users can either change the parallel
safe option of the table or remove/change the parallel-unsafe clause
after error. The new invalidation message as we are discussing would
invalidate the parallel-safety for individual partitions but not the
root partition (partitioned table itself). For root partition, we will
rely on information specified by the user.

Yeah, that may be the best we can do. Just to be clear, I think we
would want to check whether the relation is still parallel-safe at the
start of the operation, but not have a run-time check at each function
call.
Agreed, that is what I also had in mind.
I am not sure if we have a simple way to check the parallel safety of
partitioned tables. In some way, we need to rely on user either (a) by
providing an option to specify whether parallel Inserts (and/or other
DMLs) can be performed, or (b) by providing a guc and/or rel option
which indicate that we can check the parallel-safety of all the
partitions. Yet another option that I don't like could be to
parallelize inserts on non-partitioned tables.

If we figure out a way to check the partitions automatically that
actually works, we don't need a switch for it; we can (and should)
just do it that way all the time. But if we can't come up with a
correct algorithm for that, then we'll need to add some kind of option
where the user declares whether it's OK.
Yeah, so let us think for some more time and see if we can come up
with something better for partitions, otherwise, we can sort out
things further in this direction.
--
With Regards,
Amit Kapila.
On Tue, Jun 15, 2021 at 8:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Jun 14, 2021 at 9:08 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jun 14, 2021 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Yeah, this could be one idea but I think even if we use pg_proc OID,
we still need to check all the rel cache entries to find which one
contains the invalidated OID and that could be expensive. I wonder
can't we directly find the relation involved and register invalidation
for the same? We are able to find the relation to which trigger
function is associated during drop function via findDependentObjects
by scanning pg_depend. Assuming, we are able to find the relation for
trigger function by scanning pg_depend, what kinds of problems do we
envision in registering the invalidation for the same?

I don't think that finding the relation involved and registering an
invalidation for the same will work properly. Suppose there is a
concurrently-running transaction which has created a new table and
attached a trigger function to it. You can't see any of the catalog
entries for that relation yet, so you can't invalidate it, but
invalidation needs to happen. Even if you used some snapshot that can
see those catalog entries before they are committed, I doubt it fixes
the race condition. You can't hold any lock on that relation, because
the creating transaction holds AccessExclusiveLock, but the whole
invalidation mechanism is built around the assumption that the sender
puts messages into the shared queue first and then releases locks,
while the receiver first acquires a conflicting lock and then
processes messages from the queue.

Won't such messages be processed at start transaction time
(AtStart_Cache->AcceptInvalidationMessages)?
Even if we accept the invalidation at start transaction time, we need to
accept and execute it after taking a lock on the relation to ensure that
the relation doesn't change afterward. I think what I mentioned doesn't
break this assumption because after finding a relation we will take a
lock on it before registering the invalidation, so in the above
scenario, it should wait before registering the invalidation.
--
With Regards,
Amit Kapila.
On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Jun 15, 2021 at 7:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Okay, but I think if we go with your suggested model where whenever
there is a change in parallel-safety of any function, we need to send
the new invalidation then I think it won't matter whether the function
is associated with index expression, check constraint in the table, or
is used in any other way.

No, it will still matter, because I'm proposing that the parallel-safety of
functions that we only access via operator classes will not even be checked.
Also, if we decided to make the system more fine-grained - e.g. by invalidating
on the specific OID of the function that was changed rather than doing
something that is database-wide or global - then it matters even more.

Yeah, dealing with partitioned tables is tricky. I think if we don't
want to check upfront the parallel safety of all the partitions then
the other option as discussed could be to ask the user to specify the
parallel safety of partitioned tables.

Just to be clear here, I don't think it really matters what we *want* to do. I don't
think it's reasonably *possible* to check all the partitions, because we don't
hold locks on them. When we're assessing a bunch of stuff related to an
individual relation, we have a lock on it. I think - though we should
double-check tablecmds.c - that this is enough to prevent all of the dependent
objects - triggers, constraints, etc. - from changing. So the stuff we care about
is stable. But the situation with a partitioned table is different. In that case, we
can't even examine that stuff without locking all the partitions. And even if we
do lock all the partitions, the stuff could change immediately afterward and we
wouldn't know. So I think it would be difficult to make it correct.

Now, maybe it could be done, and I think that's worth a little more thought. For
example, perhaps whenever we invalidate a relation, we could also somehow
send some new, special kind of invalidation for its parent saying, essentially,
"hey, one of your children has changed in a way you might care about." But
that's not good enough, because it only goes up one level. The grandparent
would still be unaware that a change it potentially cares about has occurred
someplace down in the partitioning hierarchy. That seems hard to patch up,
again because of the locking rules. The child can know the OID of its parent
without locking the parent, but it can't know the OID of its grandparent without
locking its parent. Walking up the whole partitioning hierarchy might be an
issue for a number of reasons, including possible deadlocks, and possible race
conditions where we don't emit all of the right invalidations in the face of
concurrent changes. So I don't quite see a way around this part of the problem,
but I may well be missing something. In fact I hope I am missing something,
because solving this problem would be really nice.
For partitions, I think postgres already has the logic for recursively finding
the parent table[1]. Can we copy that logic to send several invalidation messages that
invalidate the parent and grandparent (and so on) relcaches if a partition's parallel safety changes?
Although it means we need more locks (on its parents) when the parallel safety
changes, it seems that's not a frequent scenario and looks acceptable.
[1]: In generate_partition_qual():
parentrelid = get_partition_parent(RelationGetRelid(rel), true);
parent = relation_open(parentrelid, AccessShareLock);
...
/* Add the parent's quals to the list (if any) */
if (parent->rd_rel->relispartition)
result = list_concat(generate_partition_qual(parent), my_qual);
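Assuming that existing upward walk, a rough, untested sketch (my assumption, not a worked-out patch) of what such a recursive ancestor invalidation might look like:

```c
/*
 * Untested sketch: invalidate the relcache of every ancestor of a
 * partition when its parallel safety changes.  Mirrors the recursion
 * in generate_partition_qual(); the function name and lock level here
 * are hypothetical.
 */
static void
InvalidatePartitionAncestors(Relation rel)
{
    if (rel->rd_rel->relispartition)
    {
        Oid         parentrelid;
        Relation    parent;

        parentrelid = get_partition_parent(RelationGetRelid(rel), true);
        parent = relation_open(parentrelid, AccessShareLock);

        CacheInvalidateRelcache(parent);
        InvalidatePartitionAncestors(parent);   /* recurse to grandparent */

        relation_close(parent, AccessShareLock);
    }
}
```

As noted elsewhere in the thread, acquiring locks while walking upward like this carries a deadlock risk, so this is only a starting point.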
Besides, I have a possibly crazy idea that maybe it's not necessary to invalidate the
relcache when the parallel safety of a function is changed.
I took a look at how postgres currently behaves, and found that even if a user changes
a function (CREATE OR REPLACE/ALTER FUNCTION) which is used in
objects (like a constraint, an index expression, or a partition key expression),
the data in the relation won't be rechecked. And as the doc says[2], it is *not recommended*
to change a function which is already used in some other objects. The
recommended way to handle such a change is to drop the object, adjust the function
definition, and re-add the object. Maybe we only need to care about a parallel safety
change when creating or dropping an object (constraint, index, partition, or trigger). And
we can check the parallel safety when inserting into a particular table; if we find a function
not allowed in parallel mode, which means someone changed the function's parallel safety,
then we can invalidate the relcache and error out.
[2]: https://www.postgresql.org/docs/14/ddl-constraints.html
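A minimal, untested sketch of the insert-time check proposed above (the exact placement in the insert path is an assumption; func_parallel() and get_func_name() are existing lsyscache helpers):

```c
/*
 * Untested sketch: at insert time, if a function we are about to rely
 * on turns out to be parallel-unsafe while we are in parallel mode,
 * invalidate the relcache entry and error out.  Placement is an
 * assumption, not committed code.
 */
if (IsInParallelMode() &&
    func_parallel(funcoid) == PROPARALLEL_UNSAFE)
{
    CacheInvalidateRelcacheByRelid(RelationGetRelid(rel));
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
             errmsg("parallel-unsafe function \"%s\" was called in parallel mode",
                    get_func_name(funcoid))));
}
```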
Best regards,
houzj
On Tue, Jun 15, 2021 at 10:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I don't think that finding the relation involved and registering an
invalidation for the same will work properly. Suppose there is a
concurrently-running transaction which has created a new table and
attached a trigger function to it. You can't see any of the catalog
entries for that relation yet, so you can't invalidate it, but
invalidation needs to happen. Even if you used some snapshot that can
see those catalog entries before they are committed, I doubt it fixes
the race condition. You can't hold any lock on that relation, because
the creating transaction holds AccessExclusiveLock, but the whole
invalidation mechanism is built around the assumption that the sender
puts messages into the shared queue first and then releases locks,
while the receiver first acquires a conflicting lock and then
processes messages from the queue.

Won't such messages be processed at start transaction time
(AtStart_Cache->AcceptInvalidationMessages)?
Only if they show up in the queue before that. But there's nothing
forcing that to happen. You don't seem to understand how important
heavyweight locking is to the whole shared invalidation message
system....
Okay, so, in this scheme, we have allowed changing the function
definition during statement execution but even though the rel's
parallel-safe property gets modified (say to parallel-unsafe), we will
still proceed in parallel-mode as if it's not changed. I guess this
may not be a big deal as we can anyway allow breaking the running
statement by changing its definition and users may be okay if the
parallel statement errors out or behaves in an unpredictable way in
such corner cases.
Yeah, I mean, it's no different than leaving the parallel-safety
marking exactly as it was and changing the body of the function to
call some other function marked parallel-unsafe. I don't think we've
gotten any complaints about that, because I don't think it would
normally have any really bad consequences; most likely you'd just get
an error saying that something-or-other isn't allowed in parallel
mode. If it does have bad consequences, then I guess we'll have to fix
it when we find out about it, but in the meantime there's no reason to
hold the parallel-safety flag to a stricter standard than the function
body.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, Jun 16, 2021 at 9:22 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Jun 15, 2021 at 10:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I don't think that finding the relation involved and registering an
invalidation for the same will work properly. Suppose there is a
concurrently-running transaction which has created a new table and
attached a trigger function to it. You can't see any of the catalog
entries for that relation yet, so you can't invalidate it, but
invalidation needs to happen. Even if you used some snapshot that can
see those catalog entries before they are committed, I doubt it fixes
the race condition. You can't hold any lock on that relation, because
the creating transaction holds AccessExclusiveLock, but the whole
invalidation mechanism is built around the assumption that the sender
puts messages into the shared queue first and then releases locks,
while the receiver first acquires a conflicting lock and then
processes messages from the queue.

Won't such messages be processed at start transaction time
(AtStart_Cache->AcceptInvalidationMessages)?

Only if they show up in the queue before that. But there's nothing
forcing that to happen. You don't seem to understand how important
heavyweight locking is to the whole shared invalidation message
system....
I have responded about heavy-weight locking stuff in my next email [1]
and why I think the approach I mentioned will work. I don't deny that
I might be missing something here.
[1]: /messages/by-id/CAA4eK1+T2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g@mail.gmail.com
--
With Regards,
Amit Kapila.
On Thu, Jun 17, 2021 at 4:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I have responded about heavy-weight locking stuff in my next email [1]
and why I think the approach I mentioned will work. I don't deny that
I might be missing something here.

[1] - /messages/by-id/CAA4eK1+T2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g@mail.gmail.com
I mean I saw that but I don't see how it addresses the visibility
issue. There could be a relation that is not visible to your snapshot
and upon which AccessExclusiveLock is held which needs to be
invalidated. You can't lock it because it's AccessExclusiveLock'd
already.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, Jun 17, 2021 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jun 17, 2021 at 4:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I have responded about heavy-weight locking stuff in my next email [1]
and why I think the approach I mentioned will work. I don't deny that
I might be missing something here.

[1] - /messages/by-id/CAA4eK1+T2CWqp40YqYttDA1Skk7wK6yDrkCD5GZ80QGr5ze-6g@mail.gmail.com
I mean I saw that but I don't see how it addresses the visibility
issue.
I thought if we scan a system catalog using DirtySnapshot, then we
should be able to find such a relation. But, if the system catalog is
updated after our scan then surely we won't be able to see it and in
that case, we won't be able to send the invalidation. Now, say the rel
is not visible to us because of the snapshot we used or due to some
race condition; then we won't be able to send the invalidation. But why
should we consider that worse than the case where we miss such
invalidations (invalidations due to a change of the parallel-safe property)
while the insertion into the relation is in progress?
There could be a relation that is not visible to your snapshot
and upon which AccessExclusiveLock is held which needs to be
invalidated. You can't lock it because it's AccessExclusiveLock'd
already.
Yeah, the session in which we are doing Alter Function won't be able
to lock it but it will wait for the AccessExclusiveLock on the rel to
be released because it will also try to acquire it before sending
invalidation.
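The ordering being described could be sketched roughly as follows (untested; the lock level chosen is my assumption):

```c
/*
 * Untested sketch: the session running ALTER FUNCTION locks each
 * dependent relation before registering its relcache invalidation, so
 * it blocks behind any concurrent AccessExclusiveLock (e.g. held by a
 * creating transaction) instead of racing with it.
 */
LockRelationOid(relid, ShareUpdateExclusiveLock);  /* waits for creator */
CacheInvalidateRelcacheByRelid(relid);
UnlockRelationOid(relid, ShareUpdateExclusiveLock);
```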
--
With Regards,
Amit Kapila.
On Wednesday, June 16, 2021 11:27 AM houzj.fnst@fujitsu.com wrote:
On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:
Just to be clear here, I don't think it really matters what we *want*
to do. I don't think it's reasonably *possible* to check all the
partitions, because we don't hold locks on them. When we're assessing
a bunch of stuff related to an individual relation, we have a lock on
it. I think - though we should double-check tablecmds.c - that this is
enough to prevent all of the dependent objects - triggers,
constraints, etc. - from changing. So the stuff we care about is
stable. But the situation with a partitioned table is different. In
that case, we can't even examine that stuff without locking all the
partitions. And even if we do lock all the partitions, the stuff could change
immediately afterward and we wouldn't know. So I think it would be difficult to
make it correct.
Now, maybe it could be done, and I think that's worth a little more
thought. For example, perhaps whenever we invalidate a relation, we
could also somehow send some new, special kind of invalidation for its
parent saying, essentially, "hey, one of your children has changed in
a way you might care about." But that's not good enough, because it
only goes up one level. The grandparent would still be unaware that a
change it potentially cares about has occurred someplace down in the
partitioning hierarchy. That seems hard to patch up, again because of
the locking rules. The child can know the OID of its parent without
locking the parent, but it can't know the OID of its grandparent
without locking its parent. Walking up the whole partitioning
hierarchy might be an issue for a number of reasons, including
possible deadlocks, and possible race conditions where we don't emit
all of the right invalidations in the face of concurrent changes. So I
don't quite see a way around this part of the problem, but I may well be
missing something. In fact I hope I am missing something, because solving this
problem would be really nice.

I think the check of partition could be even more complicated if we need to
check the parallel safety of partition key expression. If user directly insert into a
partition, then we need to invoke ExecPartitionCheck which will execute all its
parent's and grandparent's partition key expressions. It means if we change a
parent table's partition key expression(by 1) change function in expr or 2)
attach the parent table as partition of another parent table), then we need to
invalidate all its children's relcaches.

BTW, currently, if a user attaches a partitioned table 'A' as a partition of another
partitioned table 'B', the child of 'A' will not be invalidated.
To be honest, I didn't find a cheap way to invalidate a partitioned table's
parallel safety automatically. For one thing, we need to recurse higher
in the partition tree to invalidate all the parent tables' relcaches (and perhaps
all their children's relcaches) not only when altering a function's parallel safety,
but also for DDLs which invalidate a partition's relcache, such as
CREATE/DROP INDEX/TRIGGER/CONSTRAINT. It seems too expensive. For another,
even if we could invalidate the partitioned table's parallel safety
automatically, we would still need to lock all the partitions when checking the table's
parallel safety, because a partition's parallel safety could change
after the check.

So, IMO, at least for partitioned tables, an explicit flag looks more acceptable.
For regular tables, it seems we can work it out automatically, although
I am not sure whether anyone thinks that looks a bit inconsistent.
Best regards,
houzj
On Mon, Jun 21, 2021 at 12:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I thought if we scan a system catalog using DirtySnapshot, then we
should be able to find such a relation. But, if the system catalog is
updated after our scan then surely we won't be able to see it and in
that case, we won't be able to send the invalidation. Now, say the rel
is not visible to us because of the snapshot we used or due to some
race condition; then we won't be able to send the invalidation. But why
should we consider that worse than the case where we miss such
invalidations (invalidations due to a change of the parallel-safe property)
while the insertion into the relation is in progress?
A concurrent change is something quite different than a change that
happened some time in the past. We all know that DROP TABLE blocks if
it is run while the table is in use, and everybody considers that
acceptable, but if DROP TABLE were to block because the table was in
use at some previous time, everybody would complain, and rightly so.
The same principle applies here. It's not possible to react to a
change that happens in the middle of the query. Somebody could argue
that we ought to lock all the functions we're using against concurrent
changes so that attempts to change their properties block on a lock
rather than succeeding. But given that that's not how it works, we can
hardly go back in time and switch to a non-parallel plan after we've
already decided on a parallel one. On the other hand, we should be
able to notice a change that has *already completed* at the time we do
planning. I don't see how we can blame failure to do that on anything
other than bad coding.
Yeah, the session in which we are doing Alter Function won't be able
to lock it but it will wait for the AccessExclusiveLock on the rel to
be released because it will also try to acquire it before sending
invalidation.
I think users would not be very happy with such behavior. Users accept
that if they try to access a relation, they might end up needing to
wait for a lock on it, but what you are proposing here might make a
session block waiting for a lock on a relation which it never
attempted to access.
I think this whole line of attack is a complete dead-end. We can
invent new types of invalidations if we want, but they need to be sent
based on which objects actually got changed, not based on what we
think might be affected indirectly as a result of those changes. It's
reasonable to regard something like a trigger or constraint as a
property of the table because it is really a dependent object. It is
associated with precisely one table when it is created and the
association can never be changed. On the other hand, functions clearly
have their own existence. They can be created and dropped
independently of any table and the tables with which they are
associated can change at any time. In that kind of situation,
invalidating the table based on changes to the function is riddled
with problems which I am pretty convinced we're never going to be able
to solve. I'm not 100% sure what we ought to do here, but I'm pretty
sure that looking up the tables that happen to be associated with the
function in the session that is modifying the function is not it.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Monday, June 21, 2021 11:23 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jun 21, 2021 at 12:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
Yeah, the session in which we are doing Alter Function won't be able
to lock it but it will wait for the AccessExclusiveLock on the rel to
be released because it will also try to acquire it before sending
invalidation.

I think users would not be very happy with such behavior. Users accept that if
they try to access a relation, they might end up needing to wait for a lock on it,
but what you are proposing here might make a session block waiting for a lock
on a relation which it never attempted to access.

I think this whole line of attack is a complete dead-end. We can invent new
types of invalidations if we want, but they need to be sent based on which
objects actually got changed, not based on what we think might be affected
indirectly as a result of those changes. It's reasonable to regard something like
a trigger or constraint as a property of the table because it is really a
dependent object. It is associated with precisely one table when it is created
and the association can never be changed. On the other hand, functions clearly
have their own existence. They can be created and dropped independently of
any table and the tables with which they are associated can change at any time.
In that kind of situation, invalidating the table based on changes to the function
is riddled with problems which I am pretty convinced we're never going to be
able to solve. I'm not 100% sure what we ought to do here, but I'm pretty sure
that looking up the tables that happen to be associated with the function in the
session that is modifying the function is not it.
I agree that we should send an invalidation message like
"function OID's parallel safety has changed". And when each session accepts
this invalidation message, it needs to invalidate the related tables. Based on
previous mails, we only want to invalidate the tables that use this function in an
index expression/trigger/constraint. The problem is how to get all the related
tables. Robert-san suggested caching a list of pg_proc OIDs, but that means we need
to rebuild the list every time the relcache is invalidated. The cost of doing that
could be expensive, especially for extracting pg_proc OIDs from index expressions,
because we need to invoke index_open(index, lock) to get the index expression.
Or, maybe we can let each session use pg_depend to get the related tables and
invalidate them after accepting the new type of invalidation message.
Best regards,
houzj
On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
I think the check of partition could be even more complicated if we need to
check the parallel safety of partition key expression. If user directly insert into
a partition, then we need invoke ExecPartitionCheck which will execute all it's
parent's and grandparent's partition key expressions. It means if we change a
parent table's partition key expression(by 1) change function in expr or 2) attach
the parent table as partition of another parent table), then we need to invalidate
all its child's relcache.
I think we already invalidate the child entries when we add/drop
constraints on a parent table. See ATAddCheckConstraint,
ATExecDropConstraint. If I am not missing anything then this case
shouldn't be a problem. Do you have something else in mind?
--
With Regards,
Amit Kapila.
On Tuesday, June 22, 2021 8:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:
I think the check of partition could be even more complicated if we
need to check the parallel safety of partition key expression. If user
directly insert into a partition, then we need invoke
ExecPartitionCheck which will execute all it's parent's and
grandparent's partition key expressions. It means if we change a
parent table's partition key expression(by 1) change function in expr
or 2) attach the parent table as partition of another parent table), then we need to invalidate all its child's relcache.
I think we already invalidate the child entries when we add/drop constraints on
a parent table. See ATAddCheckConstraint, ATExecDropConstraint. If I am not
missing anything then this case shouldn't be a problem. Do you have
something else in mind?
Currently, attaching/detaching a partition doesn't invalidate the child entries
recursively, except when detaching a partition concurrently, which adds a
constraint to all the children. Do you mean we can add logic to
invalidate the child entries recursively when attaching/detaching a partition?
Best regards,
houzj
On Wed, Jun 23, 2021 at 6:35 AM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
On Tuesday, June 22, 2021 8:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Wed, Jun 16, 2021 at 8:57 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:
I think the check of partition could be even more complicated if we
need to check the parallel safety of partition key expression. If user
directly insert into a partition, then we need invoke
ExecPartitionCheck which will execute all it's parent's and
grandparent's partition key expressions. It means if we change a
parent table's partition key expression(by 1) change function in expr
or 2) attach the parent table as partition of another parent table), then we need to invalidate all its child's relcache.
I think we already invalidate the child entries when we add/drop constraints on
a parent table. See ATAddCheckConstraint, ATExecDropConstraint. If I am not
missing anything then this case shouldn't be a problem. Do you have
something else in mind?

Currently, attach/detach a partition doesn't invalidate the child entries
recursively, except when detach a partition concurrently which will add a
constraint to all the child. Do you mean we can add the logic about
invalidating the child entries recursively when attach/detach a partition ?
I was talking about adding/dropping CHECK or other constraints on
partitioned tables via Alter Table. I think if attach/detach leads to
change in constraints of child tables then either they should
invalidate child rels to avoid problems in the existing sessions or if
it is not doing due to a reason then probably it might not matter. I
see that you have started a separate thread [1] to confirm the
behavior of attach/detach partition and we might want to decide based
on the conclusion of that thread.
[1]: /messages/by-id/OS3PR01MB5718DA1C4609A25186D1FBF194089@OS3PR01MB5718.jpnprd01.prod.outlook.com
--
With Regards,
Amit Kapila.
On Wed, Jun 16, 2021 at 6:10 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:
Now, maybe it could be done, and I think that's worth a little more thought. For
example, perhaps whenever we invalidate a relation, we could also somehow
send some new, special kind of invalidation for its parent saying, essentially,
"hey, one of your children has changed in a way you might care about." But
that's not good enough, because it only goes up one level. The grandparent
would still be unaware that a change it potentially cares about has occurred
someplace down in the partitioning hierarchy. That seems hard to patch up,
again because of the locking rules. The child can know the OID of its parent
without locking the parent, but it can't know the OID of its grandparent without
locking its parent. Walking up the whole partitioning hierarchy might be an
issue for a number of reasons, including possible deadlocks, and possible race
conditions where we don't emit all of the right invalidations in the face of
concurrent changes. So I don't quite see a way around this part of the problem,
but I may well be missing something. In fact I hope I am missing something,
because solving this problem would be really nice.
For partitions, I think postgres already has the logic for recursively finding
the parent table [1]. Can we copy that logic to send several invalidation
messages that invalidate the parent's and grandparent's... relcache entries
if a partition's parallel safety changes?
Although it means we need more locks (on its parents) when the parallel safety
changes, that doesn't seem to be a frequent scenario and looks acceptable.
[1] In generate_partition_qual():
parentrelid = get_partition_parent(RelationGetRelid(rel), true);
parent = relation_open(parentrelid, AccessShareLock);
...
/* Add the parent's quals to the list (if any) */
if (parent->rd_rel->relispartition)
result = list_concat(generate_partition_qual(parent), my_qual);
As shown by me in another email [1], such a coding pattern can lead to
deadlock. It is because in some DDL operations we walk the partition
hierarchy from top to down and if we walk from bottom to upwards, then
that can lead to deadlock. I think this is a dangerous coding pattern
and we shouldn't try to replicate it.
[1]: /messages/by-id/CAA4eK1LsFpjK5gL+0HEvoqB2DJVOi19vGAWbZBEx8fACOi5+_A@mail.gmail.com
--
With Regards,
Amit Kapila.
On Wed, Jun 23, 2021 at 8:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Wed, Jun 16, 2021 at 6:10 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
On Tuesday, June 15, 2021 10:01 PM Robert Haas <robertmhaas@gmail.com> wrote:
Now, maybe it could be done, and I think that's worth a little more thought. For example, perhaps whenever we invalidate a relation, we could also somehow send some new, special kind of invalidation for its parent saying, essentially, "hey, one of your children has changed in a way you might care about." But that's not good enough, because it only goes up one level. The grandparent would still be unaware that a change it potentially cares about has occurred someplace down in the partitioning hierarchy. That seems hard to patch up, again because of the locking rules. The child can know the OID of its parent without locking the parent, but it can't know the OID of its grandparent without locking its parent. Walking up the whole partitioning hierarchy might be an issue for a number of reasons, including possible deadlocks, and possible race conditions where we don't emit all of the right invalidations in the face of concurrent changes. So I don't quite see a way around this part of the problem, but I may well be missing something. In fact I hope I am missing something, because solving this problem would be really nice.
For partitions, I think postgres already has the logic for recursively finding the parent table [1]. Can we copy that logic to send several invalidation messages that invalidate the parent's and grandparent's... relcache entries if a partition's parallel safety changes? Although it means we need more locks (on its parents) when the parallel safety changes, that doesn't seem to be a frequent scenario and looks acceptable.
[1] In generate_partition_qual():
parentrelid = get_partition_parent(RelationGetRelid(rel), true);
parent = relation_open(parentrelid, AccessShareLock);
...
/* Add the parent's quals to the list (if any) */
if (parent->rd_rel->relispartition)
result = list_concat(generate_partition_qual(parent), my_qual);
As shown by me in another email [1], such a coding pattern can lead to deadlock. It is because in some DDL operations we walk the partition hierarchy from top to down, and if we walk from bottom to upwards, then that can lead to deadlock. I think this is a dangerous coding pattern and we shouldn't try to replicate it.
[1]: /messages/by-id/CAA4eK1LsFpjK5gL+0HEvoqB2DJVOi19vGAWbZBEx8fACOi5+_A@mail.gmail.com
--
With Regards,
Amit Kapila.
Hi,
How about walking the partition hierarchy bottom up, recording the parents but not taking the locks,
and then, once the top-most parent is found, taking the locks in reverse order (top down)?
Cheers
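The record-then-lock idea above is essentially the classic deadlock-avoidance technique of imposing one global lock order. As a rough, generic sketch (plain C with pthreads, not postgres lock-manager code; all names here are invented for illustration):

```c
#include <pthread.h>

#define MAX_DEPTH 8

/*
 * Walk upward from a leaf partition, recording the ancestor chain
 * WITHOUT taking any locks.  parent_of[rel] gives the parent's index,
 * or -1 at the root.  Returns the chain length; chain[0] is the leaf.
 */
static int
record_ancestors(const int *parent_of, int leaf, int *chain)
{
    int n = 0;

    for (int rel = leaf; rel != -1; rel = parent_of[rel])
        chain[n++] = rel;
    return n;
}

/*
 * Acquire the locks in reverse (top-down) order.  Because every
 * backend locks root-first, no two backends can each hold a lock the
 * other is waiting for, which is what rules out deadlock.
 * lock_order records the acquisition order (for demonstration).
 */
static void
lock_chain_top_down(pthread_mutex_t *locks, const int *chain, int n,
                    int *lock_order)
{
    for (int i = n - 1; i >= 0; i--)
    {
        pthread_mutex_lock(&locks[chain[i]]);
        lock_order[n - 1 - i] = chain[i];
    }
}
```

Note that Greg's follow-up question still applies: the chain recorded without locks can go stale before the locking pass, so a real implementation would have to re-verify the parentage after the locks are taken.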
On Thu, Jun 24, 2021 at 1:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:
How about walking the partition hierarchy bottom up, recording the parents but not taking the locks.
Once the top-most parent is found, take the locks in reverse order (top down)?
Is it safe to walk up the partition hierarchy (to record the parents
for the eventual locking in reverse order) without taking locks?
Regards,
Greg Nancarrow
Fujitsu Australia
On Thursday, June 24, 2021 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:
Hi,
How about walking the partition hierarchy bottom up, recording the parents but not taking the locks.
Once the top-most parent is found, take the locks in reverse order (top down)?
IMO, when we directly INSERT INTO a partition, postgres already locks the partition
as the target table before execution, which means we cannot postpone locking
the partition until we have found the parent table.
Best regards,
houzj
On Mon, Jun 21, 2021 at 4:40 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
To be honest, I didn't find a cheap way to invalidate a partitioned table's
parallel safety automatically.
I also don't see the feasibility for doing parallelism checks for
partitioned tables both because it is expensive due to
traversing/locking all the partitions and then the invalidations are
difficult to handle due to deadlock hazards as discussed above.
Let me try to summarize the discussion and see if we can come up with
any better ideas than the ones discussed so far, or whether we want to go
with one of them. I think we have broadly discussed two approaches:
(a) automatically decide whether parallelism can be enabled for
inserts, and (b) provide an option for the user to specify whether
inserts can be parallelized on a relation.
For the first approach (a), we have evaluated both the partitioned and
non-partitioned relation cases. For non-partitioned relations, we can
compute the parallel-safety of the relation during planning and save
it in the relation cache entry. This is normally safe because we have
a lock on the relation and any change to the relation should raise an
invalidation which will lead to re-computation of parallel-safety
information for a relation. Now, there are cases where the
parallel-safety of some trigger function or a function used in an index
expression can be changed by the user, which won't register an
invalidation for the relation. To handle such cases, we can register a
new kind of invalidation only when a function's parallel-safety
information is changed. And every backend in the same database then
needs to re-evaluate the parallel-safety of every relation for which
it has cached a value. For partitioned relations, a similar idea
won't work, for multiple reasons: (a) we need to traverse and
lock all the partitions to compute the parallel-safety of the root
relation, which could be very expensive; (b) whenever we invalidate a
particular partition, we need to invalidate its parent hierarchy as
well, and we can't traverse the parent hierarchy without taking locks on
the parent tables, which can lead to deadlock. The alternative could be
that for partitioned relations we rely on user-specified
information about parallel-safety (like approach (b) mentioned
above). We can additionally check the parallel safety
of partitions when we are trying to insert into a particular partition
and error out if we detect any parallel-unsafe clause and we are in
parallel-mode. So, in this case, we won't be completely relying on the
users. Users can either change the parallel safe option of the table
or remove/change the parallel-unsafe clause after an error.
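The invalidation scheme sketched above (a new message on any function parallel-safety change, after which every backend re-evaluates the relations whose safety it has cached) can be illustrated with a generation counter. This is only a toy sketch of the idea in plain C, not relcache code; every name in it is invented:

```c
#include <stdbool.h>

/*
 * Bumped whenever any function's parallel-safety marking changes
 * (standing in for the proposed new kind of invalidation message).
 */
static unsigned safety_generation = 1;

typedef struct RelSafetyCache
{
    bool     parallel_safe;   /* cached verdict */
    unsigned computed_at;     /* generation when computed; 0 = never */
} RelSafetyCache;

static int recompute_calls = 0;

/*
 * Stand-in for the real (expensive) walk over triggers, index
 * expressions, constraints, etc.
 */
static bool
compute_parallel_safety(int relid)
{
    recompute_calls++;
    return relid % 2 == 0;      /* arbitrary toy rule */
}

/* Lazily recompute when the cached value predates the last change. */
static bool
rel_parallel_safe(RelSafetyCache *cache, int relid)
{
    if (cache->computed_at != safety_generation)
    {
        cache->parallel_safe = compute_parallel_safety(relid);
        cache->computed_at = safety_generation;
    }
    return cache->parallel_safe;
}

/* Invalidation: every cached value becomes stale at once, in O(1). */
static void
function_safety_changed(void)
{
    safety_generation++;
}
```

Bumping the counter is cheap no matter how many relations have cached values; the recomputation cost is deferred to the next access of each relation.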
For the second approach (b), we can provide an option to the user to
specify whether inserts (or other DML) can be parallelized for a
relation. One of the ideas is to provide some options like below to
the user:
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparallel column as 'u',
'r', or 's', just like pg_proc's proparallel. The default is UNSAFE.
Additionally, provide a function pg_get_parallel_safety(oid) with
which users can determine whether it is safe to enable parallelism.
Of course, after the user has checked with that function, one can still add
some unsafe constraints to the table by altering it, but the function will
still be an aid in enabling parallelism on a relation.
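If the proposed relparallel column were stored as a single character like pg_proc's proparallel, decoding it might look like the following minimal sketch (the enum and function names are hypothetical; only the 'u'/'r'/'s' encoding comes from the proposal above):

```c
typedef enum ParallelDmlSafety
{
    PARALLEL_DML_UNSAFE,        /* 'u', the proposed default */
    PARALLEL_DML_RESTRICTED,    /* 'r' */
    PARALLEL_DML_SAFE           /* 's' */
} ParallelDmlSafety;

/*
 * Map a relparallel character to the enum; anything unrecognized is
 * treated conservatively as unsafe, matching the proposed default.
 */
static ParallelDmlSafety
parallel_dml_from_char(char c)
{
    switch (c)
    {
        case 's':
            return PARALLEL_DML_SAFE;
        case 'r':
            return PARALLEL_DML_RESTRICTED;
        default:
            return PARALLEL_DML_UNSAFE;
    }
}
```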
The first approach (a) has appeal because it would allow us to
automatically parallelize inserts in many cases but might have some
overhead in some cases due to processing of relcache entries after the
parallel-safety of the relation is changed. The second approach (b)
has an appeal because of its consistent behavior for partitioned and
non-partitioned relations.
Among the above options, I would personally prefer (b) mainly because
of the consistent handling for partition and non-partition table cases
but I am fine with approach (a) as well if that is what other people
feel is better.
Thoughts?
--
With Regards,
Amit Kapila.
On Mon, Jun 28, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Among the above options, I would personally prefer (b) mainly because
of the consistent handling for partition and non-partition table cases
but I am fine with approach (a) as well if that is what other people
feel is better.
Thoughts?
I personally think "(b) provide an option to the user to specify
whether inserts can be parallelized on a relation" is the preferable
option.
There seem to be too many issues with the alternative of trying to
determine the parallel-safety of a partitioned table automatically.
I think (b) is the simplest and most consistent approach, working the
same way for all table types, and without the overhead of (a).
Also, I don't think (b) is difficult for the user. At worst, the user
can use the provided utility-functions at development-time to verify
the intended declared table parallel-safety.
I can't really see some mixture of (a) and (b) being acceptable.
Regards,
Greg Nancarrow
Fujitsu Australia
On Wed, Jun 30, 2021 at 11:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
I personally think "(b) provide an option to the user to specify
whether inserts can be parallelized on a relation" is the preferable
option.
There seem to be too many issues with the alternative of trying to
determine the parallel-safety of a partitioned table automatically.
I think (b) is the simplest and most consistent approach, working the
same way for all table types, and without the overhead of (a).
Also, I don't think (b) is difficult for the user. At worst, the user
can use the provided utility-functions at development-time to verify
the intended declared table parallel-safety.
I can't really see some mixture of (a) and (b) being acceptable.
Yeah, I'd like to have it be automatic, but I don't have a clear idea
how to make that work nicely. It's possible somebody (Tom?) can
suggest something that I'm overlooking, though.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Fri, Jul 2, 2021 at 8:16 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Jun 30, 2021 at 11:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
I personally think "(b) provide an option to the user to specify
whether inserts can be parallelized on a relation" is the preferable
option.
There seem to be too many issues with the alternative of trying to
determine the parallel-safety of a partitioned table automatically.
I think (b) is the simplest and most consistent approach, working the
same way for all table types, and without the overhead of (a).
Also, I don't think (b) is difficult for the user. At worst, the user
can use the provided utility-functions at development-time to verify
the intended declared table parallel-safety.
I can't really see some mixture of (a) and (b) being acceptable.
Yeah, I'd like to have it be automatic, but I don't have a clear idea
how to make that work nicely. It's possible somebody (Tom?) can
suggest something that I'm overlooking, though.
In general, for the non-partitioned table, where we don't have much
overhead of checking the parallel safety and invalidation is also not
a big problem, I am tempted to provide an automatic parallel safety
check. This would enable parallelism for more cases wherever it is
suitable without user intervention. OTOH, I understand that providing
automatic checking might be very costly if the number of partitions is
large. Can't we provide some middle ground where the parallelism is enabled
by default for the normal table but for the partitioned table it is
disabled by default and the user has to set it safe for enabling
parallelism? I agree that such behavior might sound a bit hackish.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Sunday, July 4, 2021 1:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
On Fri, Jul 2, 2021 at 8:16 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Jun 30, 2021 at 11:46 PM Greg Nancarrow <gregn4422@gmail.com>
wrote:
I personally think "(b) provide an option to the user to specify
whether inserts can be parallelized on a relation" is the preferable
option.
There seem to be too many issues with the alternative of trying to
determine the parallel-safety of a partitioned table automatically.
I think (b) is the simplest and most consistent approach, working
the same way for all table types, and without the overhead of (a).
Also, I don't think (b) is difficult for the user. At worst, the
user can use the provided utility-functions at development-time to
verify the intended declared table parallel-safety.
I can't really see some mixture of (a) and (b) being acceptable.
Yeah, I'd like to have it be automatic, but I don't have a clear idea
how to make that work nicely. It's possible somebody (Tom?) can
suggest something that I'm overlooking, though.
In general, for the non-partitioned table, where we don't have much overhead
of checking the parallel safety and invalidation is also not a big problem, I am
tempted to provide an automatic parallel safety check. This would enable
parallelism for more cases wherever it is suitable without user intervention.
OTOH, I understand that providing automatic checking might be very costly if
the number of partitions is more. Can't we provide some mid-way where the
parallelism is enabled by default for the normal table but for the partitioned
table it is disabled by default and the user has to set it safe for enabling
parallelism? I agree that such behavior might sound a bit hackish.
About the invalidation for non-partitioned tables, I think it still has a
problem: when a function's parallel safety changes, it's expensive to judge
whether the function is related to an index, a trigger, or some other
table-related object by using pg_depend, because we can only make that
judgement in each backend when it accepts an invalidation message. If we don't
do that, it means that whenever a function's parallel safety changes, we
invalidate every relation's cached safety, which doesn't look very nice to me.
So, I personally think "(b) provide an option to the user to specify whether
inserts can be parallelized on a relation" is the preferable option.
Best regards,
houzj
On Sun, Jul 4, 2021 at 1:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
In general, for the non-partitioned table, where we don't have much
overhead of checking the parallel safety and invalidation is also not
a big problem, I am tempted to provide an automatic parallel safety
check. This would enable parallelism for more cases wherever it is
suitable without user intervention. OTOH, I understand that providing
automatic checking might be very costly if the number of partitions is
more. Can't we provide some mid-way where the parallelism is enabled
by default for the normal table but for the partitioned table it is
disabled by default and the user has to set it safe for enabling
parallelism? I agree that such behavior might sound a bit hackish.
I think that's basically the proposal that Amit and I have been discussing.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, Jul 21, 2021 at 12:30 AM Robert Haas <robertmhaas@gmail.com> wrote:
On Sun, Jul 4, 2021 at 1:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
In general, for the non-partitioned table, where we don't have much
overhead of checking the parallel safety and invalidation is also not
a big problem, I am tempted to provide an automatic parallel safety
check. This would enable parallelism for more cases wherever it is
suitable without user intervention. OTOH, I understand that providing
automatic checking might be very costly if the number of partitions is
more. Can't we provide some mid-way where the parallelism is enabled
by default for the normal table but for the partitioned table it is
disabled by default and the user has to set it safe for enabling
parallelism? I agree that such behavior might sound a bit hackish.
I think that's basically the proposal that Amit and I have been discussing.
I see here we have a mix of opinions from various people. Dilip seems
to be favoring the approach where we provide some option to the user
for partitioned tables and automatic behavior for non-partitioned
tables but he also seems to have mild concerns about this behavior.
OTOH, Greg and Hou-San seem to favor an approach where we can provide
an option to the user for both partitioned and non-partitioned tables.
I am also in favor of providing an option to the user for the sake of
consistency in behavior and not trying to introduce a special kind of
invalidation as it doesn't serve the purpose for partitioned tables.
Robert seems to be in favor of automatic behavior but it is not very
clear to me if he is fine with dealing differently for partitioned and
non-partitioned relations. Robert, can you please provide your opinion
on what you think is the best way to move forward here?
--
With Regards,
Amit Kapila.
On Wed, Jul 21, 2021 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
I see here we have a mix of opinions from various people. Dilip seems
to be favoring the approach where we provide some option to the user
for partitioned tables and automatic behavior for non-partitioned
tables but he also seems to have mild concerns about this behavior.
OTOH, Greg and Hou-San seem to favor an approach where we can provide
an option to the user for both partitioned and non-partitioned tables.
I am also in favor of providing an option to the user for the sake of
consistency in behavior and not trying to introduce a special kind of
invalidation as it doesn't serve the purpose for partitioned tables.
Robert seems to be in favor of automatic behavior but it is not very
clear to me if he is fine with dealing differently for partitioned and
non-partitioned relations. Robert, can you please provide your opinion
on what you think is the best way to move forward here?
I thought we had agreed on handling partitioned and unpartitioned
tables differently, but maybe I misunderstood the discussion.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Fri, Jul 23, 2021 at 6:55 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Jul 21, 2021 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
I see here we have a mix of opinions from various people. Dilip seems
to be favoring the approach where we provide some option to the user
for partitioned tables and automatic behavior for non-partitioned
tables but he also seems to have mild concerns about this behavior.
OTOH, Greg and Hou-San seem to favor an approach where we can provide
an option to the user for both partitioned and non-partitioned tables.
I am also in favor of providing an option to the user for the sake of
consistency in behavior and not trying to introduce a special kind of
invalidation as it doesn't serve the purpose for partitioned tables.
Robert seems to be in favor of automatic behavior but it is not very
clear to me if he is fine with dealing differently for partitioned and
non-partitioned relations. Robert, can you please provide your opinion
on what you think is the best way to move forward here?
I thought we had agreed on handling partitioned and unpartitioned
tables differently, but maybe I misunderstood the discussion.
I think, for the consistency argument, how about allowing users to
specify a parallel-safety option for both partitioned and
non-partitioned relations, but for non-partitioned relations, if the user
didn't specify it, compute it automatically? If the user has
specified a parallel-safety option for a non-partitioned relation, then we
would use that instead of computing the value ourselves.
Another reason for hesitating to do this automatically for non-partitioned
relations was the new invalidation which will invalidate the cached
parallel-safety for all relations in relcache for a particular
database. As mentioned by Hou-San [1], it seems we need to do this
whenever any function's parallel-safety is changed. OTOH, changing
parallel-safety for a function is probably not that often to matter in
practice which is why I think you seem to be fine with this idea. So,
I think, on that premise, it is okay to go ahead with different
handling for partitioned and non-partitioned relations here.
[1]: /messages/by-id/OS0PR01MB5716EC1D07ACCA24373C2557941B9@OS0PR01MB5716.jpnprd01.prod.outlook.com
--
With Regards,
Amit Kapila.
On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I think, for the consistency argument, how about allowing users to
specify a parallel-safety option for both partitioned and
non-partitioned relations, but for non-partitioned relations, if the user
didn't specify it, compute it automatically? If the user has
specified a parallel-safety option for a non-partitioned relation, then we
would use that instead of computing the value ourselves.
Having the option for both partitioned and non-partitioned tables
doesn't seem like the worst idea ever, but I am also not entirely sure
that I understand the point.
Another reason for hesitation to do automatically for non-partitioned
relations was the new invalidation which will invalidate the cached
parallel-safety for all relations in relcache for a particular
database. As mentioned by Hou-San [1], it seems we need to do this
whenever any function's parallel-safety is changed. OTOH, changing
parallel-safety for a function is probably not that often to matter in
practice which is why I think you seem to be fine with this idea.
Right. I think it should be quite rare, and invalidation events are
also not crazy expensive. We can test what the worst case is, but if
you have to sit there and run ALTER FUNCTION in a tight loop to see a
measurable performance impact, it's not a real problem. There may be a
code complexity argument against trying to figure it out
automatically, perhaps, but I don't think there's a big performance
issue.
What bothers me is that if this is something people have to set
manually then many people won't and will not get the benefit of the
feature. And some of them will also set it incorrectly and have
problems. So I am in favor of trying to determine it automatically
where possible, to make it easy for people. However, other people may
feel differently, and I'm not trying to say they're necessarily wrong.
I'm just telling you what I think.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I think for the consistency argument how about allowing users to
specify a parallel-safety option for both partitioned and
non-partitioned relations but for non-partitioned relations if users
didn't specify, it would be computed automatically? If the user has
specified parallel-safety option for non-partitioned relation then we
would consider that instead of computing the value by ourselves.
Having the option for both partitioned and non-partitioned tables
doesn't seem like the worst idea ever, but I am also not entirely sure
that I understand the point.
Consider the following ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..
OR
(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)
The point was what should we do if the user specifies the option for a
non-partitioned table. Do we just ignore it or give an error that this
is not a valid syntax/option when used with non-partitioned tables? I
find it slightly odd that this option works for partitioned tables but
gives an error for non-partitioned tables but maybe we can document
it.
With the above syntax, even if the user doesn't specify the
parallelism option for non-partitioned relations, we will determine it
automatically. Now, in some situations, users might want to force
parallelism even when we wouldn't have chosen it automatically. It is
possible that she might face an error due to some parallel-unsafe
function but OTOH, she might have ensured that it is safe to choose
parallelism in her particular case.
Another reason for hesitation to do automatically for non-partitioned
relations was the new invalidation which will invalidate the cached
parallel-safety for all relations in relcache for a particular
database. As mentioned by Hou-San [1], it seems we need to do this
whenever any function's parallel-safety is changed. OTOH, changing
parallel-safety for a function is probably not that often to matter in
practice which is why I think you seem to be fine with this idea.
Right. I think it should be quite rare, and invalidation events are
also not crazy expensive. We can test what the worst case is, but if
you have to sit there and run ALTER FUNCTION in a tight loop to see a
measurable performance impact, it's not a real problem. There may be a
code complexity argument against trying to figure it out
automatically, perhaps, but I don't think there's a big performance
issue.
True, there could be some code complexity but I think we can see once
the patch is ready for review.
What bothers me is that if this is something people have to set
manually then many people won't and will not get the benefit of the
feature. And some of them will also set it incorrectly and have
problems. So I am in favor of trying to determine it automatically
where possible, to make it easy for people. However, other people may
feel differently, and I'm not trying to say they're necessarily wrong.
I'm just telling you what I think.
Thanks for all your suggestions and feedback.
--
With Regards,
Amit Kapila.
On Tue, Jul 27, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:
Consider the following ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..
OR
(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)
The point was what should we do if the user specifies the option for a
non-partitioned table. Do we just ignore it or give an error that this
is not a valid syntax/option when used with non-partitioned tables? I
find it slightly odd that this option works for partitioned tables but
gives an error for non-partitioned tables but maybe we can document
it.
IMHO, for a non-partitioned table, we should by default allow the
parallel safety checking so that users don't have to set it for
individual tables. OTOH, I don't think that there is any point in
blocking the syntax for the non-partitioned table. So I think, for the
non-partitioned table, if the user hasn't set it we should do automatic
safety checking, and if the user has defined the safety explicitly then
we should respect that. And for the partitioned table, we will never
do the automatic safety checking and we should always respect what the
user has set.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Jul 27, 2021 at 11:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
On Tue, Jul 27, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:
Consider the following ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ..
OR
(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)
The point was what should we do if the user specifies the option for a
non-partitioned table. Do we just ignore it or give an error that this
is not a valid syntax/option when used with non-partitioned tables? I
find it slightly odd that this option works for partitioned tables but
gives an error for non-partitioned tables, but maybe we can document
it.
IMHO, for a non-partitioned table, we should by default allow the
parallel safety checking so that users don't have to set it for
individual tables. OTOH, I don't think that there is any point in
blocking the syntax for the non-partitioned table. So I think, for the
non-partitioned table, if the user hasn't set it we should do automatic
safety checking, and if the user has defined the safety explicitly then
we should respect that. And for the partitioned table, we will never
do the automatic safety checking and we should always respect what the
user has set.
This is exactly what I am saying. BTW, do you have any preference for
the syntax between (a) and (b)?
--
With Regards,
Amit Kapila.
On Tue, Jul 27, 2021 at 3:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
IMHO, for a non-partitioned table, we should by default allow the
parallel safety checking so that users don't have to set it for
individual tables. OTOH, I don't think that there is any point in
blocking the syntax for the non-partitioned table. So I think, for the
non-partitioned table, if the user hasn't set it we should do automatic
safety checking, and if the user has defined the safety explicitly then
we should respect that. And for the partitioned table, we will never
do the automatic safety checking and we should always respect what the
user has set.
Provided it is possible to distinguish between the default
parallel-safety (unsafe) and that default being explicitly specified
by the user, it should be OK.
In the case of performing the automatic parallel-safety checking and
the table using something that is parallel-unsafe, there will be a
performance degradation compared to the current code (hopefully only
small). That can be avoided by the user explicitly specifying that
it's parallel-unsafe.
Regards,
Greg Nancarrow
Fujitsu Australia
On Tue, Jul 27, 2021 at 4:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Tue, Jul 27, 2021 at 3:58 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
IMHO, for a non-partitioned table, we should by default allow the parallel safety checking so that users don't have to set it for individual tables. OTOH, I don't think there is any point in blocking the syntax for non-partitioned tables. So, for a non-partitioned table, if the user hasn't set the option we should do automatic safety checking, and if the user has defined the safety explicitly then we should respect that. And for a partitioned table, we will never do automatic safety checking and should always respect what the user has set.

Provided it is possible to distinguish between the default parallel-safety (unsafe) and that default being explicitly specified by the user, it should be OK.
Offhand, I don't see any problem with this. Do you have something
specific in mind?
In the case of performing the automatic parallel-safety checking and
the table using something that is parallel-unsafe, there will be a
performance degradation compared to the current code (hopefully only
small). That can be avoided by the user explicitly specifying that
it's parallel-unsafe.
True, but I guess this should be largely addressed by caching the
value of parallel safety at the relation level. Sure, there will be
some cost the first time we compute it but on consecutive accesses, it
should be quite cheap.
--
With Regards,
Amit Kapila.
On July 27, 2021 1:14 PM Amit Kapila <amit.kapila16@gmail.com>
On Mon, Jul 26, 2021 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Sat, Jul 24, 2021 at 5:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I think for the consistency argument, how about allowing users to specify a parallel-safety option for both partitioned and non-partitioned relations, but for non-partitioned relations, if users didn't specify it, it would be computed automatically? If the user has specified a parallel-safety option for a non-partitioned relation then we would consider that instead of computing the value by ourselves.

Having the option for both partitioned and non-partitioned tables doesn't seem like the worst idea ever, but I am also not entirely sure that I understand the point.

Consider the below ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...

OR
(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)
Personally, I think approach (a) might be better, since it's similar to ALTER FUNCTION ... PARALLEL XXX, which users might be more familiar with.

Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.
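To illustrate approach (a) with such a default value (this is only a sketch of the proposed syntax, nothing here is implemented, and keyword names like AUTO are placeholders):

    CREATE TABLE t1 (c1 int) PARALLEL DML SAFE;   -- user marks the table explicitly
    ALTER TABLE t1 PARALLEL DML UNSAFE;           -- user overrides the marking
    ALTER TABLE t1 PARALLEL DML AUTO;             -- back to automatic safety checking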
Best regards,
Houzj
On Wed, Jul 28, 2021 at 12:52 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
Consider the below ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...

OR

(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)

Personally, I think approach (a) might be better, since it's similar to ALTER FUNCTION ... PARALLEL XXX, which users might be more familiar with.
I think so too.
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.
Yes, I was thinking something similar when I said "Provided it is
possible to distinguish between the default parallel-safety (unsafe)
and that default being explicitly specified by the user". If we don't
have a new default value, then we need to distinguish these cases, but
I'm not sure Postgres does something similar elsewhere (for example,
for function parallel-safety, it's not currently recorded whether
parallel-safety=unsafe is because of the default or because the user explicitly set it to the default value).
Opinions?
Regards,
Greg Nancarrow
Fujitsu Australia
Note: Changing the subject as I felt the topic has diverted from the
original reported case and also it might help others to pay attention.
On Wed, Jul 28, 2021 at 8:22 AM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
Consider the below ways to allow the user to specify the parallel-safety option:
(a)
CREATE TABLE table_name (...) PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...
ALTER TABLE table_name PARALLEL DML { UNSAFE | RESTRICTED | SAFE } ...

OR

(b)
CREATE TABLE table_name (...) WITH (parallel_dml_enabled = true)
ALTER TABLE table_name (...) WITH (parallel_dml_enabled = true)

Personally, I think approach (a) might be better, since it's similar to ALTER FUNCTION ... PARALLEL XXX, which users might be more familiar with.
Okay, and I think for (b) true/false won't be sufficient because one
might want to specify restricted.
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.
Hmm, but auto won't work for partitioned tables, right? If so, that
might appear like an inconsistency to the user and we need to document
the same. Let me summarize the discussion so far in this thread so
that it is helpful to others.
We would like to parallelize INSERT SELECT (first step INSERT +
parallel SELECT and then Parallel (INSERT + SELECT)) and for that, we
have explored a couple of ways. The first approach is to automatically
detect if it is safe to parallelize insert and then do it without user
intervention. To detect automatically, we need to determine the
parallel-safety of various expressions (like default column
expressions, check constraints, index expressions, etc.) at the
planning time which can be costly but we can avoid most of the cost if
we cache the parallel safety for the relation. So, the cost needs to
be paid just once. Now, we can't cache this for partitioned relations
because it can be very costly (as we need to lock all the partitions)
and has deadlock risks (while processing invalidation), this has been
explained in email [1].
Now, as we can't think of a nice way to determine parallel safety
automatically for partitioned relations, we thought of providing an
option to the user. The next thing to decide here is that if we are
providing an option to the user in one of the ways as mentioned above
in the email, what should we do if the user uses that option for
non-partitioned relations, shall we just ignore it or give an error
that this is not a valid syntax/option? The one idea which Dilip and I
are advocating is to respect the user's input for non-partitioned
relations and if it is not given then compute the parallel safety and
cache it.
To facilitate users in providing a parallel-safety option, we are thinking of providing a utility function "pg_get_table_parallel_dml_safety(regclass)" that returns records of (objid, classid, parallel_safety) for all parallel unsafe/restricted table-related objects from which the table's parallel DML safety is determined. This will allow users to identify unsafe objects; if required, the user can change the parallel safety of the relevant functions and then use the parallel-safety option for the table.
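For example, a workflow using such a function might look like the below (a sketch only, assuming the proposed function plus a hypothetical table and function; the output columns would be objid, classid and parallel_safety as described above):

    -- list the parallel unsafe/restricted objects the table depends on
    SELECT * FROM pg_get_table_parallel_dml_safety('mytable'::regclass);
    -- after fixing the offending function's marking ...
    ALTER FUNCTION my_default_func() PARALLEL SAFE;
    -- ... the user can declare the table's parallel DML safety
    ALTER TABLE mytable PARALLEL DML SAFE;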
Thoughts?
Note - This topic has been discussed in another thread as well [2] but
as many of the key technical points have been discussed here I thought
it is better to continue here.
[1]: /messages/by-id/CAA4eK1Jwz8xGss4b0-33eyX0i5W_1CnqT16DjB9snVC--DoOsQ@mail.gmail.com
[2]: /messages/by-id/TYAPR01MB29905A9AB82CC8BA50AB0F80FE709@TYAPR01MB2990.jpnprd01.prod.outlook.com
--
With Regards,
Amit Kapila.
On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.

Hmm, but auto won't work for partitioned tables, right? If so, that might appear like an inconsistency to the user and we need to document the same. Let me summarize the discussion so far in this thread so that it is helpful to others.
To avoid that inconsistency, UNSAFE could be the default for
partitioned tables (and we would disallow setting AUTO for these).
So then AUTO is the default for non-partitioned tables only.
Regards,
Greg Nancarrow
Fujitsu Australia
On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.

Hmm, but auto won't work for partitioned tables, right? If so, that might appear like an inconsistency to the user and we need to document the same. Let me summarize the discussion so far in this thread so that it is helpful to others.

To avoid that inconsistency, UNSAFE could be the default for partitioned tables (and we would disallow setting AUTO for these).
So then AUTO is the default for non-partitioned tables only.
I think this approach is reasonable, +1.
Best regards,
houzj
On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.

Hmm, but auto won't work for partitioned tables, right? If so, that might appear like an inconsistency to the user and we need to document the same. Let me summarize the discussion so far in this thread so that it is helpful to others.

To avoid that inconsistency, UNSAFE could be the default for partitioned tables (and we would disallow setting AUTO for these).
So then AUTO is the default for non-partitioned tables only.

I think this approach is reasonable, +1.
I see the need to change back to the default via ALTER TABLE, but I am not sure if AUTO is the most appropriate way to handle that. How about using DEFAULT itself, as we do in the case of REPLICA IDENTITY? So, if users want to alter the parallel-safety value back to the default, they just need to say PARALLEL DML DEFAULT. The default would mean automatic behavior for non-partitioned relations and ignoring parallelism for partitioned tables.
--
With Regards,
Amit Kapila.
On Mon, Aug 2, 2021 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:
On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.

Hmm, but auto won't work for partitioned tables, right? If so, that might appear like an inconsistency to the user and we need to document the same. Let me summarize the discussion so far in this thread so that it is helpful to others.

To avoid that inconsistency, UNSAFE could be the default for partitioned tables (and we would disallow setting AUTO for these).
So then AUTO is the default for non-partitioned tables only.

I think this approach is reasonable, +1.
I see the need to change back to the default via ALTER TABLE, but I am not sure if AUTO is the most appropriate way to handle that. How about using DEFAULT itself, as we do in the case of REPLICA IDENTITY? So, if users want to alter the parallel-safety value back to the default, they just need to say PARALLEL DML DEFAULT. The default would mean automatic behavior for non-partitioned relations and ignoring parallelism for partitioned tables.
Hmm, I'm not so sure I'm sold on that.
I personally think "DEFAULT" here is vague, and users then need to
know what DEFAULT equates to, based on the type of table (partitioned
or non-partitioned table).
Also, then there's two ways to set the actual "default" DML
parallel-safety for partitioned tables: DEFAULT or UNSAFE.
At least "AUTO" is a meaningful default option name for
non-partitioned tables - "automatic" parallel-safety checking, and the
fact that it isn't the default (and can't be set) for partitioned
tables highlights the difference in the way being proposed to treat
them (i.e. use automatic checking only for non-partitioned tables).
I'd be interested to hear what others think.
I think a viable alternative would be to record whether an explicit
DML parallel-safety has been specified, and if not, apply default
behavior (i.e. by default use automatic checking for non-partitioned
tables and treat partitioned tables as UNSAFE). I'm just not sure
whether this kind of distinction (explicit vs implicit default) has
been used before in Postgres options.
Regards,
Greg Nancarrow
Fujitsu Australia
On August 2, 2021 2:04 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Mon, Aug 2, 2021 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Fri, Jul 30, 2021 at 6:53 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:
On Friday, July 30, 2021 2:52 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
On Fri, Jul 30, 2021 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
Besides, I think we need a new default value for parallel DML safety, maybe 'auto' or 'null' (different from safe/restricted/unsafe). Because a user is likely to alter the safety back to the default value to get the automatic safety check, an independent default value can make it clearer.

Hmm, but auto won't work for partitioned tables, right? If so, that might appear like an inconsistency to the user and we need to document the same. Let me summarize the discussion so far in this thread so that it is helpful to others.

To avoid that inconsistency, UNSAFE could be the default for partitioned tables (and we would disallow setting AUTO for these).
So then AUTO is the default for non-partitioned tables only.

I think this approach is reasonable, +1.

I see the need to change back to the default via ALTER TABLE, but I am not sure if AUTO is the most appropriate way to handle that. How about using DEFAULT itself, as we do in the case of REPLICA IDENTITY? So, if users want to alter the parallel-safety value back to the default, they just need to say PARALLEL DML DEFAULT. The default would mean automatic behavior for non-partitioned relations and ignoring parallelism for partitioned tables.

Hmm, I'm not so sure I'm sold on that.
I personally think "DEFAULT" here is vague, and users then need to know what
DEFAULT equates to, based on the type of table (partitioned or non-partitioned
table).
Also, then there's two ways to set the actual "default" DML parallel-safety for
partitioned tables: DEFAULT or UNSAFE.
At least "AUTO" is a meaningful default option name for non-partitioned tables
- "automatic" parallel-safety checking, and the fact that it isn't the default (and
can't be set) for partitioned tables highlights the difference in the way being
proposed to treat them (i.e. use automatic checking only for non-partitioned
tables).
I'd be interested to hear what others think.
I think a viable alternative would be to record whether an explicit DML
parallel-safety has been specified, and if not, apply default behavior (i.e. by
default use automatic checking for non-partitioned tables and treat partitioned
tables as UNSAFE). I'm just not sure whether this kind of distinction (explicit vs
implicit default) has been used before in Postgres options.
I think both approaches are fine, but using "DEFAULT" might have a disadvantage: if we somehow support automatic safety checks for partitioned tables in the future, then the meaning of "DEFAULT" for partitioned tables will change from UNSAFE to automatic checking. It could also put some burden on users to modify their SQL scripts.
Best regards,
houzj
Based on the discussion here, I implemented the auto-safety-check feature. Since most of the technical discussion happened here, I attached the patches in this thread.
The patches allow users to specify a parallel-safety option for both partitioned and non-partitioned relations; for non-partitioned relations, if users didn't specify it, it is computed automatically. If the user has specified a parallel-safety option then we consider that instead of computing the value by ourselves. But for a partitioned table, if users didn't specify the parallel DML safety, it will be treated as unsafe.
For non-partitioned relations, after computing the parallel-safety of relation
during the planning, we save it in the relation cache entry and invalidate the
cached parallel-safety for all relations in relcache for a particular database
whenever any function's parallel-safety is changed.
To make it possible for users to alter the safety back to an unspecified value to get the automatic safety check, a new default option (temporarily named 'DEFAULT', in addition to safe/unsafe/restricted) is added for parallel DML safety.
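As a sketch of that usage (proposed syntax only; the keyword name is tentative):

    -- revert to unspecified, re-enabling automatic safety checking
    ALTER TABLE mytable PARALLEL DML DEFAULT;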
To facilitate users in providing a parallel-safety option, provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that returns records of (objid, classid, parallel_safety) for all parallel unsafe/restricted table-related objects from which the table's parallel DML safety is determined. This will allow users to identify unsafe objects; if required, the user can change the parallel safety of the relevant functions and then use the parallel-safety option for the table.
Best regards,
houzj
Attachments:
v15-0004-cache-parallel-dml-safety.patchapplication/octet-stream; name=v15-0004-cache-parallel-dml-safety.patchDownload
From a97b1bf327ad665c9f71c43063a3a9e6d364716d Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Fri, 30 Jul 2021 10:04:32 +0800
Subject: [PATCH] cache-parallel-dml-safety
For a non-partitioned table, if pg_class.relparalleldml is not set, check the
safety automatically and save the parallel DML safety in the relcache.
Whenever any function's parallel safety is changed, invalidate the cached
parallel-safety for all relations in the relcache for a particular database.
---
src/backend/catalog/pg_proc.c | 13 +++++
src/backend/commands/functioncmds.c | 18 ++++++-
src/backend/optimizer/util/clauses.c | 78 ++++++++++++++++++++++------
src/backend/utils/cache/inval.c | 53 +++++++++++++++++++
src/backend/utils/cache/relcache.c | 19 +++++++
src/include/storage/sinval.h | 8 +++
src/include/utils/inval.h | 2 +
src/include/utils/rel.h | 1 +
src/include/utils/relcache.h | 2 +
9 files changed, 176 insertions(+), 18 deletions(-)
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1454d2fb67..04585dc3ef 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -39,6 +39,7 @@
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/regproc.h"
#include "utils/rel.h"
@@ -367,6 +368,9 @@ ProcedureCreate(const char *procedureName,
Datum proargnames;
bool isnull;
const char *dropcmd;
+ char old_proparallel;
+
+ old_proparallel = oldproc->proparallel;
if (!replace)
ereport(ERROR,
@@ -559,6 +563,15 @@ ProcedureCreate(const char *procedureName,
tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
CatalogTupleUpdate(rel, &tup->t_self, tup);
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (old_proparallel != parallel)
+ CacheInvalidateParallelDML();
+
ReleaseSysCache(oldtup);
is_update = true;
}
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 79d875ab10..57d9ca52e5 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -70,6 +70,7 @@
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@@ -1504,7 +1505,22 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
repl_val, repl_null, repl_repl);
}
if (parallel_item)
- procForm->proparallel = interpret_func_parallel(parallel_item);
+ {
+ char proparallel;
+
+ proparallel = interpret_func_parallel(parallel_item);
+
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (proparallel != procForm->proparallel)
+ CacheInvalidateParallelDML();
+
+ procForm->proparallel = proparallel;
+ }
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 749cb0dacd..f65c2fc961 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -187,7 +187,7 @@ static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
-
+static char max_parallel_dml_hazard(Query *parse, max_parallel_hazard_context *context);
/*****************************************************************************
* Aggregate-function clause manipulation
@@ -654,7 +654,6 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
- bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
@@ -664,28 +663,73 @@ max_parallel_hazard(Query *parse)
context.objects = NIL;
context.partition_directory = NULL;
- max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+ if (!max_parallel_hazard_walker((Node *) parse, &context))
+ (void) max_parallel_dml_hazard(parse, &context);
+
+ return context.max_hazard;
+}
+
+/* Check the safety of parallel data modification */
+static char
+max_parallel_dml_hazard(Query *parse,
+ max_parallel_hazard_context *context)
+{
+ RangeTblEntry *rte;
+ Relation target_rel;
+ char hazard;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return context->max_hazard;
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+ target_rel = table_open(rte->relid, NoLock);
+
+ /*
+ * If the user set a specific parallel dml safety (safe/restricted/unsafe),
+ * we respect it. If not set, for a non-partitioned table check the safety
+ * automatically; for a partitioned table, consider it unsafe.
+ */
+ hazard = target_rel->rd_rel->relparalleldml;
+ if (target_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+ hazard == PROPARALLEL_DEFAULT)
+ hazard = PROPARALLEL_UNSAFE;
+
+ if (hazard != PROPARALLEL_DEFAULT)
+ (void) max_parallel_hazard_test(hazard, context);
- if (!max_hazard_found &&
- IsModifySupportedInParallelMode(parse->commandType))
+ /* Do parallel safety check for the target relation */
+ else if (!target_rel->rd_paralleldml)
{
- RangeTblEntry *rte;
- Relation target_rel;
+ bool max_hazard_found;
+ char pre_max_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
- rte = rt_fetch(parse->resultRelation, parse->rtable);
+ max_hazard_found = target_rel_parallel_hazard_recurse(target_rel,
+ context,
+ false,
+ false);
- /*
- * The target table is already locked by the caller (this is done in the
- * parse/analyze phase), and remains locked until end-of-transaction.
- */
- target_rel = table_open(rte->relid, NoLock);
+ /* Cache the parallel dml safety of this relation */
+ target_rel->rd_paralleldml = context->max_hazard;
- (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
- &context);
- table_close(target_rel, NoLock);
+ if (!max_hazard_found)
+ (void) max_parallel_hazard_test(pre_max_hazard, context);
}
- return context.max_hazard;
+ /*
+ * If we already cached the parallel dml safety of this relation we don't
+ * need to check it again.
+ */
+ else
+ (void) max_parallel_hazard_test(target_rel->rd_paralleldml, context);
+
+ table_close(target_rel, NoLock);
+
+ return context->max_hazard;
}
/*
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 9c79775725..9459b3c204 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -385,6 +385,27 @@ AddCatalogInvalidationMessage(InvalidationListHeader *hdr,
AddInvalidationMessage(&hdr->cclist, &msg);
}
+/*
+ * Add a Parallel dml inval entry
+ */
+static void
+AddParallelDMLInvalidationMessage(InvalidationListHeader *hdr)
+{
+ SharedInvalidationMessage msg;
+
+ /* Don't add a duplicate item. */
+ ProcessMessageList(hdr->rclist,
+ if (msg->rc.id == SHAREDINVALPARALLELDML_ID)
+ return);
+
+ /* OK, add the item */
+ msg.pd.id = SHAREDINVALPARALLELDML_ID;
+ /* check AddCatcacheInvalidationMessage() for an explanation */
+ VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
+
+ AddInvalidationMessage(&hdr->rclist, &msg);
+}
+
/*
* Add a relcache inval entry
*/
@@ -539,6 +560,21 @@ RegisterRelcacheInvalidation(Oid dbId, Oid relId)
transInvalInfo->RelcacheInitFileInval = true;
}
+/*
+ * RegisterParallelDMLInvalidation
+ *
+ * As above, but register an invalidation event for the parallel dml flag in all relcache entries.
+ */
+static void
+RegisterParallelDMLInvalidation()
+{
+ AddParallelDMLInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs);
+
+ (void) GetCurrentCommandId(true);
+
+ transInvalInfo->RelcacheInitFileInval = true;
+}
+
/*
* RegisterSnapshotInvalidation
*
@@ -631,6 +667,11 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
else if (msg->sn.dbId == MyDatabaseId)
InvalidateCatalogSnapshot();
}
+ else if (msg->id == SHAREDINVALPARALLELDML_ID)
+ {
+ /* Invalidate the parallel dml flag in all relcache entries */
+ ParallelDMLInvalidate();
+ }
else
elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
@@ -1307,6 +1348,18 @@ CacheInvalidateRelcacheAll(void)
RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
}
+/*
+ * CacheInvalidateParallelDML
+ * Register invalidation of the whole relcache at the end of command.
+ */
+void
+CacheInvalidateParallelDML(void)
+{
+ PrepareInvalidationState();
+
+ RegisterParallelDMLInvalidation();
+}
+
/*
* CacheInvalidateRelcacheByTuple
* As above, but relation is identified by passing its pg_class tuple.
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 3f38a69687..e013c4d0dc 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2934,6 +2934,25 @@ RelationCacheInvalidate(void)
list_free(rebuildList);
}
+/*
+ * ParallelDMLInvalidate
+ * Invalidate all the relcache's parallel dml flag.
+ */
+void
+ParallelDMLInvalidate(void)
+{
+ HASH_SEQ_STATUS status;
+ RelIdCacheEnt *idhentry;
+ Relation relation;
+
+ hash_seq_init(&status, RelationIdCache);
+
+ while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
+ {
+ relation = idhentry->reldesc;
+ relation->rd_paralleldml = 0;
+ }
+}
/*
* RelationCloseSmgrByOid - close a relcache entry's smgr link
*
diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h
index f03dc23b14..9859a3bea0 100644
--- a/src/include/storage/sinval.h
+++ b/src/include/storage/sinval.h
@@ -110,6 +110,13 @@ typedef struct
Oid relId; /* relation ID */
} SharedInvalSnapshotMsg;
+#define SHAREDINVALPARALLELDML_ID (-6)
+
+typedef struct
+{
+ int8 id; /* type field --- must be first */
+} SharedInvalParallelDMLMsg;
+
typedef union
{
int8 id; /* type field --- must be first */
@@ -119,6 +126,7 @@ typedef union
SharedInvalSmgrMsg sm;
SharedInvalRelmapMsg rm;
SharedInvalSnapshotMsg sn;
+ SharedInvalParallelDMLMsg pd;
} SharedInvalidationMessage;
diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h
index 770672890b..f1ce1462c1 100644
--- a/src/include/utils/inval.h
+++ b/src/include/utils/inval.h
@@ -64,4 +64,6 @@ extern void CallSyscacheCallbacks(int cacheid, uint32 hashvalue);
extern void InvalidateSystemCaches(void);
extern void LogLogicalInvalidations(void);
+
+extern void CacheInvalidateParallelDML(void);
#endif /* INVAL_H */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b4faa1c123..52574e9d40 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -63,6 +63,7 @@ typedef struct RelationData
bool rd_indexvalid; /* is rd_indexlist valid? (also rd_pkindex and
* rd_replidindex) */
bool rd_statvalid; /* is rd_statlist valid? */
+ char rd_paralleldml; /* parallel dml safety */
/*----------
* rd_createSubid is the ID of the highest subtransaction the rel has
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5ea225ac2d..5813aa50a0 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -128,6 +128,8 @@ extern void RelationCacheInvalidate(void);
extern void RelationCloseSmgrByOid(Oid relationId);
+extern void ParallelDMLInvalidate(void);
+
#ifdef USE_ASSERT_CHECKING
extern void AssertPendingSyncs_RelationCache(void);
#else
--
2.27.0
v15-0003-get-parallel-safety-functions.patchapplication/octet-stream; name=v15-0003-get-parallel-safety-functions.patchDownload
From d93281fdbeef47af1b16bf6803d80c18e592fc13 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Fri, 30 Jul 2021 11:50:55 +0800
Subject: [PATCH] get-parallel-safety-functions
Provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that
returns records of (objid, classid, parallel_safety) for all
parallel unsafe/restricted table-related objects from which the
table's parallel DML safety is determined. The user can use this
information during development in order to accurately declare a
table's parallel DML safety, or to identify any problematic objects
if parallel DML fails or behaves unexpectedly.
When the use of an index-related parallel unsafe/restricted function
is detected, both the function oid and the index oid are returned.
Provide a utility function "pg_get_table_max_parallel_dml_hazard(regclass)" that
returns the worst parallel DML safety hazard that can be found in the
given relation. Users can use this function to do a quick check without
caring about specific parallel-related objects.
---
src/backend/optimizer/util/clauses.c | 658 ++++++++++++++++++++++++++++++++++-
src/backend/utils/adt/misc.c | 94 +++++
src/backend/utils/cache/typcache.c | 17 +
src/include/catalog/pg_proc.dat | 22 +-
src/include/optimizer/clauses.h | 14 +
src/include/utils/typcache.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
7 files changed, 803 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index ac0f243..749cb0d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -19,15 +19,20 @@
#include "postgres.h"
+#include "access/amapi.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_language.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_proc.h"
+#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
+#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
@@ -46,6 +51,8 @@
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
#include "parser/parsetree.h"
+#include "partitioning/partdesc.h"
+#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -54,6 +61,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -92,6 +100,9 @@ typedef struct
char max_hazard; /* worst proparallel hazard found so far */
char max_interesting; /* worst proparallel hazard of interest */
List *safe_param_ids; /* PARAM_EXEC Param IDs to treat as safe */
+ bool check_all; /* whether to collect all the unsafe/restricted objects */
+ List *objects; /* parallel unsafe/restricted objects */
+ PartitionDirectory partition_directory; /* partition descriptors */
} max_parallel_hazard_context;
static bool contain_agg_clause_walker(Node *node, void *context);
@@ -102,6 +113,25 @@ static bool contain_volatile_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
static bool max_parallel_hazard_walker(Node *node,
max_parallel_hazard_context *context);
+static bool target_rel_parallel_hazard_recurse(Relation relation,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default);
+static bool target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context);
+static bool target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context);
+static bool target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition);
+static bool target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
static bool contain_nonstrict_functions_walker(Node *node, void *context);
static bool contain_exec_param_walker(Node *node, List *param_ids);
static bool contain_context_dependent_node(Node *clause);
@@ -156,6 +186,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
+static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
/*****************************************************************************
@@ -629,6 +660,9 @@ max_parallel_hazard(Query *parse)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
@@ -681,6 +715,9 @@ is_parallel_safe(PlannerInfo *root, Node *node)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_RESTRICTED;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
/*
* The params that refer to the same or parent query level are considered
@@ -712,7 +749,7 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
break;
case PROPARALLEL_RESTRICTED:
/* increase max_hazard to RESTRICTED */
- Assert(context->max_hazard != PROPARALLEL_UNSAFE);
+ Assert(context->check_all || context->max_hazard != PROPARALLEL_UNSAFE);
context->max_hazard = proparallel;
/* done if we are not expecting any unsafe functions */
if (context->max_interesting == proparallel)
@@ -729,6 +766,82 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
return false;
}
+/*
+ * make_safety_object
+ *
+ * Creates a safety_object, given object id, class id and parallel safety.
+ */
+static safety_object *
+make_safety_object(Oid objid, Oid classid, char proparallel)
+{
+ safety_object *object = (safety_object *) palloc(sizeof(safety_object));
+
+ object->objid = objid;
+ object->classid = classid;
+ object->proparallel = proparallel;
+
+ return object;
+}
+
+/* check_functions_in_node callback */
+static bool
+parallel_hazard_checker(Oid func_id, void *context)
+{
+ char proparallel;
+ max_parallel_hazard_context *cont = (max_parallel_hazard_context *) context;
+
+ proparallel = func_parallel(func_id);
+
+ if (max_parallel_hazard_test(proparallel, cont) && !cont->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object = make_safety_object(func_id,
+ ProcedureRelationId,
+ proparallel);
+ cont->objects = lappend(cont->objects, object);
+ }
+
+ return false;
+}
+
+/*
+ * parallel_hazard_walker
+ *
+ * Recursively search an expression tree (from a partition key, index,
+ * constraint, or column default expression) for PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
+{
+ if (node == NULL)
+ return false;
+
+ /* Check for hazardous functions in node itself */
+ if (check_functions_in_node(node, parallel_hazard_checker,
+ context))
+ return true;
+
+ if (IsA(node, CoerceToDomain))
+ {
+ CoerceToDomain *domain = (CoerceToDomain *) node;
+
+ if (target_rel_domain_parallel_hazard(domain->resulttype, context))
+ return true;
+ }
+
+ /* Recurse to check arguments */
+ return expression_tree_walker(node,
+ parallel_hazard_walker,
+ context);
+}
+
/* check_functions_in_node callback */
static bool
max_parallel_hazard_checker(Oid func_id, void *context)
@@ -885,6 +998,549 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * target_rel_parallel_hazard
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+List*
+target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting, char *max_hazard)
+{
+ max_parallel_hazard_context context;
+ Relation targetRel;
+
+ context.check_all = findall;
+ context.objects = NIL;
+ context.max_hazard = PROPARALLEL_SAFE;
+ context.max_interesting = max_interesting;
+ context.safe_param_ids = NIL;
+ context.partition_directory = NULL;
+
+ targetRel = table_open(relOid, AccessShareLock);
+
+ (void) target_rel_parallel_hazard_recurse(targetRel, &context, false, true);
+ if (context.partition_directory)
+ DestroyPartitionDirectory(context.partition_directory);
+
+ table_close(targetRel, AccessShareLock);
+
+ *max_hazard = context.max_hazard;
+
+ return context.objects;
+}
+
+/*
+ * target_rel_parallel_hazard_recurse
+ *
+ * Recursively search all table-related objects for PARALLEL UNSAFE/RESTRICTED
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_parallel_hazard_recurse(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default)
+{
+ TupleDesc tupdesc;
+ int attnum;
+
+ /*
+ * We can't support table modification in a parallel worker if it's a
+ * foreign table/partition (no FDW API for supporting parallel access) or
+ * a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ {
+ if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context) &&
+ !context->check_all)
+ return true;
+ else
+ {
+ safety_object *object = make_safety_object(rel->rd_rel->oid,
+ RelationRelationId,
+ PROPARALLEL_RESTRICTED);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /*
+ * If a partitioned table, check that each partition is safe for
+ * modification in parallel-mode.
+ */
+ if (target_rel_partitions_parallel_hazard(rel, context, is_partition))
+ return true;
+
+ /*
+ * If there are any index expressions or index predicate, check that they
+ * are parallel-mode safe.
+ */
+ if (target_rel_index_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * If any triggers exist, check that they are parallel-safe.
+ */
+ if (target_rel_trigger_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * Column default expressions are only applicable to INSERT and UPDATE.
+ * Note that even though column defaults may be specified separately for
+ * each partition in a partitioned table, a partition's default value is
+ * not applied when inserting a tuple through a partitioned table.
+ */
+
+ tupdesc = RelationGetDescr(rel);
+ for (attnum = 0; attnum < tupdesc->natts; attnum++)
+ {
+ Form_pg_attribute att = TupleDescAttr(tupdesc, attnum);
+
+ /* We don't need info for dropped or generated attributes */
+ if (att->attisdropped || att->attgenerated)
+ continue;
+
+ if (att->atthasdef && check_column_default)
+ {
+ Node *defaultexpr;
+
+ defaultexpr = build_column_default(rel, attnum + 1);
+ if (parallel_hazard_walker((Node *) defaultexpr, context))
+ return true;
+ }
+
+ /*
+ * If the column is of a DOMAIN type, determine whether that
+ * domain has any CHECK expressions that are not parallel-mode
+ * safe.
+ */
+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)
+ {
+ if (target_rel_domain_parallel_hazard(att->atttypid, context))
+ return true;
+ }
+ }
+
+ /*
+ * CHECK constraints are only applicable to INSERT and UPDATE. If any
+ * CHECK constraints exist, determine if they are parallel-safe.
+ */
+ if (target_rel_chk_constr_parallel_hazard(rel, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_trigger_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified relation's trigger data.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ char proparallel;
+
+ if (rel->trigdesc == NULL)
+ return false;
+
+ /*
+ * Care is needed here to avoid using the same relcache TriggerDesc field
+ * across other cache accesses, because relcache doesn't guarantee that it
+ * won't move.
+ */
+ for (i = 0; i < rel->trigdesc->numtriggers; i++)
+ {
+ Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;
+ Oid tgoid = rel->trigdesc->triggers[i].tgoid;
+
+ proparallel = func_parallel(tgfoid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object,
+ *parent_object;
+
+ object = make_safety_object(tgfoid, ProcedureRelationId,
+ proparallel);
+ parent_object = make_safety_object(tgoid, TriggerRelationId,
+ proparallel);
+
+ context->objects = lappend(context->objects, object);
+ context->objects = lappend(context->objects, parent_object);
+ }
+ }
+
+ return false;
+}
+
+/*
+ * index_expr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the input index expression and index predicate.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ Form_pg_index indexStruct;
+ ListCell *index_expr_item;
+
+ indexStruct = index_rel->rd_index;
+ index_expr_item = list_head(ii_Expressions);
+
+ /* Check parallel-safety of index expression */
+ for (i = 0; i < indexStruct->indnatts; i++)
+ {
+ int keycol = indexStruct->indkey.values[i];
+
+ if (keycol == 0)
+ {
+ /* Found an index expression */
+ Node *index_expr;
+
+ Assert(index_expr_item != NULL);
+ if (index_expr_item == NULL) /* shouldn't happen */
+ elog(ERROR, "too few entries in indexprs list");
+
+ index_expr = (Node *) lfirst(index_expr_item);
+
+ if (parallel_hazard_walker(index_expr, context))
+ return true;
+
+ index_expr_item = lnext(ii_Expressions, index_expr_item);
+ }
+ }
+
+ /* Check parallel-safety of index predicate */
+ if (parallel_hazard_walker((Node *) ii_Predicate, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_index_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any existing index expressions or index predicate of a specified
+ * relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ List *index_oid_list;
+ ListCell *lc;
+ LOCKMODE lockmode = AccessShareLock;
+ bool max_hazard_found;
+
+ index_oid_list = RelationGetIndexList(rel);
+ foreach(lc, index_oid_list)
+ {
+ Relation index_rel;
+ List *ii_Expressions;
+ List *ii_Predicate;
+ List *temp_objects;
+ char temp_hazard;
+ Oid index_oid = lfirst_oid(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ index_rel = index_open(index_oid, lockmode);
+
+ /* Check index expression */
+ ii_Expressions = RelationGetIndexExpressions(index_rel);
+ ii_Predicate = RelationGetIndexPredicate(index_rel);
+
+ max_hazard_found = index_expr_parallel_hazard(index_rel,
+ ii_Expressions,
+ ii_Predicate,
+ context);
+
+ index_close(index_rel, lockmode);
+
+ if (max_hazard_found)
+ return true;
+
+ /* Add the index itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+
+ object = make_safety_object(index_oid, IndexRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ list_free(index_oid_list);
+
+ return false;
+}
+
+/*
+ * target_rel_domain_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified DOMAIN type. Only CHECK expressions are
+ * examined for parallel-safety.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context)
+{
+ ListCell *lc;
+ List *domain_list;
+ List *temp_objects;
+ char temp_hazard;
+
+ domain_list = GetDomainConstraints(typid);
+
+ foreach(lc, domain_list)
+ {
+ DomainConstraintState *r = (DomainConstraintState *) lfirst(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) r->check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+ Oid constr_oid = get_domain_constraint_oid(typid,
+ r->name,
+ false);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_partitions_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any partitions of a specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition)
+{
+ int i;
+ PartitionDesc pdesc;
+ PartitionKey pkey;
+ ListCell *partexprs_item;
+ int partnatts;
+ List *partexprs,
+ *qual;
+
+ /*
+ * A partition's check expression is derived from its parent table's
+ * partition key expression, so we need not check it again for a
+ * partition; the parallel safety of the parent table's partition key
+ * expression has already been checked.
+ */
+ if (!is_partition)
+ {
+ qual = RelationGetPartitionQual(rel);
+ if (parallel_hazard_walker((Node *) qual, context))
+ return true;
+ }
+
+ if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ return false;
+
+ pkey = RelationGetPartitionKey(rel);
+
+ partnatts = get_partition_natts(pkey);
+ partexprs = get_partition_exprs(pkey);
+
+ partexprs_item = list_head(partexprs);
+ for (i = 0; i < partnatts; i++)
+ {
+ Oid funcOid = pkey->partsupfunc[i].fn_oid;
+
+ if (OidIsValid(funcOid))
+ {
+ char proparallel = func_parallel(funcOid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object;
+
+ object = make_safety_object(funcOid, ProcedureRelationId,
+ proparallel);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /* Check parallel-safety of any expressions in the partition key */
+ if (get_partition_col_attnum(pkey, i) == 0)
+ {
+ Node *check_expr = (Node *) lfirst(partexprs_item);
+
+ if (parallel_hazard_walker(check_expr, context))
+ return true;
+
+ partexprs_item = lnext(partexprs, partexprs_item);
+ }
+ }
+
+ /* Recursively check each partition ... */
+
+ /* Create the PartitionDirectory infrastructure if we didn't already */
+ if (context->partition_directory == NULL)
+ context->partition_directory =
+ CreatePartitionDirectory(CurrentMemoryContext, false);
+
+ pdesc = PartitionDirectoryLookup(context->partition_directory, rel);
+
+ for (i = 0; i < pdesc->nparts; i++)
+ {
+ Relation part_rel;
+ bool max_hazard_found;
+
+ part_rel = table_open(pdesc->oids[i], AccessShareLock);
+ max_hazard_found = target_rel_parallel_hazard_recurse(part_rel,
+ context,
+ true,
+ false);
+ table_close(part_rel, AccessShareLock);
+
+ if (max_hazard_found)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_chk_constr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any CHECK expressions or CHECK constraints related to the
+ * specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ char temp_hazard;
+ int i;
+ TupleDesc tupdesc;
+ List *temp_objects;
+ ConstrCheck *check;
+
+ tupdesc = RelationGetDescr(rel);
+
+ if (tupdesc->constr == NULL)
+ return false;
+
+ check = tupdesc->constr->check;
+
+ /*
+ * Determine if there are any CHECK constraints which are not
+ * parallel-safe.
+ */
+ for (i = 0; i < tupdesc->constr->num_check; i++)
+ {
+ Expr *check_expr = stringToNode(check[i].ccbin);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ if (context->objects != NIL)
+ {
+ Oid constr_oid;
+ safety_object *object;
+
+ constr_oid = get_relation_constraint_oid(rel->rd_rel->oid,
+ check[i].ccname,
+ true);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
* is_parallel_allowed_for_modify
*
* Check at a high-level if parallel mode is able to be used for the specified
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 88faf4d..06d859c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -23,6 +23,8 @@
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_type.h"
#include "catalog/system_fk_info.h"
@@ -31,6 +33,7 @@
#include "common/keywords.h"
#include "funcapi.h"
#include "miscadmin.h"
+#include "optimizer/clauses.h"
#include "parser/scansup.h"
#include "pgstat.h"
#include "postmaster/syslogger.h"
@@ -43,6 +46,7 @@
#include "utils/lsyscache.h"
#include "utils/ruleutils.h"
#include "utils/timestamp.h"
+#include "utils/varlena.h"
/*
* Common subroutine for num_nulls() and num_nonnulls().
@@ -605,6 +609,96 @@ pg_collation_for(PG_FUNCTION_ARGS)
PG_RETURN_TEXT_P(cstring_to_text(generate_collation_name(collid)));
}
+/*
+ * Find the worst parallel-hazard level in the given relation
+ *
+ * Returns the worst parallel hazard level (the earliest in this list:
+ * PROPARALLEL_UNSAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_SAFE) that can
+ * be found in the given relation.
+ */
+Datum
+pg_get_table_max_parallel_dml_hazard(PG_FUNCTION_ARGS)
+{
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ (void) target_rel_parallel_hazard(relOid, false,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+
+ PG_RETURN_CHAR(max_parallel_hazard);
+}
+
+/*
+ * Determine whether the target relation is safe for parallel modification.
+ *
+ * Returns all the PARALLEL RESTRICTED/UNSAFE objects found.
+ */
+Datum
+pg_get_table_parallel_dml_safety(PG_FUNCTION_ARGS)
+{
+#define PG_GET_PARALLEL_SAFETY_COLS 3
+ List *objects;
+ ListCell *object;
+ TupleDesc tupdesc;
+ Tuplestorestate *tupstore;
+ MemoryContext per_query_ctx;
+ MemoryContext oldcontext;
+ ReturnSetInfo *rsinfo;
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+
+ /* check to see if caller supports us returning a tuplestore */
+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ if (!(rsinfo->allowedModes & SFRM_Materialize))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("materialize mode required, but it is not allowed in this context")));
+
+ /* Build a tuple descriptor for our result type */
+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ elog(ERROR, "return type must be a row type");
+
+ per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+ tupstore = tuplestore_begin_heap(true, false, work_mem);
+ rsinfo->returnMode = SFRM_Materialize;
+ rsinfo->setResult = tupstore;
+ rsinfo->setDesc = tupdesc;
+
+ MemoryContextSwitchTo(oldcontext);
+
+ objects = target_rel_parallel_hazard(relOid, true,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+ foreach(object, objects)
+ {
+ Datum values[PG_GET_PARALLEL_SAFETY_COLS];
+ bool nulls[PG_GET_PARALLEL_SAFETY_COLS];
+ safety_object *sobject = (safety_object *) lfirst(object);
+
+ memset(nulls, 0, sizeof(nulls));
+
+ values[0] = ObjectIdGetDatum(sobject->objid);
+ values[1] = ObjectIdGetDatum(sobject->classid);
+ values[2] = CharGetDatum(sobject->proparallel);
+
+ tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ }
+
+ /* clean up and return the tuplestore */
+ tuplestore_donestoring(tupstore);
+
+ return (Datum) 0;
+}
+
/*
* pg_relation_is_updatable - determine which update events the specified
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 326fae6..02a8f70 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -2535,6 +2535,23 @@ compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2)
}
/*
+ * GetDomainConstraints --- get DomainConstraintState list of specified domain type
+ */
+List *
+GetDomainConstraints(Oid type_id)
+{
+ TypeCacheEntry *typentry;
+ List *constraints = NIL;
+
+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
+
+ if (typentry->domainData != NULL)
+ constraints = typentry->domainData->constraints;
+
+ return constraints;
+}
+
+/*
* Load (or re-load) the enumData member of the typcache entry.
*/
static void
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252..4483cd1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3770,6 +3770,20 @@
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
+{ oid => '6122',
+ descr => 'parallel unsafe/restricted objects in the target relation',
+ proname => 'pg_get_table_parallel_dml_safety', prorows => '100',
+ proretset => 't', provolatile => 'v', proparallel => 'u',
+ prorettype => 'record', proargtypes => 'regclass',
+ proallargtypes => '{regclass,oid,oid,char}',
+ proargmodes => '{i,o,o,o}',
+ proargnames => '{table_name,objid,classid,proparallel}',
+ prosrc => 'pg_get_table_parallel_dml_safety' },
+
+{ oid => '6123', descr => 'worst parallel-hazard level in the given relation for DML',
+ proname => 'pg_get_table_max_parallel_dml_hazard', prorettype => 'char', proargtypes => 'regclass',
+ prosrc => 'pg_get_table_max_parallel_dml_hazard', provolatile => 'v', proparallel => 'u' },
+
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
@@ -3777,11 +3791,11 @@
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_ins' },
+ proname => 'RI_FKey_check_ins', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_upd' },
+ proname => 'RI_FKey_check_upd', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 32b5656..f8b2a72 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -23,6 +23,17 @@ typedef struct
List **windowFuncs; /* lists of WindowFuncs for each winref */
} WindowFuncLists;
+/*
+ * Information about a table-related object which could affect the safety of
+ * parallel data modification on table.
+ */
+typedef struct safety_object
+{
+ Oid objid; /* OID of object itself */
+ Oid classid; /* OID of its catalog */
+ char proparallel; /* parallel safety of the object */
+} safety_object;
+
extern bool contain_agg_clause(Node *clause);
extern bool contain_window_function(Node *clause);
@@ -54,5 +65,8 @@ extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
extern bool is_parallel_allowed_for_modify(Query *parse);
+extern List *target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting,
+ char *max_hazard);
#endif /* CLAUSES_H */
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index 1d68a9a..28ca7d8 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -199,6 +199,8 @@ extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod);
extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2);
+extern List *GetDomainConstraints(Oid type_id);
+
extern size_t SharedRecordTypmodRegistryEstimate(void);
extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37cf4b2..307bb97 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3491,6 +3491,7 @@ rm_detail_t
role_auth_extra
row_security_policy_hook_type
rsv_callback
+safety_object
saophash_hash
save_buffer
scram_state
--
2.7.2.windows.1
Attachment: v15-0002-parallel-SELECT-for-INSERT.patch
From 7cad3cf052856ec9f5e087f1edec1c24b920dc74 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@fujitsu.com>
Date: Mon, 31 May 2021 09:32:54 +0800
Subject: [PATCH v14 2/4] parallel-SELECT-for-INSERT
Enable parallel select for insert.
Prepare for entering parallel mode by assigning a TransactionId.
---
src/backend/access/transam/xact.c | 26 +++++++++
src/backend/executor/execMain.c | 3 +
src/backend/optimizer/plan/planner.c | 21 +++----
src/backend/optimizer/util/clauses.c | 87 +++++++++++++++++++++++++++-
src/include/access/xact.h | 15 +++++
src/include/optimizer/clauses.h | 2 +
6 files changed, 143 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 441445927e..2d68e4633a 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -1014,6 +1014,32 @@ IsInParallelMode(void)
return CurrentTransactionState->parallelModeLevel != 0;
}
+/*
+ * PrepareParallelModePlanExec
+ *
+ * Prepare for entering parallel mode plan execution, based on command-type.
+ */
+void
+PrepareParallelModePlanExec(CmdType commandType)
+{
+ if (IsModifySupportedInParallelMode(commandType))
+ {
+ Assert(!IsInParallelMode());
+
+ /*
+ * Prepare for entering parallel mode by assigning a TransactionId.
+ * Failure to do this now would result in heap_insert() subsequently
+ * attempting to assign a TransactionId whilst in parallel-mode, which
+ * is not allowed.
+ *
+ * This approach has a disadvantage in that if the underlying SELECT
+ * does not return any rows, then the TransactionId is not used,
+ * however that should rarely happen in practice.
+ */
+ (void) GetCurrentTransactionId();
+ }
+}
+
/*
* CommandCounterIncrement
*/
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index b3ce4bae53..ea685f0846 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1535,7 +1535,10 @@ ExecutePlan(EState *estate,
estate->es_use_parallel_mode = use_parallel_mode;
if (use_parallel_mode)
+ {
+ PrepareParallelModePlanExec(estate->es_plannedstmt->commandType);
EnterParallelMode();
+ }
/*
* Loop until we've processed the proper number of tuples from the plan.
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1868c4eff4..7736813230 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -314,16 +314,16 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
/*
* Assess whether it's feasible to use parallel mode for this query. We
* can't do this in a standalone backend, or if the command will try to
- * modify any data, or if this is a cursor operation, or if GUCs are set
- * to values that don't permit parallelism, or if parallel-unsafe
- * functions are present in the query tree.
+ * modify any data (except for Insert), or if this is a cursor operation,
+ * or if GUCs are set to values that don't permit parallelism, or if
+ * parallel-unsafe functions are present in the query tree.
*
- * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
- * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
- * backend writes into a completely new table. In the future, we can
- * extend it to allow workers to write into the table. However, to allow
- * parallel updates and deletes, we have to solve other problems,
- * especially around combo CIDs.)
+ * (Note that we do allow CREATE TABLE AS, INSERT INTO...SELECT, SELECT
+ * INTO, and CREATE MATERIALIZED VIEW to use parallel plans. However, as
+ * of now, only the leader backend writes into a completely new table. In
+ * the future, we can extend it to allow workers to write into the table.
+ * However, to allow parallel updates and deletes, we have to solve other
+ * problems, especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
@@ -332,7 +332,8 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
*/
if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
IsUnderPostmaster &&
- parse->commandType == CMD_SELECT &&
+ (parse->commandType == CMD_SELECT ||
+ is_parallel_allowed_for_modify(parse)) &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker())
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..ac0f243bf1 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -20,6 +20,8 @@
#include "postgres.h"
#include "access/htup_details.h"
+#include "access/table.h"
+#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
#include "catalog/pg_language.h"
@@ -43,6 +45,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
+#include "parser/parsetree.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -51,6 +54,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -151,6 +155,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
int nargs, List *args);
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
+static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
/*****************************************************************************
@@ -618,12 +623,34 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
+ bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
- (void) max_parallel_hazard_walker((Node *) parse, &context);
+
+ max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+
+ if (!max_hazard_found &&
+ IsModifySupportedInParallelMode(parse->commandType))
+ {
+ RangeTblEntry *rte;
+ Relation target_rel;
+
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ target_rel = table_open(rte->relid, NoLock);
+
+ (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
+ &context);
+ table_close(target_rel, NoLock);
+ }
+
return context.max_hazard;
}
@@ -857,6 +884,64 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
context);
}
+/*
+ * is_parallel_allowed_for_modify
+ *
+ * Check at a high level whether parallel mode can be used for the specified
+ * table-modification statement. Currently, only INSERT is supported.
+ *
+ * It's not possible in the following cases:
+ *
+ * 1) INSERT...ON CONFLICT...DO UPDATE
+ * 2) INSERT without SELECT
+ *
+ * (Note: we don't do in-depth parallel-safety checks here; we do only
+ * the cheaper tests that can quickly exclude obvious cases for which
+ * parallelism isn't supported, to avoid having to do further
+ * parallel-safety checks for them)
+ */
+bool
+is_parallel_allowed_for_modify(Query *parse)
+{
+ bool hasSubQuery;
+ RangeTblEntry *rte;
+ ListCell *lc;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return false;
+
+ /*
+ * UPDATE is not currently supported in parallel-mode, so prohibit
+ * INSERT...ON CONFLICT...DO UPDATE...
+ *
+ * In order to support update, even if only in the leader, some further
+ * work would need to be done. A mechanism would be needed for sharing
+ * combo-cids between leader and workers during parallel-mode, since for
+ * example, the leader might generate a combo-cid and it needs to be
+ * propagated to the workers.
+ */
+ if (parse->commandType == CMD_INSERT &&
+ parse->onConflict != NULL &&
+ parse->onConflict->action == ONCONFLICT_UPDATE)
+ return false;
+
+ /*
+ * If there is no underlying SELECT, a parallel insert operation is not
+ * desirable.
+ */
+ hasSubQuery = false;
+ foreach(lc, parse->rtable)
+ {
+ rte = lfirst_node(RangeTblEntry, lc);
+ if (rte->rtekind == RTE_SUBQUERY)
+ {
+ hasSubQuery = true;
+ break;
+ }
+ }
+
+ return hasSubQuery;
+}
/*****************************************************************************
* Check clauses for nonstrict functions
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 134f6862da..fd3f86bf7c 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -466,5 +466,20 @@ extern void ParsePrepareRecord(uint8 info, xl_xact_prepare *xlrec, xl_xact_parse
extern void EnterParallelMode(void);
extern void ExitParallelMode(void);
extern bool IsInParallelMode(void);
+extern void PrepareParallelModePlanExec(CmdType commandType);
+
+/*
+ * IsModifySupportedInParallelMode
+ *
+ * Indicates whether execution of the specified table-modification command
+ * (INSERT/UPDATE/DELETE) in parallel-mode is supported, subject to certain
+ * parallel-safety conditions.
+ */
+static inline bool
+IsModifySupportedInParallelMode(CmdType commandType)
+{
+ /* Currently only INSERT is supported */
+ return (commandType == CMD_INSERT);
+}
#endif /* XACT_H */
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 0673887a85..32b56565e5 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -53,4 +53,6 @@ extern void CommuteOpExpr(OpExpr *clause);
extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
+extern bool is_parallel_allowed_for_modify(Query *parse);
+
#endif /* CLAUSES_H */
--
2.27.0
Attachment: 0006-hack-the-rewriter-bug.patch
From a8f92eec97e7d3d6afc2cdfd5982b3300de9f45b Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Wed, 28 Jul 2021 19:05:34 +0800
Subject: [PATCH] hack the rewriter bug
---
src/backend/optimizer/util/clauses.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 09c7e92..9195ae4 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -941,6 +941,27 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * ModifyingCTE expressions are treated as parallel-unsafe.
+ *
+ * XXX Normally, if the Query has a modifying CTE, the hasModifyingCTE
+ * flag is set in the Query tree, and the query will be regarded as
+ * parallel-usafe. However, in some cases, a re-written query with a
+ * modifying CTE does not have that flag set, due to a bug in the query
+ * rewriter.
+ */
+ else if (IsA(node, CommonTableExpr))
+ {
+ CommonTableExpr *cte = (CommonTableExpr *) node;
+ Query *ctequery = castNode(Query, cte->ctequery);
+
+ if (ctequery->commandType != CMD_SELECT)
+ {
+ context->max_hazard = PROPARALLEL_UNSAFE;
+ return true;
+ }
+ }
+
+ /*
* As a notational convenience for callers, look through RestrictInfo.
*/
else if (IsA(node, RestrictInfo))
--
2.7.2.windows.1
Attachment: v15-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch
From 01bdde01fb66e93928cb84b6aeee7dd31ea9ad83 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Tue, 3 Aug 2021 14:13:39 +0800
Subject: [PATCH] CREATE-ALTER-TABLE-PARALLEL-DML
Enable users to declare a table's parallel data-modification safety
(DEFAULT/SAFE/RESTRICTED/UNSAFE).
Add a table property that represents parallel safety of a table for
DML statement execution.
It may be specified as follows:
CREATE TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparalleldml column as 'u',
'r', or 's' like pg_proc's proparallel and as 'd' if not set.
The default is 'd'.
If relparalleldml is specified (safe/restricted/unsafe), the planner
assumes that the table, its descendant partitions, and their ancillary
objects have, at worst, the specified parallel safety. The user is
responsible for its correctness.
If not set, the planner checks the parallel safety automatically for a
non-partitioned table. For a partitioned table, DEFAULT is treated the
same as UNSAFE.
---
src/backend/bootstrap/bootparse.y | 3 +
src/backend/catalog/heap.c | 7 +-
src/backend/catalog/index.c | 2 +
src/backend/catalog/toasting.c | 1 +
src/backend/commands/cluster.c | 1 +
src/backend/commands/createas.c | 1 +
src/backend/commands/sequence.c | 1 +
src/backend/commands/tablecmds.c | 97 +++++++++++++++++++
src/backend/commands/typecmds.c | 1 +
src/backend/commands/view.c | 1 +
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 1 +
src/backend/parser/gram.y | 73 ++++++++++----
src/backend/utils/cache/relcache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 50 ++++++++--
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 71 ++++++++++++--
src/include/catalog/heap.h | 2 +
src/include/catalog/pg_class.h | 3 +
src/include/catalog/pg_proc.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/nodes/primnodes.h | 1 +
src/include/parser/kwlist.h | 1 +
src/include/utils/relcache.h | 3 +-
.../test_ddl_deparse/test_ddl_deparse.c | 3 +
27 files changed, 302 insertions(+), 39 deletions(-)
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 5fcd004e1b..4712536088 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -25,6 +25,7 @@
#include "catalog/pg_authid.h"
#include "catalog/pg_class.h"
#include "catalog/pg_namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/toasting.h"
#include "commands/defrem.h"
@@ -208,6 +209,7 @@ Boot_CreateStmt:
tupdesc,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
true,
@@ -231,6 +233,7 @@ Boot_CreateStmt:
NIL,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 83746d3fd9..135df961c9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -302,6 +302,7 @@ heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -404,7 +405,8 @@ heap_create(const char *relname,
shared_relation,
mapped_relation,
relpersistence,
- relkind);
+ relkind,
+ relparalleldml);
/*
* Have the storage manager create the relation's disk file, if needed.
@@ -959,6 +961,7 @@ InsertPgClassTuple(Relation pg_class_desc,
values[Anum_pg_class_relhassubclass - 1] = BoolGetDatum(rd_rel->relhassubclass);
values[Anum_pg_class_relispopulated - 1] = BoolGetDatum(rd_rel->relispopulated);
values[Anum_pg_class_relreplident - 1] = CharGetDatum(rd_rel->relreplident);
+ values[Anum_pg_class_relparalleldml - 1] = CharGetDatum(rd_rel->relparalleldml);
values[Anum_pg_class_relispartition - 1] = BoolGetDatum(rd_rel->relispartition);
values[Anum_pg_class_relrewrite - 1] = ObjectIdGetDatum(rd_rel->relrewrite);
values[Anum_pg_class_relfrozenxid - 1] = TransactionIdGetDatum(rd_rel->relfrozenxid);
@@ -1152,6 +1155,7 @@ heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
@@ -1299,6 +1303,7 @@ heap_create_with_catalog(const char *relname,
tupdesc,
relkind,
relpersistence,
+ relparalleldml,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26bfa74ce7..18f3a51686 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
@@ -935,6 +936,7 @@ index_create(Relation heapRelation,
indexTupDesc,
relkind,
relpersistence,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 147b5abc19..b32d2d4132 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -251,6 +251,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
NIL,
RELKIND_TOASTVALUE,
rel->rd_rel->relpersistence,
+ rel->rd_rel->relparalleldml,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index b3d8b6deb0..d1a7603d90 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -693,6 +693,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
NIL,
RELKIND_RELATION,
relpersistence,
+ OldHeap->rd_rel->relparalleldml,
false,
RelationIsMapped(OldHeap),
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 0982851715..7607b91ae8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -107,6 +107,7 @@ create_ctas_internal(List *attrList, IntoClause *into)
create->options = into->options;
create->oncommit = into->onCommit;
create->tablespacename = into->tableSpaceName;
+ create->paralleldmlsafety = into->paralleldmlsafety;
create->if_not_exists = false;
create->accessMethod = into->accessMethod;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..384770050a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -211,6 +211,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
stmt->options = NIL;
stmt->oncommit = ONCOMMIT_NOOP;
stmt->tablespacename = NULL;
+ stmt->paralleldmlsafety = NULL;
stmt->if_not_exists = seq->if_not_exists;
address = DefineRelation(stmt, RELKIND_SEQUENCE, seq->ownerId, NULL, NULL);
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fcd778c62a..5968252648 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -40,6 +40,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_trigger.h"
@@ -603,6 +604,7 @@ static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
static List *GetParentedForeignKeyRefs(Relation partition);
static void ATDetachCheckNoForeignKeyRefs(Relation partition);
static char GetAttributeCompression(Oid atttypid, char *compression);
+static void ATExecParallelDMLSafety(Relation rel, Node *def);
/* ----------------------------------------------------------------
@@ -648,6 +650,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
LOCKMODE parentLockmode;
const char *accessMethod = NULL;
Oid accessMethodId = InvalidOid;
+ char relparalleldml = PROPARALLEL_DEFAULT;
/*
* Truncate relname to appropriate length (probably a waste of time, as
@@ -926,6 +929,32 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
if (accessMethod != NULL)
accessMethodId = get_table_am_oid(accessMethod, false);
+ if (stmt->paralleldmlsafety != NULL)
+ {
+ if (strcmp(stmt->paralleldmlsafety, "safe") == 0)
+ {
+ if (relkind == RELKIND_FOREIGN_TABLE ||
+ stmt->relation->relpersistence == RELPERSISTENCE_TEMP)
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ relname),
+ errdetail_relkind_not_supported(relkind)));
+
+ relparalleldml = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(stmt->paralleldmlsafety, "restricted") == 0)
+ relparalleldml = PROPARALLEL_RESTRICTED;
+ else if (strcmp(stmt->paralleldmlsafety, "unsafe") == 0)
+ relparalleldml = PROPARALLEL_UNSAFE;
+ else if (strcmp(stmt->paralleldmlsafety, "default") == 0)
+ relparalleldml = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
/*
* Create the relation. Inherited defaults and constraints are passed in
* for immediate handling --- since they don't need parsing, they can be
@@ -944,6 +973,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
old_constraints),
relkind,
stmt->relation->relpersistence,
+ relparalleldml,
false,
false,
stmt->oncommit,
@@ -4187,6 +4217,7 @@ AlterTableGetLockLevel(List *cmds)
case AT_SetIdentity:
case AT_DropExpression:
case AT_SetCompression:
+ case AT_ParallelDMLSafety:
cmd_lockmode = AccessExclusiveLock;
break;
@@ -4737,6 +4768,11 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
/* No command-specific prep needed */
pass = AT_PASS_MISC;
break;
+ case AT_ParallelDMLSafety:
+ ATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_FOREIGN_TABLE);
+ /* No command-specific prep needed */
+ pass = AT_PASS_MISC;
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -5142,6 +5178,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab,
case AT_DetachPartitionFinalize:
ATExecDetachPartitionFinalize(rel, ((PartitionCmd *) cmd->def)->name);
break;
+ case AT_ParallelDMLSafety:
+ ATExecParallelDMLSafety(rel, cmd->def);
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -6113,6 +6152,8 @@ alter_table_type_to_string(AlterTableType cmdtype)
return "ALTER COLUMN ... DROP IDENTITY";
case AT_ReAddStatistics:
return NULL; /* not real grammar */
+ case AT_ParallelDMLSafety:
+ return "PARALLEL DML SAFETY";
}
return NULL;
@@ -18773,3 +18814,59 @@ GetAttributeCompression(Oid atttypid, char *compression)
return cmethod;
}
+
+static void
+ATExecParallelDMLSafety(Relation rel, Node *def)
+{
+ Relation pg_class;
+ Oid relid;
+ HeapTuple tuple;
+ char relparallel = PROPARALLEL_DEFAULT;
+ char *parallel = strVal(def);
+
+ if (parallel)
+ {
+ if (strcmp(parallel, "safe") == 0)
+ {
+ /*
+ * We can't support table modification in a parallel worker if it's
+ * a foreign table/partition (no FDW API for supporting parallel
+ * access) or a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));
+
+ relparallel = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(parallel, "restricted") == 0)
+ relparallel = PROPARALLEL_RESTRICTED;
+ else if (strcmp(parallel, "unsafe") == 0)
+ relparallel = PROPARALLEL_UNSAFE;
+ else if (strcmp(parallel, "default") == 0)
+ relparallel = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
+ relid = RelationGetRelid(rel);
+
+ pg_class = table_open(RelationRelationId, RowExclusiveLock);
+
+ tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));
+
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", relid);
+
+ ((Form_pg_class) GETSTRUCT(tuple))->relparalleldml = relparallel;
+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);
+
+ table_close(pg_class, RowExclusiveLock);
+ heap_freetuple(tuple);
+}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 93eeff950b..a2f06c3e79 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2525,6 +2525,7 @@ DefineCompositeType(RangeVar *typevar, List *coldeflist)
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 4df05a0b33..65f33a95d8 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -227,6 +227,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
createStmt->options = options;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..df41165c5f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3534,6 +3534,7 @@ CopyCreateStmtFields(const CreateStmt *from, CreateStmt *newnode)
COPY_SCALAR_FIELD(oncommit);
COPY_STRING_FIELD(tablespacename);
COPY_STRING_FIELD(accessMethod);
+ COPY_STRING_FIELD(paralleldmlsafety);
COPY_SCALAR_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..67b1966f18 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -146,6 +146,7 @@ _equalIntoClause(const IntoClause *a, const IntoClause *b)
COMPARE_NODE_FIELD(options);
COMPARE_SCALAR_FIELD(onCommit);
COMPARE_STRING_FIELD(tableSpaceName);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_NODE_FIELD(viewQuery);
COMPARE_SCALAR_FIELD(skipData);
@@ -1292,6 +1293,7 @@ _equalCreateStmt(const CreateStmt *a, const CreateStmt *b)
COMPARE_SCALAR_FIELD(oncommit);
COMPARE_STRING_FIELD(tablespacename);
COMPARE_STRING_FIELD(accessMethod);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_SCALAR_FIELD(if_not_exists);
return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 48202d2232..fdc5b63c28 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1107,6 +1107,7 @@ _outIntoClause(StringInfo str, const IntoClause *node)
WRITE_NODE_FIELD(options);
WRITE_ENUM_FIELD(onCommit, OnCommitAction);
WRITE_STRING_FIELD(tableSpaceName);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_NODE_FIELD(viewQuery);
WRITE_BOOL_FIELD(skipData);
}
@@ -2714,6 +2715,7 @@ _outCreateStmtInfo(StringInfo str, const CreateStmt *node)
WRITE_ENUM_FIELD(oncommit, OnCommitAction);
WRITE_STRING_FIELD(tablespacename);
WRITE_STRING_FIELD(accessMethod);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_BOOL_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 77d082d8b4..ba725cb290 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -563,6 +563,7 @@ _readIntoClause(void)
READ_NODE_FIELD(options);
READ_ENUM_FIELD(onCommit, OnCommitAction);
READ_STRING_FIELD(tableSpaceName);
+ READ_STRING_FIELD(paralleldmlsafety);
READ_NODE_FIELD(viewQuery);
READ_BOOL_FIELD(skipData);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..f74a7cac60 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -609,7 +609,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <partboundspec> PartitionBoundSpec
%type <list> hash_partbound
%type <defelt> hash_partbound_elem
-
+%type <str> ParallelDMLSafety
/*
* Non-keyword token types. These are hard-wired into the "flex" lexer.
@@ -654,7 +654,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
DATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS
DEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC
- DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P
+ DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DML DO DOCUMENT_P DOMAIN_P
DOUBLE_P DROP
EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
@@ -2691,6 +2691,21 @@ alter_table_cmd:
n->subtype = AT_NoForceRowSecurity;
$$ = (Node *)n;
}
+ /* ALTER TABLE <name> PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
+ | PARALLEL DML ColId
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString($3);
+ $$ = (Node *)n;
+ }
+ | PARALLEL DML DEFAULT
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString("default");
+ $$ = (Node *)n;
+ }
| alter_generic_options
{
AlterTableCmd *n = makeNode(AlterTableCmd);
@@ -3276,7 +3291,7 @@ copy_generic_opt_arg_list_item:
CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
OptInherit OptPartitionSpec table_access_method_clause OptWith
- OnCommitOption OptTableSpace
+ OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3290,12 +3305,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $11;
n->oncommit = $12;
n->tablespacename = $13;
+ n->paralleldmlsafety = $14;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name '('
OptTableElementList ')' OptInherit OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3309,12 +3325,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $14;
n->oncommit = $15;
n->tablespacename = $16;
+ n->paralleldmlsafety = $17;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3329,12 +3346,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $10;
n->oncommit = $11;
n->tablespacename = $12;
+ n->paralleldmlsafety = $13;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3349,12 +3367,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $13;
n->oncommit = $14;
n->tablespacename = $15;
+ n->paralleldmlsafety = $16;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name
OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3369,12 +3389,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $12;
n->oncommit = $13;
n->tablespacename = $14;
+ n->paralleldmlsafety = $15;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF
qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3389,6 +3411,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $15;
n->oncommit = $16;
n->tablespacename = $17;
+ n->paralleldmlsafety = $18;
n->if_not_exists = true;
$$ = (Node *)n;
}
@@ -4089,6 +4112,11 @@ OptTableSpace: TABLESPACE name { $$ = $2; }
| /*EMPTY*/ { $$ = NULL; }
;
+ParallelDMLSafety: PARALLEL DML name { $$ = $3; }
+ | PARALLEL DML DEFAULT { $$ = pstrdup("default"); }
+ | /*EMPTY*/ { $$ = NULL; }
+ ;
+
OptConsTableSpace: USING INDEX TABLESPACE name { $$ = $4; }
| /*EMPTY*/ { $$ = NULL; }
;
@@ -4236,7 +4264,7 @@ CreateAsStmt:
create_as_target:
qualified_name opt_column_list table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
$$ = makeNode(IntoClause);
$$->rel = $1;
@@ -4245,6 +4273,7 @@ create_as_target:
$$->options = $4;
$$->onCommit = $5;
$$->tableSpaceName = $6;
+ $$->paralleldmlsafety = $7;
$$->viewQuery = NULL;
$$->skipData = false; /* might get changed later */
}
@@ -5024,7 +5053,7 @@ AlterForeignServerStmt: ALTER SERVER name foreign_server_version alter_generic_o
CreateForeignTableStmt:
CREATE FOREIGN TABLE qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5036,15 +5065,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $9;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $10;
- n->options = $11;
+ n->servername = $11;
+ n->options = $12;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5056,15 +5086,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $12;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $13;
- n->options = $14;
+ n->servername = $14;
+ n->options = $15;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5077,15 +5108,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $10;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $11;
- n->options = $12;
+ n->servername = $12;
+ n->options = $13;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5098,10 +5130,11 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $13;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $14;
- n->options = $15;
+ n->servername = $15;
+ n->options = $16;
$$ = (Node *) n;
}
;
@@ -15547,6 +15580,7 @@ unreserved_keyword:
| DICTIONARY
| DISABLE_P
| DISCARD
+ | DML
| DOCUMENT_P
| DOMAIN_P
| DOUBLE_P
@@ -16087,6 +16121,7 @@ bare_label_keyword:
| DISABLE_P
| DISCARD
| DISTINCT
+ | DML
| DO
| DOCUMENT_P
| DOMAIN_P
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..70d8ecb1dd 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -1873,6 +1873,7 @@ formrdesc(const char *relationName, Oid relationReltype,
relation->rd_rel->relkind = RELKIND_RELATION;
relation->rd_rel->relnatts = (int16) natts;
relation->rd_rel->relam = HEAP_TABLE_AM_OID;
+ relation->rd_rel->relparalleldml = PROPARALLEL_DEFAULT;
/*
* initialize attribute tuple form
@@ -3359,7 +3360,8 @@ RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind)
+ char relkind,
+ char relparalleldml)
{
Relation rel;
MemoryContext oldcxt;
@@ -3509,6 +3511,8 @@ RelationBuildLocalRelation(const char *relname,
else
rel->rd_rel->relreplident = REPLICA_IDENTITY_NOTHING;
+ rel->rd_rel->relparalleldml = relparalleldml;
+
/*
* Insert relation physical and logical identifiers (OIDs) into the right
* places. For a mapped relation, we set relfilenode to zero and rely on
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 90ac445bcd..5165202e84 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6253,6 +6253,7 @@ getTables(Archive *fout, int *numTables)
int i_relpersistence;
int i_relispopulated;
int i_relreplident;
+ int i_relparalleldml;
int i_owning_tab;
int i_owning_col;
int i_reltablespace;
@@ -6358,7 +6359,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, am.amname, "
+ "c.relreplident, c.relparalleldml, c.relpages, am.amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
"ELSE 0 END AS foreignserver, "
@@ -6450,7 +6451,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6503,7 +6504,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6556,7 +6557,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6609,7 +6610,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"c.relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6660,7 +6661,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
@@ -6708,7 +6709,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6756,7 +6757,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6803,7 +6804,7 @@ getTables(Archive *fout, int *numTables)
"0 AS toid, "
"0 AS tfrozenxid, 0 AS tminmxid,"
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6872,6 +6873,7 @@ getTables(Archive *fout, int *numTables)
i_relpersistence = PQfnumber(res, "relpersistence");
i_relispopulated = PQfnumber(res, "relispopulated");
i_relreplident = PQfnumber(res, "relreplident");
+ i_relparalleldml = PQfnumber(res, "relparalleldml");
i_relpages = PQfnumber(res, "relpages");
i_foreignserver = PQfnumber(res, "foreignserver");
i_owning_tab = PQfnumber(res, "owning_tab");
@@ -6927,6 +6929,7 @@ getTables(Archive *fout, int *numTables)
tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
tblinfo[i].relispopulated = (strcmp(PQgetvalue(res, i, i_relispopulated), "t") == 0);
tblinfo[i].relreplident = *(PQgetvalue(res, i, i_relreplident));
+ tblinfo[i].relparalleldml = *(PQgetvalue(res, i, i_relparalleldml));
tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
tblinfo[i].minmxid = atooid(PQgetvalue(res, i, i_relminmxid));
@@ -16555,6 +16558,35 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
}
}
+ if (tbinfo->relkind == RELKIND_RELATION ||
+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE ||
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+ {
+ appendPQExpBuffer(q, "\nALTER %sTABLE %s PARALLEL DML ",
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE ? "FOREIGN " : "",
+ qualrelname);
+
+ switch (tbinfo->relparalleldml)
+ {
+ case 's':
+ appendPQExpBuffer(q, "SAFE;\n");
+ break;
+ case 'r':
+ appendPQExpBuffer(q, "RESTRICTED;\n");
+ break;
+ case 'u':
+ appendPQExpBuffer(q, "UNSAFE;\n");
+ break;
+ case 'd':
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ default:
+ /* should not reach here */
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ }
+ }
+
if (tbinfo->forcerowsec)
appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n",
qualrelname);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..8175a0bc82 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -270,6 +270,7 @@ typedef struct _tableInfo
char relpersistence; /* relation persistence */
bool relispopulated; /* relation is populated */
char relreplident; /* replica identifier */
+ char relparalleldml; /* parallel safety of dml on the relation */
char *reltablespace; /* relation tablespace */
char *reloptions; /* options specified by WITH (...) */
char *checkoption; /* WITH CHECK OPTION, if any */
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 8333558bda..f896fe1793 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1656,6 +1656,7 @@ describeOneTableDetails(const char *schemaname,
char *reloftype;
char relpersistence;
char relreplident;
+ char relparalleldml;
char *relam;
} tableinfo;
bool show_column_details = false;
@@ -1669,7 +1670,25 @@ describeOneTableDetails(const char *schemaname,
initPQExpBuffer(&tmpbuf);
/* Get general table info */
- if (pset.sversion >= 120000)
+ if (pset.sversion >= 150000)
+ {
+ printfPQExpBuffer(&buf,
+ "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
+ "c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, "
+ "false AS relhasoids, c.relispartition, %s, c.reltablespace, "
+ "CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, "
+ "c.relpersistence, c.relreplident, am.amname, c.relparalleldml\n"
+ "FROM pg_catalog.pg_class c\n "
+ "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)\n"
+ "LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)\n"
+ "WHERE c.oid = '%s';",
+ (verbose ?
+ "pg_catalog.array_to_string(c.reloptions || "
+ "array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')\n"
+ : "''"),
+ oid);
+ }
+ else if (pset.sversion >= 120000)
{
printfPQExpBuffer(&buf,
"SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
@@ -1853,6 +1872,8 @@ describeOneTableDetails(const char *schemaname,
(char *) NULL : pg_strdup(PQgetvalue(res, 0, 14));
else
tableinfo.relam = NULL;
+ tableinfo.relparalleldml = (pset.sversion >= 150000) ?
+ *(PQgetvalue(res, 0, 15)) : 0;
PQclear(res);
res = NULL;
@@ -3630,6 +3651,21 @@ describeOneTableDetails(const char *schemaname,
printfPQExpBuffer(&buf, _("Access method: %s"), tableinfo.relam);
printTableAddFooter(&cont, buf.data);
}
+
+ if (verbose &&
+ (tableinfo.relkind == RELKIND_RELATION ||
+ tableinfo.relkind == RELKIND_PARTITIONED_TABLE ||
+ tableinfo.relkind == RELKIND_FOREIGN_TABLE) &&
+ tableinfo.relparalleldml != 0)
+ {
+ printfPQExpBuffer(&buf, _("Parallel DML: %s"),
+ tableinfo.relparalleldml == 'd' ? "default" :
+ tableinfo.relparalleldml == 'u' ? "unsafe" :
+ tableinfo.relparalleldml == 'r' ? "restricted" :
+ tableinfo.relparalleldml == 's' ? "safe" :
+ "???");
+ printTableAddFooter(&cont, buf.data);
+ }
}
/* reloptions, if verbose */
@@ -4005,7 +4041,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
PGresult *res;
printQueryOpt myopt = pset.popt;
int cols_so_far;
- bool translate_columns[] = {false, false, true, false, false, false, false, false, false};
+ bool translate_columns[] = {false, false, true, false, false, false, false, false, false, false};
/* If tabtypes is empty, we default to \dtvmsE (but see also command.c) */
if (!(showTables || showIndexes || showViews || showMatViews || showSeq || showForeign))
@@ -4073,22 +4109,43 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
gettext_noop("unlogged"),
gettext_noop("Persistence"));
translate_columns[cols_so_far] = true;
+ cols_so_far++;
}
- /*
- * We don't bother to count cols_so_far below here, as there's no need
- * to; this might change with future additions to the output columns.
- */
-
/*
* Access methods exist for tables, materialized views and indexes.
* This has been introduced in PostgreSQL 12 for tables.
*/
if (pset.sversion >= 120000 && !pset.hide_tableam &&
(showTables || showMatViews || showIndexes))
+ {
appendPQExpBuffer(&buf,
",\n am.amname as \"%s\"",
gettext_noop("Access method"));
+ cols_so_far++;
+ }
+
+ /*
+ * Show whether the data in the relation can be modified in parallel
+ * mode: default ('d'), unsafe ('u'), restricted ('r'), or safe ('s').
+ * This has been introduced in PostgreSQL 15 for tables.
+ */
+ if (pset.sversion >= 150000)
+ {
+ appendPQExpBuffer(&buf,
+ ",\n CASE c.relparalleldml WHEN 'd' THEN '%s' WHEN 'u' THEN '%s' WHEN 'r' THEN '%s' WHEN 's' THEN '%s' END as \"%s\"",
+ gettext_noop("default"),
+ gettext_noop("unsafe"),
+ gettext_noop("restricted"),
+ gettext_noop("safe"),
+ gettext_noop("Parallel DML"));
+ translate_columns[cols_so_far] = true;
+ }
+
+ /*
+ * We don't bother to count cols_so_far below here, as there's no need
+ * to; this might change with future additions to the output columns.
+ */
/*
* As of PostgreSQL 9.0, use pg_table_size() to show a more accurate
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index 6ce480b49c..b59975919b 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -55,6 +55,7 @@ extern Relation heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -73,6 +74,7 @@ extern Oid heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h
index fef9945ed8..244eac6bd8 100644
--- a/src/include/catalog/pg_class.h
+++ b/src/include/catalog/pg_class.h
@@ -116,6 +116,9 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat
/* see REPLICA_IDENTITY_xxx constants */
char relreplident BKI_DEFAULT(n);
+ /* parallel safety of the dml on the relation */
+ char relparalleldml BKI_DEFAULT(d);
+
/* is relation a partition? */
bool relispartition BKI_DEFAULT(f);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index b33b8b0134..cd52c0e254 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -171,6 +171,8 @@ DECLARE_UNIQUE_INDEX(pg_proc_proname_args_nsp_index, 2691, ProcedureNameArgsNspI
#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
+#define PROPARALLEL_DEFAULT 'd' /* only used for parallel dml safety */
+
/*
* Symbolic values for proargmodes column. Note that these must agree with
* the FunctionParameterMode enum in parsenodes.h; we declare them here to
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..0352e41c6e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1934,7 +1934,8 @@ typedef enum AlterTableType
AT_AddIdentity, /* ADD IDENTITY */
AT_SetIdentity, /* SET identity column options */
AT_DropIdentity, /* DROP IDENTITY */
- AT_ReAddStatistics /* internal to commands/tablecmds.c */
+ AT_ReAddStatistics, /* internal to commands/tablecmds.c */
+ AT_ParallelDMLSafety /* PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
} AlterTableType;
typedef struct ReplicaIdentityStmt
@@ -2180,6 +2181,7 @@ typedef struct CreateStmt
OnCommitAction oncommit; /* what do we do at COMMIT? */
char *tablespacename; /* table space to use, or NULL */
char *accessMethod; /* table access method */
+ char *paralleldmlsafety; /* parallel dml safety */
bool if_not_exists; /* just do nothing if it already exists? */
} CreateStmt;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index c04282f91f..6e679d9f97 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -115,6 +115,7 @@ typedef struct IntoClause
List *options; /* options from WITH clause */
OnCommitAction onCommit; /* what do we do at COMMIT? */
char *tableSpaceName; /* table space to use, or NULL */
+ char *paralleldmlsafety; /* parallel dml safety */
Node *viewQuery; /* materialized view's SELECT query */
bool skipData; /* true for WITH NO DATA */
} IntoClause;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f836acf876..05222faccd 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -139,6 +139,7 @@ PG_KEYWORD("dictionary", DICTIONARY, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("disable", DISABLE_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("discard", DISCARD, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("distinct", DISTINCT, RESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("dml", DML, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("do", DO, RESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("document", DOCUMENT_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("domain", DOMAIN_P, UNRESERVED_KEYWORD, BARE_LABEL)
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index f772855ac6..5ea225ac2d 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -108,7 +108,8 @@ extern Relation RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind);
+ char relkind,
+ char relparalleldml);
/*
* Routines to manage assignment of new relfilenode to a relation
diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
index 1bae1e5438..e1f5678eef 100644
--- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
+++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
@@ -276,6 +276,9 @@ get_altertable_subcmdtypes(PG_FUNCTION_ARGS)
case AT_NoForceRowSecurity:
strtype = "NO FORCE ROW SECURITY";
break;
+ case AT_ParallelDMLSafety:
+ strtype = "PARALLEL DML SAFETY";
+ break;
case AT_GenericOptions:
strtype = "SET OPTIONS";
break;
--
2.27.0
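For reference, the user-visible surface added by the grammar, pg_dump, and catalog changes above would look roughly like the following. This is an illustrative sketch against a server with these patches applied; the table names are made up, and `pg_get_table_max_parallel_dml_hazard()` comes from the follow-on patch below:

```sql
-- Declare parallel DML safety at table creation time (new ParallelDMLSafety
-- production in create_as_target / CreateForeignTableStmt).
CREATE TABLE orders (id int PRIMARY KEY, total numeric) PARALLEL DML SAFE;

-- Change the marking later; this is the same form pg_dump now emits
-- in dumpTableSchema().
ALTER TABLE orders PARALLEL DML RESTRICTED;
ALTER FOREIGN TABLE orders_remote PARALLEL DML DEFAULT;

-- Inspect the new pg_class column directly, or via the helper function.
SELECT relparalleldml FROM pg_catalog.pg_class WHERE relname = 'orders';
SELECT pg_get_table_max_parallel_dml_hazard('orders');
```

psql's `\d+` and `\dt+` would then show the marking as a "Parallel DML" footer/column, per the describe.c changes.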
Attachment: v15-0005-regression-test-and-doc-updates.patch (application/octet-stream)
From 86c0b68d9d6c2c4ec4d42b97d1f8fa4677adb475 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Fri, 30 Jul 2021 10:06:04 +0800
Subject: [PATCH] regression-test-and-doc-updates
---
contrib/test_decoding/expected/ddl.out | 4 +
doc/src/sgml/func.sgml | 61 ++
doc/src/sgml/ref/alter_foreign_table.sgml | 13 +
doc/src/sgml/ref/alter_function.sgml | 2 +-
doc/src/sgml/ref/alter_table.sgml | 12 +
doc/src/sgml/ref/create_foreign_table.sgml | 39 +
doc/src/sgml/ref/create_table.sgml | 44 ++
doc/src/sgml/ref/create_table_as.sgml | 38 +
src/test/regress/expected/alter_table.out | 2 +
src/test/regress/expected/compression_1.out | 9 +
src/test/regress/expected/copy2.out | 1 +
src/test/regress/expected/create_table.out | 14 +
.../regress/expected/create_table_like.out | 8 +
src/test/regress/expected/domain.out | 2 +
src/test/regress/expected/foreign_data.out | 42 ++
src/test/regress/expected/identity.out | 1 +
src/test/regress/expected/inherit.out | 13 +
src/test/regress/expected/insert.out | 12 +
src/test/regress/expected/insert_parallel.out | 713 ++++++++++++++++++
src/test/regress/expected/psql.out | 58 +-
src/test/regress/expected/publication.out | 4 +
.../regress/expected/replica_identity.out | 1 +
src/test/regress/expected/rowsecurity.out | 1 +
src/test/regress/expected/rules.out | 3 +
src/test/regress/expected/stats_ext.out | 1 +
src/test/regress/expected/triggers.out | 1 +
src/test/regress/expected/update.out | 1 +
src/test/regress/output/tablespace.source | 2 +
src/test/regress/parallel_schedule | 1 +
src/test/regress/sql/insert_parallel.sql | 381 ++++++++++
30 files changed, 1456 insertions(+), 28 deletions(-)
create mode 100644 src/test/regress/expected/insert_parallel.out
create mode 100644 src/test/regress/sql/insert_parallel.sql
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c78..5c9b5ea3b9 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -446,6 +446,7 @@ WITH (user_catalog_table = true)
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -460,6 +461,7 @@ ALTER TABLE replication_metadata RESET (user_catalog_table);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
INSERT INTO replication_metadata(relation, options)
VALUES ('bar', ARRAY['a', 'b']);
@@ -473,6 +475,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = true);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -492,6 +495,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
rewritemeornot | integer | | | | plain | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d83f39f7cd..6679ad9974 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -23940,6 +23940,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
Undefined objects are identified with <literal>NULL</literal> values.
</para></entry>
</row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_parallel_dml_safety</primary>
+ </indexterm>
+ <function>pg_get_table_parallel_dml_safety</function> ( <parameter>table_name</parameter> <type>regclass</type> )
+ <returnvalue>record</returnvalue>
+ ( <parameter>objid</parameter> <type>oid</type>,
+ <parameter>classid</parameter> <type>oid</type>,
+ <parameter>proparallel</parameter> <type>char</type> )
+ </para>
+ <para>
+ Returns a row containing enough information to uniquely identify the
+ parallel unsafe/restricted table-related objects from which the
+ table's parallel DML safety is determined. The user can use this
+ information during development in order to accurately declare a
+ table's parallel DML safety, or to identify any problematic objects
+ if parallel DML fails or behaves unexpectedly. Note that when the
+ use of an object-related parallel unsafe/restricted function is
+ detected, both the function OID and the object OID are returned.
+ <parameter>classid</parameter> is the OID of the system catalog
+ containing the object;
+ <parameter>objid</parameter> is the OID of the object itself.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_max_parallel_dml_hazard</primary>
+ </indexterm>
+ <function>pg_get_table_max_parallel_dml_hazard</function> ( <type>regclass</type> )
+ <returnvalue>char</returnvalue>
+ </para>
+ <para>
+ Returns the worst parallel DML safety hazard that can be found in the
+ given relation:
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>s</literal> safe
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>r</literal> restricted
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>u</literal> unsafe
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ Users can use this function as a quick check, without needing to
+ identify the specific parallel unsafe/restricted objects involved.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml
index 7ca03f3ac9..58f1c0d567 100644
--- a/doc/src/sgml/ref/alter_foreign_table.sgml
+++ b/doc/src/sgml/ref/alter_foreign_table.sgml
@@ -29,6 +29,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
RENAME TO <replaceable class="parameter">new_name</replaceable>
ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
SET SCHEMA <replaceable class="parameter">new_schema</replaceable>
+ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -299,6 +301,17 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See the similar form of <link linkend="sql-altertable"><command>ALTER TABLE</command></link>
+ for more details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml
index 0ee756a94d..1a0fd3cd88 100644
--- a/doc/src/sgml/ref/alter_function.sgml
+++ b/doc/src/sgml/ref/alter_function.sgml
@@ -38,7 +38,7 @@ ALTER FUNCTION <replaceable>name</replaceable> [ ( [ [ <replaceable class="param
IMMUTABLE | STABLE | VOLATILE
[ NOT ] LEAKPROOF
[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
- PARALLEL { UNSAFE | RESTRICTED | SAFE }
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
COST <replaceable class="parameter">execution_cost</replaceable>
ROWS <replaceable class="parameter">result_rows</replaceable>
SUPPORT <replaceable class="parameter">support_function</replaceable>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 81291577f8..99bd75648f 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -37,6 +37,8 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
ATTACH PARTITION <replaceable class="parameter">partition_name</replaceable> { FOR VALUES <replaceable class="parameter">partition_bound_spec</replaceable> | DEFAULT }
ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
DETACH PARTITION <replaceable class="parameter">partition_name</replaceable> [ CONCURRENTLY | FINALIZE ]
+ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -1030,6 +1032,16 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See <link linkend="sql-createtable"><command>CREATE TABLE</command></link> for details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index f9477efe58..7a8a7ddbec 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -27,6 +27,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
[, ... ]
] )
[ INHERITS ( <replaceable>parent_table</replaceable> [, ... ] ) ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -36,6 +37,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
| <replaceable>table_constraint</replaceable> }
[, ... ]
) ] <replaceable class="parameter">partition_bound_spec</replaceable>
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -290,6 +292,43 @@ CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ]
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, constraints, etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">server_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 15aed2f251..7abc527bf9 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -33,6 +33,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
OF <replaceable class="parameter">type_name</replaceable> [ (
@@ -45,6 +46,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
PARTITION OF <replaceable class="parameter">parent_table</replaceable> [ (
@@ -57,6 +59,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
<phrase>where <replaceable class="parameter">column_constraint</replaceable> is:</phrase>
@@ -1336,6 +1339,47 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="sql-createtable-paralleldmlsafety">
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+      <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+      parallel modification is determined automatically; this is the default.
+      <literal>PARALLEL DML UNSAFE</literal> indicates that the table's data
+      cannot be modified in parallel mode, forcing a serial execution plan for
+      DML statements on the table. <literal>PARALLEL DML RESTRICTED</literal>
+      indicates that the table's data can be modified in parallel mode, but
+      only by the parallel group leader. <literal>PARALLEL DML SAFE</literal>
+      indicates that the table's data can be modified in parallel mode without
+      restriction. Note that <productname>PostgreSQL</productname> currently
+      does not support data modification by parallel workers.
+ </para>
+
+ <para>
+      Note that for a partitioned table, <literal>PARALLEL DML DEFAULT</literal>
+      is the same as <literal>PARALLEL DML UNSAFE</literal>, i.e., the data in
+      the table cannot be modified in parallel mode.
+ </para>
+
+ <para>
+      A table should be labeled <literal>PARALLEL DML UNSAFE</literal> or
+      <literal>RESTRICTED</literal> if any parallel-unsafe or
+      parallel-restricted function could be executed while modifying its data
+      (e.g., functions used in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+      To assist in correctly labeling the parallel DML safety level of a table,
+      <productname>PostgreSQL</productname> provides utility functions that may
+      be used during application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>USING INDEX TABLESPACE <replaceable class="parameter">tablespace_name</replaceable></literal></term>
<listitem>
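For reviewers, a quick usage sketch of the new syntax and the helper functions the documentation above names. This assumes the patch is applied; function return values are not shown since they are defined by the patch, not guessed here:

```sql
-- Label a table explicitly at creation time.
CREATE TABLE measurements (id int PRIMARY KEY, payload text)
    PARALLEL DML SAFE;

-- Query the declared label of a table.
SELECT pg_get_table_parallel_dml_safety('measurements'::regclass);

-- For a table left at PARALLEL DML DEFAULT, inspect the worst hazard
-- found among its triggers, index expressions, constraints, etc.
CREATE TABLE checked (a int CHECK (a > 0));
SELECT pg_get_table_max_parallel_dml_hazard('checked'::regclass);
```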
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index 07558ab56c..2e7851db44 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -27,6 +27,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+ [ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
AS <replaceable>query</replaceable>
[ WITH [ NO ] DATA ]
</synopsis>
@@ -223,6 +224,43 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+      <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+      parallel modification is determined automatically; this is the default.
+      <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+      table cannot be modified in parallel mode, forcing a serial execution
+      plan for DML statements operating on the table.
+      <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+      table can be modified in parallel mode, but the modification is
+      restricted to the parallel group leader. <literal>PARALLEL DML
+      SAFE</literal> indicates that the data in the table can be modified in
+      parallel mode without restriction. Note that
+      <productname>PostgreSQL</productname> currently does not support data
+      modification by parallel workers.
+ </para>
+
+ <para>
+      A table should be labeled parallel DML unsafe or restricted if any
+      parallel-unsafe or parallel-restricted function could be executed while
+      modifying its data (e.g., in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+      To assist in correctly labeling the parallel DML safety level of a table,
+      <productname>PostgreSQL</productname> provides utility functions that may
+      be used during application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable>query</replaceable></term>
<listitem>
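The <command>CREATE TABLE AS</command> form accepts the same clause; a sketch of how a reviewer might exercise it (again assuming the patch is applied; the table and query here are illustrative, not taken from the patch):

```sql
-- DEFAULT asks the server to check parallel DML safety automatically.
CREATE TABLE series_copy PARALLEL DML DEFAULT AS
    SELECT g AS a, g::text AS b
    FROM generate_series(1, 100) AS g;

-- UNSAFE forces a serial plan for DML statements on the new table.
CREATE TABLE series_serial PARALLEL DML UNSAFE AS
    SELECT g AS a FROM generate_series(1, 100) AS g;
```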
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 8dcb00ac67..1c360e04bf 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2206,6 +2206,7 @@ alter table test_storage alter column a set storage external;
b | integer | | | 0 | plain | |
Indexes:
"test_storage_idx" btree (b, a)
+Parallel DML: default
\d+ test_storage_idx
Index "public.test_storage_idx"
@@ -4193,6 +4194,7 @@ ALTER TABLE range_parted2 DETACH PARTITION part_rp CONCURRENTLY;
a | integer | | | | plain | |
Partition key: RANGE (a)
Number of partitions: 0
+Parallel DML: default
-- constraint should be created
\d part_rp
diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out
index 1ce2962d55..8559e94226 100644
--- a/src/test/regress/expected/compression_1.out
+++ b/src/test/regress/expected/compression_1.out
@@ -12,6 +12,7 @@ INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
ERROR: compression method lz4 not supported
@@ -51,6 +52,7 @@ SELECT * INTO cmmove1 FROM cmdata;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | | |
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmmove1;
pg_column_compression
@@ -138,6 +140,7 @@ CREATE TABLE cmdata2 (f1 int);
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
\d+ cmdata2
@@ -145,6 +148,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
\d+ cmdata2
@@ -152,6 +156,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
--changing column storage should not impact the compression method
--but the data should not be compressed
@@ -162,6 +167,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | pglz | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
\d+ cmdata2
@@ -169,6 +175,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | pglz | |
+Parallel DML: default
INSERT INTO cmdata2 VALUES (repeat('123456789', 800));
SELECT pg_column_compression(f1) FROM cmdata2;
@@ -249,6 +256,7 @@ INSERT INTO cmdata VALUES (repeat('123456789', 4004));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmdata;
pg_column_compression
@@ -263,6 +271,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION default;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | | |
+Parallel DML: default
-- test alter compression method for materialized views
ALTER MATERIALIZED VIEW compressmv ALTER COLUMN x SET COMPRESSION lz4;
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 5f3685e9ef..46f817417a 100644
--- a/src/test/regress/expected/copy2.out
+++ b/src/test/regress/expected/copy2.out
@@ -519,6 +519,7 @@ alter table check_con_tbl add check (check_con_function(check_con_tbl.*));
f1 | integer | | | | plain | |
Check constraints:
"check_con_tbl_check" CHECK (check_con_function(check_con_tbl.*))
+Parallel DML: default
copy check_con_tbl from stdin;
NOTICE: input = {"f1":1}
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index 96bf426d98..b7e2a535cd 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -505,6 +505,7 @@ Number of partitions: 0
b | text | | | | extended | |
Partition key: RANGE (((a + 1)), substr(b, 1, 5))
Number of partitions: 0
+Parallel DML: default
INSERT INTO partitioned2 VALUES (1, 'hello');
ERROR: no partition of relation "partitioned2" found for row
@@ -518,6 +519,7 @@ CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO
b | text | | | | extended | |
Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc')
Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text))))
+Parallel DML: default
DROP TABLE partitioned, partitioned2;
-- check reference to partitioned table's rowtype in partition descriptor
@@ -559,6 +561,7 @@ select * from partitioned where partitioned = '(1,2)'::partitioned;
b | integer | | | | plain | |
Partition of: partitioned FOR VALUES IN ('(1,2)')
Partition constraint: (((partitioned1.*)::partitioned IS DISTINCT FROM NULL) AND ((partitioned1.*)::partitioned = '(1,2)'::partitioned))
+Parallel DML: default
drop table partitioned;
-- check that dependencies of partition columns are handled correctly
@@ -618,6 +621,7 @@ Partitions: part_null FOR VALUES IN (NULL),
part_p1 FOR VALUES IN (1),
part_p2 FOR VALUES IN (2),
part_p3 FOR VALUES IN (3)
+Parallel DML: default
-- forbidden expressions for partition bound with list partitioned table
CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES IN (somename);
@@ -1064,6 +1068,7 @@ drop table test_part_coll_posix;
b | integer | | not null | 1 | plain | |
Partition of: parted FOR VALUES IN ('b')
Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))
+Parallel DML: default
-- Both partition bound and partition key in describe output
\d+ part_c
@@ -1076,6 +1081,7 @@ Partition of: parted FOR VALUES IN ('c')
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text))
Partition key: RANGE (b)
Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
+Parallel DML: default
-- a level-2 partition's constraint will include the parent's expressions
\d+ part_c_1_10
@@ -1086,6 +1092,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
b | integer | | not null | 0 | plain | |
Partition of: part_c FOR VALUES FROM (1) TO (10)
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10))
+Parallel DML: default
-- Show partition count in the parent's describe output
-- Tempted to include \d+ output listing partitions with bound info but
@@ -1120,6 +1127,7 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL))
+Parallel DML: default
DROP TABLE unbounded_range_part;
CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE);
@@ -1132,6 +1140,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1))
+Parallel DML: default
CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE);
\d+ range_parted4_2
@@ -1143,6 +1152,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7))))
+Parallel DML: default
CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE);
\d+ range_parted4_3
@@ -1154,6 +1164,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9))
+Parallel DML: default
DROP TABLE range_parted4;
-- user-defined operator class in partition key
@@ -1190,6 +1201,7 @@ SELECT obj_description('parted_col_comment'::regclass);
b | text | | | | extended | |
Partition key: LIST (a)
Number of partitions: 0
+Parallel DML: default
DROP TABLE parted_col_comment;
-- list partitioning on array type column
@@ -1202,6 +1214,7 @@ CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}');
a | integer[] | | | | extended | |
Partition of: arrlp FOR VALUES IN ('{1}', '{2}')
Partition constraint: ((a IS NOT NULL) AND ((a = '{1}'::integer[]) OR (a = '{2}'::integer[])))
+Parallel DML: default
DROP TABLE arrlp;
-- partition on boolean column
@@ -1216,6 +1229,7 @@ create table boolspart_f partition of boolspart for values in (false);
Partition key: LIST (a)
Partitions: boolspart_f FOR VALUES IN (false),
boolspart_t FOR VALUES IN (true)
+Parallel DML: default
drop table boolspart;
-- partitions mixing temporary and permanent relations
diff --git a/src/test/regress/expected/create_table_like.out b/src/test/regress/expected/create_table_like.out
index 7ad5fafe93..da59d8b3c2 100644
--- a/src/test/regress/expected/create_table_like.out
+++ b/src/test/regress/expected/create_table_like.out
@@ -333,6 +333,7 @@ CREATE TABLE ctlt12_storage (LIKE ctlt1 INCLUDING STORAGE, LIKE ctlt2 INCLUDING
a | text | | not null | | main | |
b | text | | | | extended | |
c | text | | | | external | |
+Parallel DML: default
CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDING COMMENTS);
\d+ ctlt12_comments
@@ -342,6 +343,7 @@ CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDIN
a | text | | not null | | extended | | A
b | text | | | | extended | | B
c | text | | | | extended | | C
+Parallel DML: default
CREATE TABLE ctlt1_inh (LIKE ctlt1 INCLUDING CONSTRAINTS INCLUDING COMMENTS) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -356,6 +358,7 @@ NOTICE: merging constraint "ctlt1_a_check" with inherited definition
Check constraints:
"ctlt1_a_check" CHECK (length(a) > 2)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt1_inh'::regclass;
description
@@ -378,6 +381,7 @@ Check constraints:
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1,
ctlt3
+Parallel DML: default
CREATE TABLE ctlt13_like (LIKE ctlt3 INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS INCLUDING STORAGE) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -395,6 +399,7 @@ Check constraints:
"ctlt3_a_check" CHECK (length(a) < 5)
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt13_like'::regclass;
description
@@ -418,6 +423,7 @@ Check constraints:
Statistics objects:
"public"."ctlt_all_a_b_stat" ON a, b FROM ctlt_all
"public"."ctlt_all_expr_stat" ON ((a || b)) FROM ctlt_all
+Parallel DML: default
SELECT c.relname, objsubid, description FROM pg_description, pg_index i, pg_class c WHERE classoid = 'pg_class'::regclass AND objoid = i.indexrelid AND c.oid = i.indexrelid AND i.indrelid = 'ctlt_all'::regclass ORDER BY c.relname, objsubid;
relname | objsubid | description
@@ -458,6 +464,7 @@ Check constraints:
Statistics objects:
"public"."pg_attrdef_a_b_stat" ON a, b FROM public.pg_attrdef
"public"."pg_attrdef_expr_stat" ON ((a || b)) FROM public.pg_attrdef
+Parallel DML: default
DROP TABLE public.pg_attrdef;
-- Check that LIKE isn't confused when new table masks the old, either
@@ -480,6 +487,7 @@ Check constraints:
Statistics objects:
"ctl_schema"."ctlt1_a_b_stat" ON a, b FROM ctlt1
"ctl_schema"."ctlt1_expr_stat" ON ((a || b)) FROM ctlt1
+Parallel DML: default
ROLLBACK;
DROP TABLE ctlt1, ctlt2, ctlt3, ctlt4, ctlt12_storage, ctlt12_comments, ctlt1_inh, ctlt13_inh, ctlt13_like, ctlt_all, ctla, ctlb CASCADE;
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..342e9d234d 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -276,6 +276,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision
WHERE (dcomptable.d1).i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
@@ -413,6 +414,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1[1].r = dcomptable.d1[1].r - 1::double precision, d1[1].i = dcomptable.d1[1].i + 1::double precision
WHERE dcomptable.d1[1].i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index 426080ae39..330f25ea9e 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -735,6 +735,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
\det+
List of foreign tables
@@ -857,6 +858,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- can't change the column type if it's used elsewhere
CREATE TABLE use_ft1_column_type (x ft1);
@@ -1396,6 +1398,7 @@ CREATE FOREIGN TABLE ft2 () INHERITS (fd_pt1)
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1407,6 +1410,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
DROP FOREIGN TABLE ft2;
\d+ fd_pt1
@@ -1416,6 +1420,7 @@ DROP FOREIGN TABLE ft2;
c1 | integer | | not null | | plain | |
c2 | text | | | | extended | |
c3 | date | | | | plain | |
+Parallel DML: default
CREATE FOREIGN TABLE ft2 (
c1 integer NOT NULL,
@@ -1431,6 +1436,7 @@ CREATE FOREIGN TABLE ft2 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
\d+ fd_pt1
@@ -1441,6 +1447,7 @@ ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1452,6 +1459,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
CREATE TABLE ct3() INHERITS(ft2);
CREATE FOREIGN TABLE ft3 (
@@ -1475,6 +1483,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1484,6 +1493,7 @@ Child tables: ct3,
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1494,6 +1504,7 @@ Inherits: ft2
c3 | date | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- add attributes recursively
ALTER TABLE fd_pt1 ADD COLUMN c4 integer;
@@ -1514,6 +1525,7 @@ ALTER TABLE fd_pt1 ADD COLUMN c8 integer;
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1532,6 +1544,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1546,6 +1559,7 @@ Child tables: ct3,
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1561,6 +1575,7 @@ Inherits: ft2
c8 | integer | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- alter attributes recursively
ALTER TABLE fd_pt1 ALTER COLUMN c4 SET DEFAULT 0;
@@ -1588,6 +1603,7 @@ ALTER TABLE fd_pt1 ALTER COLUMN c8 SET STORAGE EXTERNAL;
c7 | integer | | | | plain | |
c8 | text | | | | external | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1606,6 +1622,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- drop attributes recursively
ALTER TABLE fd_pt1 DROP COLUMN c4;
@@ -1621,6 +1638,7 @@ ALTER TABLE fd_pt1 DROP COLUMN c8;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1634,6 +1652,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- add constraints recursively
ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk1 CHECK (c1 > 0) NO INHERIT;
@@ -1661,6 +1680,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1676,6 +1696,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
DROP FOREIGN TABLE ft2; -- ERROR
ERROR: cannot drop foreign table ft2 because other objects depend on it
@@ -1708,6 +1729,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1721,6 +1743,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- drop constraints recursively
ALTER TABLE fd_pt1 DROP CONSTRAINT fd_pt1chk1 CASCADE;
@@ -1738,6 +1761,7 @@ ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk3 CHECK (c2 <> '') NOT VALID;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text) NOT VALID
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1752,6 +1776,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- VALIDATE CONSTRAINT need do nothing on foreign tables
ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
@@ -1765,6 +1790,7 @@ ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1779,6 +1805,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- changes name of an attribute recursively
ALTER TABLE fd_pt1 RENAME COLUMN c1 TO f1;
@@ -1796,6 +1823,7 @@ ALTER TABLE fd_pt1 RENAME CONSTRAINT fd_pt1chk3 TO f2_check;
Check constraints:
"f2_check" CHECK (f2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1810,6 +1838,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- TRUNCATE doesn't work on foreign tables, either directly or recursively
TRUNCATE ft2; -- ERROR
@@ -1859,6 +1888,7 @@ CREATE FOREIGN TABLE fd_pt2_1 PARTITION OF fd_pt2 FOR VALUES IN (1)
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1871,6 +1901,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- partition cannot have additional columns
DROP FOREIGN TABLE fd_pt2_1;
@@ -1890,6 +1921,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c4 | character(1) | | | | | extended | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: table "fd_pt2_1" contains column "c4" not found in parent "fd_pt2"
@@ -1904,6 +1936,7 @@ DROP FOREIGN TABLE fd_pt2_1;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
CREATE FOREIGN TABLE fd_pt2_1 (
c1 integer NOT NULL,
@@ -1919,6 +1952,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- no attach partition validation occurs for foreign tables
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
@@ -1931,6 +1965,7 @@ ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1943,6 +1978,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot add column to a partition
ALTER TABLE fd_pt2_1 ADD c4 char;
@@ -1959,6 +1995,7 @@ ALTER TABLE fd_pt2_1 ADD CONSTRAINT p21chk CHECK (c2 <> '');
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1973,6 +2010,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot drop inherited NOT NULL constraint from a partition
ALTER TABLE fd_pt2_1 ALTER c1 DROP NOT NULL;
@@ -1989,6 +2027,7 @@ ALTER TABLE fd_pt2 ALTER c2 SET NOT NULL;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2001,6 +2040,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: column "c2" in child table must be marked NOT NULL
@@ -2019,6 +2059,7 @@ Partition key: LIST (c1)
Check constraints:
"fd_pt2chk1" CHECK (c1 > 0)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2031,6 +2072,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: child table is missing constraint "fd_pt2chk1"
diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out
index 99811570b7..6908fd141b 100644
--- a/src/test/regress/expected/identity.out
+++ b/src/test/regress/expected/identity.out
@@ -506,6 +506,7 @@ TABLE itest8;
f3 | integer | | not null | generated by default as identity | plain | |
f4 | bigint | | not null | generated always as identity | plain | |
f5 | bigint | | | | plain | |
+Parallel DML: default
\d itest8_f2_seq
Sequence "public.itest8_f2_seq"
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 06f44287bc..1c0da28d78 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1059,6 +1059,7 @@ ALTER TABLE inhts RENAME d TO dd;
dd | integer | | | | plain | |
Inherits: inht1,
inhs1
+Parallel DML: default
DROP TABLE inhts;
-- Test for renaming in diamond inheritance
@@ -1079,6 +1080,7 @@ ALTER TABLE inht1 RENAME aa TO aaa;
z | integer | | | | plain | |
Inherits: inht2,
inht3
+Parallel DML: default
CREATE TABLE inhts (d int) INHERITS (inht2, inhs1);
NOTICE: merging multiple inherited definitions of column "b"
@@ -1096,6 +1098,7 @@ ERROR: cannot rename inherited column "b"
d | integer | | | | plain | |
Inherits: inht2,
inhs1
+Parallel DML: default
WITH RECURSIVE r AS (
SELECT 'inht1'::regclass AS inhrelid
@@ -1142,6 +1145,7 @@ CREATE TABLE test_constraints_inh () INHERITS (test_constraints);
Indexes:
"test_constraints_val1_val2_key" UNIQUE CONSTRAINT, btree (val1, val2)
Child tables: test_constraints_inh
+Parallel DML: default
ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key;
\d+ test_constraints
@@ -1152,6 +1156,7 @@ ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Child tables: test_constraints_inh
+Parallel DML: default
\d+ test_constraints_inh
Table "public.test_constraints_inh"
@@ -1161,6 +1166,7 @@ Child tables: test_constraints_inh
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Inherits: test_constraints
+Parallel DML: default
DROP TABLE test_constraints_inh;
DROP TABLE test_constraints;
@@ -1177,6 +1183,7 @@ CREATE TABLE test_ex_constraints_inh () INHERITS (test_ex_constraints);
Indexes:
"test_ex_constraints_c_excl" EXCLUDE USING gist (c WITH &&)
Child tables: test_ex_constraints_inh
+Parallel DML: default
ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
\d+ test_ex_constraints
@@ -1185,6 +1192,7 @@ ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Child tables: test_ex_constraints_inh
+Parallel DML: default
\d+ test_ex_constraints_inh
Table "public.test_ex_constraints_inh"
@@ -1192,6 +1200,7 @@ Child tables: test_ex_constraints_inh
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Inherits: test_ex_constraints
+Parallel DML: default
DROP TABLE test_ex_constraints_inh;
DROP TABLE test_ex_constraints;
@@ -1208,6 +1217,7 @@ Indexes:
"test_primary_constraints_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "test_foreign_constraints" CONSTRAINT "test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
+Parallel DML: default
\d+ test_foreign_constraints
Table "public.test_foreign_constraints"
@@ -1217,6 +1227,7 @@ Referenced by:
Foreign-key constraints:
"test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
Child tables: test_foreign_constraints_inh
+Parallel DML: default
ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id1_fkey;
\d+ test_foreign_constraints
@@ -1225,6 +1236,7 @@ ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Child tables: test_foreign_constraints_inh
+Parallel DML: default
\d+ test_foreign_constraints_inh
Table "public.test_foreign_constraints_inh"
@@ -1232,6 +1244,7 @@ Child tables: test_foreign_constraints_inh
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Inherits: test_foreign_constraints
+Parallel DML: default
DROP TABLE test_foreign_constraints_inh;
DROP TABLE test_foreign_constraints;
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..9e4a1bf886 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -177,6 +177,7 @@ Rules:
irule3 AS
ON INSERT TO inserttest2 DO INSERT INTO inserttest (f4[1].if1, f4[1].if2[2]) SELECT new.f1,
new.f2
+Parallel DML: default
drop table inserttest2;
drop table inserttest;
@@ -482,6 +483,7 @@ Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'),
part_null FOR VALUES IN (NULL),
part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED,
part_default DEFAULT, PARTITIONED
+Parallel DML: default
-- cleanup
drop table range_parted, list_parted;
@@ -497,6 +499,7 @@ create table part_default partition of list_parted default;
a | integer | | | | plain | |
Partition of: list_parted DEFAULT
No partition constraint
+Parallel DML: default
insert into part_default values (null);
insert into part_default values (1);
@@ -888,6 +891,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE),
mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE),
mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
+Parallel DML: default
\d+ mcrparted1_lt_b
Table "public.mcrparted1_lt_b"
@@ -897,6 +901,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
+Parallel DML: default
\d+ mcrparted2_b
Table "public.mcrparted2_b"
@@ -906,6 +911,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text) AND (a < 'c'::text))
+Parallel DML: default
\d+ mcrparted3_c_to_common
Table "public.mcrparted3_c_to_common"
@@ -915,6 +921,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text) AND (a < 'common'::text))
+Parallel DML: default
\d+ mcrparted4_common_lt_0
Table "public.mcrparted4_common_lt_0"
@@ -924,6 +931,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MINVALUE) TO ('common', 0)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b < 0))
+Parallel DML: default
\d+ mcrparted5_common_0_to_10
Table "public.mcrparted5_common_0_to_10"
@@ -933,6 +941,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 0) TO ('common', 10)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 0) AND (b < 10))
+Parallel DML: default
\d+ mcrparted6_common_ge_10
Table "public.mcrparted6_common_ge_10"
@@ -942,6 +951,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 10))
+Parallel DML: default
\d+ mcrparted7_gt_common_lt_d
Table "public.mcrparted7_gt_common_lt_d"
@@ -951,6 +961,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::text) AND (a < 'd'::text))
+Parallel DML: default
\d+ mcrparted8_ge_d
Table "public.mcrparted8_ge_d"
@@ -960,6 +971,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text))
+Parallel DML: default
insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10),
('comm', -10), ('common', -10), ('common', 0), ('common', 10),
diff --git a/src/test/regress/expected/insert_parallel.out b/src/test/regress/expected/insert_parallel.out
new file mode 100644
index 0000000000..28eb537687
--- /dev/null
+++ b/src/test/regress/expected/insert_parallel.out
@@ -0,0 +1,713 @@
+--
+-- PARALLEL
+--
+--
+-- START: setup some tables and data needed by the tests.
+--
+-- Setup - index expressions test
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+-- Setup - column default tests
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+--
+-- END: setup some tables and data needed by the tests.
+--
+begin;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_trigger | r
+ pg_proc | r
+ pg_trigger | r
+(4 rows)
+
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_safe
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------
+ Insert on para_insert_p1
+ -> Seq Scan on tenk1
+(2 rows)
+
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------------------------
+ Insert on para_insert_with_parallel_unsafe
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------------
+ Insert on para_insert_with_parallel_restricted
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_auto
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+NOTICE: truncate cascades to table "para_insert_f1"
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+ QUERY PLAN
+----------------------------------------------
+ Insert on para_insert_p1
+ -> Gather Merge
+ Workers Planned: 4
+ -> Sort
+ Sort Key: tenk1.unique1
+ -> Parallel Seq Scan on tenk1
+(6 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_data1
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+ Filter: (a = 10)
+(5 rows)
+
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+ data
+------
+ 10
+(1 row)
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- because the FK check would need to assign a new commandId, which
+-- is not currently supported within a parallel worker)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_f1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_conflict_table
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+ QUERY PLAN
+------------------------------------------------------
+ Insert on test_conflict_table
+ Conflict Resolution: UPDATE
+ Conflict Arbiter Indexes: test_conflict_table_pkey
+ -> Seq Scan on test_data
+(4 rows)
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_index | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names2');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names2 select * from names;
+ QUERY PLAN
+-------------------------
+ Insert on names2
+ -> Seq Scan on names
+(2 rows)
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_index | r
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names4');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into names4 select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names4
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+ QUERY PLAN
+----------------------------------------
+ Insert on names5
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names6
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names6 select * from names order by last_name returning *;
+ index | first_name | last_name
+-------+------------+-------------
+ 2 | niels | bohr
+ 1 | albert | einstein
+ 4 | leonhard | euler
+ 8 | richard | feynman
+ 5 | stephen | hawking
+ 6 | isaac | newton
+ 3 | erwin | schrodinger
+ 7 | alan | turing
+(8 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names7
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ last_name_then_first_name
+---------------------------
+ bohr, niels
+ einstein, albert
+ euler, leonhard
+ feynman, richard
+ hawking, stephen
+ newton, isaac
+ schrodinger, erwin
+ turing, alan
+(8 rows)
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_class | r
+(1 row)
+
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into temp_names select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on temp_names
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into temp_names select * from names;
+--
+-- Test INSERT with column defaults
+--
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on testdef
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+ a | b | c | d
+----+----+----+----
+ 1 | 2 | 10 | 8
+ 2 | 4 | 10 | 16
+ 3 | 6 | 10 | 24
+ 4 | 8 | 10 | 32
+ 5 | 10 | 10 | 40
+ 6 | 12 | 10 | 48
+ 7 | 14 | 10 | 56
+ 8 | 16 | 10 | 64
+ 9 | 18 | 10 | 72
+ 10 | 20 | 10 | 80
+(10 rows)
+
+truncate testdef;
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+alter table parttable1 parallel dml safe;
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on parttable1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+ count
+-------
+ 5000
+(1 row)
+
+select count(*) from parttable1_2;
+ count
+-------
+ 5000
+(1 row)
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on table_check_b
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+(0 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ s
+(1 row)
+
+explain (costs off) insert into names_with_safe_trigger select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names_with_safe_trigger
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into names_with_safe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_safe
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+ QUERY PLAN
+-------------------------------------
+ Insert on names_with_unsafe_trigger
+ -> Seq Scan on names
+(2 rows)
+
+insert into names_with_unsafe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_unsafe
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------------
+ Insert on part_unsafe_trigger
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+create table dom_table_u (x inotnull_u, y int);
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on dom_table_u
+ -> Seq Scan on tenk1
+(2 rows)
+
+rollback;
+--
+-- Clean up anything not created in the transaction
+--
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 1b2f6bc418..1fedebcd9b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2818,6 +2818,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2825,6 +2826,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\set HIDE_TABLEAM off
\d+ tbl_heap_psql
@@ -2834,6 +2836,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap_psql
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2842,50 +2845,51 @@ Access method: heap_psql
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap
+Parallel DML: default
-- AM is displayed for tables, indexes and materialized views.
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | | default | 0 bytes |
(4 rows)
\dt+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+---------------+-------+----------------------+-------------+---------------+---------+-------------
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+---------------+-------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
(2 rows)
\dm+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
(1 row)
-- But not for views and sequences.
\dv+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+----------------+------+----------------------+-------------+---------+-------------
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+----------------+------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(1 row)
\set HIDE_TABLEAM on
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(4 rows)
RESET ROLE;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4a5ef0bc24..f448b80856 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -85,6 +85,7 @@ Indexes:
"testpub_tbl2_pkey" PRIMARY KEY, btree (id)
Publications:
"testpub_foralltables"
+Parallel DML: default
\dRp+ testpub_foralltables
Publication testpub_foralltables
@@ -198,6 +199,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\d+ testpub_tbl1
Table "public.testpub_tbl1"
@@ -211,6 +213,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\dRp+ testpub_default
Publication testpub_default
@@ -236,6 +239,7 @@ Indexes:
Publications:
"testpib_ins_trunct"
"testpub_fortbl"
+Parallel DML: default
-- permissions
SET ROLE regress_publication_user2;
diff --git a/src/test/regress/expected/replica_identity.out b/src/test/regress/expected/replica_identity.out
index 79002197a7..8fce774332 100644
--- a/src/test/regress/expected/replica_identity.out
+++ b/src/test/regress/expected/replica_identity.out
@@ -171,6 +171,7 @@ Indexes:
"test_replica_identity_unique_defer" UNIQUE CONSTRAINT, btree (keya, keyb) DEFERRABLE
"test_replica_identity_unique_nondefer" UNIQUE CONSTRAINT, btree (keya, keyb)
Replica Identity: FULL
+Parallel DML: default
ALTER TABLE test_replica_identity REPLICA IDENTITY NOTHING;
SELECT relreplident FROM pg_class WHERE oid = 'test_replica_identity'::regclass;
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 89397e41f0..5e6807f90a 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -958,6 +958,7 @@ Policies:
Partitions: part_document_fiction FOR VALUES FROM (11) TO (12),
part_document_nonfiction FOR VALUES FROM (99) TO (100),
part_document_satire FOR VALUES FROM (55) TO (56)
+Parallel DML: default
SELECT * FROM pg_policies WHERE schemaname = 'regress_rls_schema' AND tablename like '%part_document%' ORDER BY policyname;
schemaname | tablename | policyname | permissive | roles | cmd | qual | with_check
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..0ae35e1662 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -3155,6 +3155,7 @@ Rules:
r3 AS
ON DELETE TO rules_src DO
NOTIFY rules_src_deletion
+Parallel DML: default
--
-- Ensure an aliased target relation for insert is correctly deparsed.
@@ -3183,6 +3184,7 @@ Rules:
r5 AS
ON UPDATE TO rules_src DO INSTEAD UPDATE rules_log trgt SET tag = 'updated'::text
WHERE trgt.f1 = new.f1
+Parallel DML: default
--
-- Also check multiassignment deparsing.
@@ -3206,6 +3208,7 @@ Rules:
WHERE trgt.f1 = new.f1
RETURNING new.f1,
new.f2
+Parallel DML: default
drop table rule_t1, rule_dest;
--
diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out
index 7fb54de53d..e4fa545c8c 100644
--- a/src/test/regress/expected/stats_ext.out
+++ b/src/test/regress/expected/stats_ext.out
@@ -145,6 +145,7 @@ ALTER STATISTICS ab1_a_b_stats SET STATISTICS -1;
b | integer | | | | plain | |
Statistics objects:
"public"."ab1_a_b_stats" ON a, b FROM ab1
+Parallel DML: default
-- partial analyze doesn't build stats either
ANALYZE ab1 (a);
diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out
index 5d124cf96f..9d39fad795 100644
--- a/src/test/regress/expected/triggers.out
+++ b/src/test/regress/expected/triggers.out
@@ -3483,6 +3483,7 @@ alter trigger parenttrig on parent rename to anothertrig;
Triggers:
parenttrig AFTER INSERT ON child FOR EACH ROW EXECUTE FUNCTION f()
Inherits: parent
+Parallel DML: default
drop table parent, child;
drop function f();
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index c809f88f54..d99b133644 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -753,6 +753,7 @@ create table part_def partition of range_parted default;
e | character varying | | | | extended | |
Partition of: range_parted DEFAULT
Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint)))))
+Parallel DML: default
insert into range_parted values ('c', 9);
-- ok
diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source
index 1bbe7e0323..19c65ce435 100644
--- a/src/test/regress/output/tablespace.source
+++ b/src/test/regress/output/tablespace.source
@@ -339,6 +339,7 @@ Indexes:
"part_a_idx" btree (a), tablespace "regress_tblspace"
Partitions: testschema.part1 FOR VALUES IN (1),
testschema.part2 FOR VALUES IN (2)
+Parallel DML: default
\d testschema.part1
Table "testschema.part1"
@@ -358,6 +359,7 @@ Partition of: testschema.part FOR VALUES IN (1)
Partition constraint: ((a IS NOT NULL) AND (a = 1))
Indexes:
"part1_a_idx" btree (a), tablespace "regress_tblspace"
+Parallel DML: default
\d testschema.part_a_idx
Partitioned index "testschema.part_a_idx"
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 7be89178f0..daf0bad4d5 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
+test: insert_parallel
# no relation related tests can be put in this group
test: publication subscription
diff --git a/src/test/regress/sql/insert_parallel.sql b/src/test/regress/sql/insert_parallel.sql
new file mode 100644
index 0000000000..9bf1809ccd
--- /dev/null
+++ b/src/test/regress/sql/insert_parallel.sql
@@ -0,0 +1,381 @@
+--
+-- PARALLEL
+--
+
+--
+-- START: setup some tables and data needed by the tests.
+--
+
+-- Setup - index expressions test
+
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+
+
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+
+-- Setup - column default tests
+
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+
+--
+-- END: setup some tables and data needed by the tests.
+--
+
+begin;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId
+-- and within a worker this is not currently supported)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+select pg_get_table_max_parallel_dml_hazard('names2');
+explain (costs off) insert into names2 select * from names;
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+select pg_get_table_max_parallel_dml_hazard('names4');
+explain (costs off) insert into names4 select * from names;
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+insert into names6 select * from names order by last_name returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+explain (costs off) insert into temp_names select * from names;
+insert into temp_names select * from names;
+
+--
+-- Test INSERT with column defaults
+--
+--
+
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+truncate testdef;
+
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+
+alter table parttable1 parallel dml safe;
+
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+select count(*) from parttable1_2;
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+explain (costs off) insert into names_with_safe_trigger select * from names;
+insert into names_with_safe_trigger select * from names;
+
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+insert into names_with_unsafe_trigger select * from names;
+
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+
+create table dom_table_u (x inotnull_u, y int);
+
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+
+rollback;
+
+--
+-- Clean up anything not created in the transaction
+--
+
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
--
2.27.0
On Tue, August 3, 2021 3:40 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:
Based on the discussion here, I implemented the auto-safety-check feature. Since most of the technical discussion happened here, I attached the patches to this thread.

The patches allow users to specify a parallel-safety option for both partitioned and non-partitioned relations. For non-partitioned relations, if the user didn't specify it, it is computed automatically; if the user has specified a parallel-safety option, we use that instead of computing the value ourselves. For a partitioned table, however, if the user didn't specify the parallel DML safety, it is treated as unsafe.

For non-partitioned relations, after computing the parallel safety of the relation during planning, we save it in the relation cache entry, and we invalidate the cached parallel safety for all relations in the relcache of a particular database whenever any function's parallel safety is changed.

To make it possible for the user to reset the safety back to an unspecified value and get the automatic safety check, a new option (temporarily named 'DEFAULT', in addition to safe/unsafe/restricted) is added for parallel DML safety.

To help users choose a parallel-safety option, a utility function "pg_get_table_parallel_dml_safety(regclass)" is provided that returns records of (objid, classid, parallel_safety) for all parallel unsafe/restricted table-related objects from which the table's parallel DML safety is determined. This lets users identify the unsafe objects; if needed, they can change the parallel safety of the relevant functions and then set the parallel-safety option on the table.

I also updated the commit messages in the patches to make them easier for others to review.
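For illustration, a session using the proposed interface might look like the following. This is a sketch against a server with these patches applied, not stock PostgreSQL; the function and the ALTER TABLE syntax are the ones exercised in the patch's regression test, and "mytable" is a hypothetical table name:

```sql
-- List the parallel unsafe/restricted table-related objects from which
-- the table's parallel DML safety is determined (proposed utility function).
select classid, objid, proparallel
  from pg_get_table_parallel_dml_safety('mytable'::regclass);

-- After marking the offending functions PARALLEL SAFE, declare the
-- table safe for parallel DML (proposed ALTER TABLE syntax).
alter table mytable parallel dml safe;

-- Or revert to the automatic safety check.
alter table mytable parallel dml default;
```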
Best regards,
Houzj
Attachments:
v16-0002-Parallel-SELECT-for-INSERT.patch
From 7cad3cf052856ec9f5e087f1edec1c24b920dc74 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@fujitsu.com>
Date: Mon, 31 May 2021 09:32:54 +0800
Subject: [PATCH v14 2/4] parallel-SELECT-for-INSERT
Enable parallel select for insert.
Prepare for entering parallel mode by assigning a TransactionId.
---
src/backend/access/transam/xact.c | 26 +++++++++
src/backend/executor/execMain.c | 3 +
src/backend/optimizer/plan/planner.c | 21 +++----
src/backend/optimizer/util/clauses.c | 87 +++++++++++++++++++++++++++-
src/include/access/xact.h | 15 +++++
src/include/optimizer/clauses.h | 2 +
6 files changed, 143 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 441445927e..2d68e4633a 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -1014,6 +1014,32 @@ IsInParallelMode(void)
return CurrentTransactionState->parallelModeLevel != 0;
}
+/*
+ * PrepareParallelModePlanExec
+ *
+ * Prepare for entering parallel mode plan execution, based on command-type.
+ */
+void
+PrepareParallelModePlanExec(CmdType commandType)
+{
+ if (IsModifySupportedInParallelMode(commandType))
+ {
+ Assert(!IsInParallelMode());
+
+ /*
+ * Prepare for entering parallel mode by assigning a TransactionId.
+ * Failure to do this now would result in heap_insert() subsequently
+ * attempting to assign a TransactionId whilst in parallel-mode, which
+ * is not allowed.
+ *
+ * This approach has a disadvantage in that if the underlying SELECT
+ * does not return any rows, then the TransactionId is not used,
+ * however that shouldn't happen in practice in many cases.
+ */
+ (void) GetCurrentTransactionId();
+ }
+}
+
/*
* CommandCounterIncrement
*/
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index b3ce4bae53..ea685f0846 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1535,7 +1535,10 @@ ExecutePlan(EState *estate,
estate->es_use_parallel_mode = use_parallel_mode;
if (use_parallel_mode)
+ {
+ PrepareParallelModePlanExec(estate->es_plannedstmt->commandType);
EnterParallelMode();
+ }
/*
* Loop until we've processed the proper number of tuples from the plan.
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1868c4eff4..7736813230 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -314,16 +314,16 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
/*
* Assess whether it's feasible to use parallel mode for this query. We
* can't do this in a standalone backend, or if the command will try to
- * modify any data, or if this is a cursor operation, or if GUCs are set
- * to values that don't permit parallelism, or if parallel-unsafe
- * functions are present in the query tree.
+ * modify any data (except for Insert), or if this is a cursor operation,
+ * or if GUCs are set to values that don't permit parallelism, or if
+ * parallel-unsafe functions are present in the query tree.
*
- * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
- * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
- * backend writes into a completely new table. In the future, we can
- * extend it to allow workers to write into the table. However, to allow
- * parallel updates and deletes, we have to solve other problems,
- * especially around combo CIDs.)
+ * (Note that we do allow CREATE TABLE AS, INSERT INTO...SELECT, SELECT
+ * INTO, and CREATE MATERIALIZED VIEW to use parallel plans. However, as
+ * of now, only the leader backend writes into a completely new table. In
+ * the future, we can extend it to allow workers to write into the table.
+ * However, to allow parallel updates and deletes, we have to solve other
+ * problems, especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
@@ -332,7 +332,8 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
*/
if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
IsUnderPostmaster &&
- parse->commandType == CMD_SELECT &&
+ (parse->commandType == CMD_SELECT ||
+ is_parallel_allowed_for_modify(parse)) &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker())
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..ac0f243bf1 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -20,6 +20,8 @@
#include "postgres.h"
#include "access/htup_details.h"
+#include "access/table.h"
+#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
#include "catalog/pg_language.h"
@@ -43,6 +45,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
+#include "parser/parsetree.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -51,6 +54,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -151,6 +155,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
int nargs, List *args);
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
+static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
/*****************************************************************************
@@ -618,12 +623,34 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
+ bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
- (void) max_parallel_hazard_walker((Node *) parse, &context);
+
+ max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+
+ if (!max_hazard_found &&
+ IsModifySupportedInParallelMode(parse->commandType))
+ {
+ RangeTblEntry *rte;
+ Relation target_rel;
+
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ target_rel = table_open(rte->relid, NoLock);
+
+ (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
+ &context);
+ table_close(target_rel, NoLock);
+ }
+
return context.max_hazard;
}
@@ -857,6 +884,64 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
context);
}
+/*
+ * is_parallel_allowed_for_modify
+ *
+ * Check at a high-level if parallel mode is able to be used for the specified
+ * table-modification statement. Currently, we support only Inserts.
+ *
+ * It's not possible in the following cases:
+ *
+ * 1) INSERT...ON CONFLICT...DO UPDATE
+ * 2) INSERT without SELECT
+ *
+ * (Note: we don't do in-depth parallel-safety checks here; we do only the
+ * cheaper tests that can quickly exclude obvious cases for which
+ * parallelism isn't supported, to avoid having to do further parallel-safety
+ * checks for them.)
+ */
+bool
+is_parallel_allowed_for_modify(Query *parse)
+{
+ bool hasSubQuery;
+ RangeTblEntry *rte;
+ ListCell *lc;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return false;
+
+ /*
+ * UPDATE is not currently supported in parallel-mode, so prohibit
+ * INSERT...ON CONFLICT...DO UPDATE...
+ *
+ * In order to support update, even if only in the leader, some further
+ * work would need to be done. A mechanism would be needed for sharing
+ * combo-cids between leader and workers during parallel-mode, since for
+ * example, the leader might generate a combo-cid and it needs to be
+ * propagated to the workers.
+ */
+ if (parse->commandType == CMD_INSERT &&
+ parse->onConflict != NULL &&
+ parse->onConflict->action == ONCONFLICT_UPDATE)
+ return false;
+
+ /*
+ * If there is no underlying SELECT, a parallel insert operation is not
+ * desirable.
+ */
+ hasSubQuery = false;
+ foreach(lc, parse->rtable)
+ {
+ rte = lfirst_node(RangeTblEntry, lc);
+ if (rte->rtekind == RTE_SUBQUERY)
+ {
+ hasSubQuery = true;
+ break;
+ }
+ }
+
+ return hasSubQuery;
+}
/*****************************************************************************
* Check clauses for nonstrict functions
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 134f6862da..fd3f86bf7c 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -466,5 +466,20 @@ extern void ParsePrepareRecord(uint8 info, xl_xact_prepare *xlrec, xl_xact_parse
extern void EnterParallelMode(void);
extern void ExitParallelMode(void);
extern bool IsInParallelMode(void);
+extern void PrepareParallelModePlanExec(CmdType commandType);
+
+/*
+ * IsModifySupportedInParallelMode
+ *
+ * Indicates whether execution of the specified table-modification command
+ * (INSERT/UPDATE/DELETE) in parallel-mode is supported, subject to certain
+ * parallel-safety conditions.
+ */
+static inline bool
+IsModifySupportedInParallelMode(CmdType commandType)
+{
+ /* Currently only INSERT is supported */
+ return (commandType == CMD_INSERT);
+}
#endif /* XACT_H */
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 0673887a85..32b56565e5 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -53,4 +53,6 @@ extern void CommuteOpExpr(OpExpr *clause);
extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
+extern bool is_parallel_allowed_for_modify(Query *parse);
+
#endif /* CLAUSES_H */
--
2.27.0
Attachment: v16-0003-Get-parallel-safety-functions.patch (application/octet-stream)
From d93281fdbeef47af1b16bf6803d80c18e592fc13 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Fri, 30 Jul 2021 11:50:55 +0800
Subject: [PATCH] get-parallel-safety-functions
Parallel SELECT can't be utilized for INSERT when the target table has a
parallel-unsafe trigger, index expression or predicate, column default
expression, partition key expression, or check constraint.
Provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that
returns records of (objid, classid, parallel_safety) for all
parallel unsafe/restricted table-related objects from which the
table's parallel DML safety is determined. The user can use this
information during development to accurately declare a table's
parallel DML safety, or to identify any problematic objects if a
parallel DML operation fails or behaves unexpectedly.
When the use of an index-related parallel unsafe/restricted function
is detected, both the function oid and the index oid are returned.
Provide a utility function "pg_get_table_max_parallel_dml_hazard(regclass)" that
returns the worst parallel DML safety hazard that can be found in the
given relation. Users can use this function to do a quick check without
caring about specific parallel-related objects.
---
src/backend/optimizer/util/clauses.c | 658 ++++++++++++++++++++++++++++++++++-
src/backend/utils/adt/misc.c | 94 +++++
src/backend/utils/cache/typcache.c | 17 +
src/include/catalog/pg_proc.dat | 22 +-
src/include/optimizer/clauses.h | 14 +
src/include/utils/typcache.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
7 files changed, 803 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index ac0f243..749cb0d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -19,15 +19,20 @@
#include "postgres.h"
+#include "access/amapi.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_language.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_proc.h"
+#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
+#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
@@ -46,6 +51,8 @@
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
#include "parser/parsetree.h"
+#include "partitioning/partdesc.h"
+#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -54,6 +61,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -92,6 +100,9 @@ typedef struct
char max_hazard; /* worst proparallel hazard found so far */
char max_interesting; /* worst proparallel hazard of interest */
List *safe_param_ids; /* PARAM_EXEC Param IDs to treat as safe */
+ bool check_all; /* whether to collect all the unsafe/restricted objects */
+ List *objects; /* parallel unsafe/restricted objects */
+ PartitionDirectory partition_directory; /* partition descriptors */
} max_parallel_hazard_context;
static bool contain_agg_clause_walker(Node *node, void *context);
@@ -102,6 +113,25 @@ static bool contain_volatile_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
static bool max_parallel_hazard_walker(Node *node,
max_parallel_hazard_context *context);
+static bool target_rel_parallel_hazard_recurse(Relation relation,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default);
+static bool target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context);
+static bool target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context);
+static bool target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition);
+static bool target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
static bool contain_nonstrict_functions_walker(Node *node, void *context);
static bool contain_exec_param_walker(Node *node, List *param_ids);
static bool contain_context_dependent_node(Node *clause);
@@ -156,6 +186,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
+static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
/*****************************************************************************
@@ -629,6 +660,9 @@ max_parallel_hazard(Query *parse)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
@@ -681,6 +715,9 @@ is_parallel_safe(PlannerInfo *root, Node *node)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_RESTRICTED;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
/*
* The params that refer to the same or parent query level are considered
@@ -712,7 +749,7 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
break;
case PROPARALLEL_RESTRICTED:
/* increase max_hazard to RESTRICTED */
- Assert(context->max_hazard != PROPARALLEL_UNSAFE);
+ Assert(context->check_all || context->max_hazard != PROPARALLEL_UNSAFE);
context->max_hazard = proparallel;
/* done if we are not expecting any unsafe functions */
if (context->max_interesting == proparallel)
@@ -729,6 +766,82 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
return false;
}
+/*
+ * make_safety_object
+ *
+ * Creates a safety_object, given object id, class id and parallel safety.
+ */
+static safety_object *
+make_safety_object(Oid objid, Oid classid, char proparallel)
+{
+ safety_object *object = (safety_object *) palloc(sizeof(safety_object));
+
+ object->objid = objid;
+ object->classid = classid;
+ object->proparallel = proparallel;
+
+ return object;
+}
+
+/* check_functions_in_node callback */
+static bool
+parallel_hazard_checker(Oid func_id, void *context)
+{
+ char proparallel;
+ max_parallel_hazard_context *cont = (max_parallel_hazard_context *) context;
+
+ proparallel = func_parallel(func_id);
+
+ if (max_parallel_hazard_test(proparallel, cont) && !cont->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object = make_safety_object(func_id,
+ ProcedureRelationId,
+ proparallel);
+ cont->objects = lappend(cont->objects, object);
+ }
+
+ return false;
+}
+
+/*
+ * parallel_hazard_walker
+ *
+ * Recursively search an expression tree (a partition key, index, constraint,
+ * or column default expression) for PARALLEL UNSAFE/RESTRICTED table-related
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
+{
+ if (node == NULL)
+ return false;
+
+ /* Check for hazardous functions in node itself */
+ if (check_functions_in_node(node, parallel_hazard_checker,
+ context))
+ return true;
+
+ if (IsA(node, CoerceToDomain))
+ {
+ CoerceToDomain *domain = (CoerceToDomain *) node;
+
+ if (target_rel_domain_parallel_hazard(domain->resulttype, context))
+ return true;
+ }
+
+ /* Recurse to check arguments */
+ return expression_tree_walker(node,
+ parallel_hazard_walker,
+ context);
+}
+
/* check_functions_in_node callback */
static bool
max_parallel_hazard_checker(Oid func_id, void *context)
@@ -885,6 +998,549 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * target_rel_parallel_hazard
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+List*
+target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting, char *max_hazard)
+{
+ max_parallel_hazard_context context;
+ Relation targetRel;
+
+ context.check_all = findall;
+ context.objects = NIL;
+ context.max_hazard = PROPARALLEL_SAFE;
+ context.max_interesting = max_interesting;
+ context.safe_param_ids = NIL;
+ context.partition_directory = NULL;
+
+ targetRel = table_open(relOid, AccessShareLock);
+
+ (void) target_rel_parallel_hazard_recurse(targetRel, &context, false, true);
+ if (context.partition_directory)
+ DestroyPartitionDirectory(context.partition_directory);
+
+ table_close(targetRel, AccessShareLock);
+
+ *max_hazard = context.max_hazard;
+
+ return context.objects;
+}
+
+/*
+ * target_rel_parallel_hazard_recurse
+ *
+ * Recursively search all table-related objects for PARALLEL UNSAFE/RESTRICTED
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_parallel_hazard_recurse(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default)
+{
+ TupleDesc tupdesc;
+ int attnum;
+
+ /*
+ * We can't support table modification in a parallel worker if it's a
+ * foreign table/partition (no FDW API for supporting parallel access) or
+ * a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ {
+ if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context) &&
+ !context->check_all)
+ return true;
+ else
+ {
+ safety_object *object = make_safety_object(rel->rd_rel->oid,
+ RelationRelationId,
+ PROPARALLEL_RESTRICTED);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /*
+ * If a partitioned table, check that each partition is safe for
+ * modification in parallel-mode.
+ */
+ if (target_rel_partitions_parallel_hazard(rel, context, is_partition))
+ return true;
+
+ /*
+ * If there are any index expressions or index predicate, check that they
+ * are parallel-mode safe.
+ */
+ if (target_rel_index_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * If any triggers exist, check that they are parallel-safe.
+ */
+ if (target_rel_trigger_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * Column default expressions are only applicable to INSERT and UPDATE.
+ * Note that even though column defaults may be specified separately for
+ * each partition in a partitioned table, a partition's default value is
+ * not applied when inserting a tuple through a partitioned table.
+ */
+
+ tupdesc = RelationGetDescr(rel);
+ for (attnum = 0; attnum < tupdesc->natts; attnum++)
+ {
+ Form_pg_attribute att = TupleDescAttr(tupdesc, attnum);
+
+ /* We don't need info for dropped or generated attributes */
+ if (att->attisdropped || att->attgenerated)
+ continue;
+
+ if (att->atthasdef && check_column_default)
+ {
+ Node *defaultexpr;
+
+ defaultexpr = build_column_default(rel, attnum + 1);
+ if (parallel_hazard_walker((Node *) defaultexpr, context))
+ return true;
+ }
+
+ /*
+ * If the column is of a DOMAIN type, determine whether that
+ * domain has any CHECK expressions that are not parallel-mode
+ * safe.
+ */
+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)
+ {
+ if (target_rel_domain_parallel_hazard(att->atttypid, context))
+ return true;
+ }
+ }
+
+ /*
+ * CHECK constraints are only applicable to INSERT and UPDATE. If any
+ * CHECK constraints exist, determine if they are parallel-safe.
+ */
+ if (target_rel_chk_constr_parallel_hazard(rel, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_trigger_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified relation's trigger data.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ char proparallel;
+
+ if (rel->trigdesc == NULL)
+ return false;
+
+ /*
+ * Care is needed here to avoid using the same relcache TriggerDesc field
+ * across other cache accesses, because relcache doesn't guarantee that it
+ * won't move.
+ */
+ for (i = 0; i < rel->trigdesc->numtriggers; i++)
+ {
+ Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;
+ Oid tgoid = rel->trigdesc->triggers[i].tgoid;
+
+ proparallel = func_parallel(tgfoid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object,
+ *parent_object;
+
+ object = make_safety_object(tgfoid, ProcedureRelationId,
+ proparallel);
+ parent_object = make_safety_object(tgoid, TriggerRelationId,
+ proparallel);
+
+ context->objects = lappend(context->objects, object);
+ context->objects = lappend(context->objects, parent_object);
+ }
+ }
+
+ return false;
+}
+
+/*
+ * index_expr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the given index expression and index predicate.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ Form_pg_index indexStruct;
+ ListCell *index_expr_item;
+
+ indexStruct = index_rel->rd_index;
+ index_expr_item = list_head(ii_Expressions);
+
+ /* Check parallel-safety of index expression */
+ for (i = 0; i < indexStruct->indnatts; i++)
+ {
+ int keycol = indexStruct->indkey.values[i];
+
+ if (keycol == 0)
+ {
+ /* Found an index expression */
+ Node *index_expr;
+
+ if (index_expr_item == NULL) /* shouldn't happen */
+ elog(ERROR, "too few entries in indexprs list");
+
+ index_expr = (Node *) lfirst(index_expr_item);
+
+ if (parallel_hazard_walker(index_expr, context))
+ return true;
+
+ index_expr_item = lnext(ii_Expressions, index_expr_item);
+ }
+ }
+
+ /* Check parallel-safety of index predicate */
+ if (parallel_hazard_walker((Node *) ii_Predicate, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_index_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any existing index expressions or index predicate of the
+ * specified relation.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ List *index_oid_list;
+ ListCell *lc;
+ LOCKMODE lockmode = AccessShareLock;
+ bool max_hazard_found;
+
+ index_oid_list = RelationGetIndexList(rel);
+ foreach(lc, index_oid_list)
+ {
+ Relation index_rel;
+ List *ii_Expressions;
+ List *ii_Predicate;
+ List *temp_objects;
+ char temp_hazard;
+ Oid index_oid = lfirst_oid(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ index_rel = index_open(index_oid, lockmode);
+
+ /* Check index expression */
+ ii_Expressions = RelationGetIndexExpressions(index_rel);
+ ii_Predicate = RelationGetIndexPredicate(index_rel);
+
+ max_hazard_found = index_expr_parallel_hazard(index_rel,
+ ii_Expressions,
+ ii_Predicate,
+ context);
+
+ index_close(index_rel, lockmode);
+
+ if (max_hazard_found)
+ return true;
+
+ /* Add the index itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+
+ object = make_safety_object(index_oid, IndexRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ list_free(index_oid_list);
+
+ return false;
+}
+
+/*
+ * target_rel_domain_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified DOMAIN type. Only its CHECK expressions are
+ * examined for parallel safety.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context)
+{
+ ListCell *lc;
+ List *domain_list;
+ List *temp_objects;
+ char temp_hazard;
+
+ domain_list = GetDomainConstraints(typid);
+
+ foreach(lc, domain_list)
+ {
+ DomainConstraintState *r = (DomainConstraintState *) lfirst(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) r->check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+ Oid constr_oid = get_domain_constraint_oid(typid,
+ r->name,
+ false);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+
+}
+
+/*
+ * target_rel_partitions_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any partitions of the specified relation.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition)
+{
+ int i;
+ PartitionDesc pdesc;
+ PartitionKey pkey;
+ ListCell *partexprs_item;
+ int partnatts;
+ List *partexprs,
+ *qual;
+
+ /*
+ * A partition's check expression is derived from its parent table's
+ * partition key expression, so we need not check it again for a
+ * partition; the parallel safety of the parent table's partition key
+ * expression has already been checked.
+ */
+ if (!is_partition)
+ {
+ qual = RelationGetPartitionQual(rel);
+ if (parallel_hazard_walker((Node *) qual, context))
+ return true;
+ }
+
+ if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ return false;
+
+ pkey = RelationGetPartitionKey(rel);
+
+ partnatts = get_partition_natts(pkey);
+ partexprs = get_partition_exprs(pkey);
+
+ partexprs_item = list_head(partexprs);
+ for (i = 0; i < partnatts; i++)
+ {
+ Oid funcOid = pkey->partsupfunc[i].fn_oid;
+
+ if (OidIsValid(funcOid))
+ {
+ char proparallel = func_parallel(funcOid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object;
+
+ object = make_safety_object(funcOid, ProcedureRelationId,
+ proparallel);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /* Check parallel-safety of any expressions in the partition key */
+ if (get_partition_col_attnum(pkey, i) == 0)
+ {
+ Node *check_expr = (Node *) lfirst(partexprs_item);
+
+ if (parallel_hazard_walker(check_expr, context))
+ return true;
+
+ partexprs_item = lnext(partexprs, partexprs_item);
+ }
+ }
+
+ /* Recursively check each partition ... */
+
+ /* Create the PartitionDirectory infrastructure if we didn't already */
+ if (context->partition_directory == NULL)
+ context->partition_directory =
+ CreatePartitionDirectory(CurrentMemoryContext, false);
+
+ pdesc = PartitionDirectoryLookup(context->partition_directory, rel);
+
+ for (i = 0; i < pdesc->nparts; i++)
+ {
+ Relation part_rel;
+ bool max_hazard_found;
+
+ part_rel = table_open(pdesc->oids[i], AccessShareLock);
+ max_hazard_found = target_rel_parallel_hazard_recurse(part_rel,
+ context,
+ true,
+ false);
+ table_close(part_rel, AccessShareLock);
+
+ if (max_hazard_found)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_chk_constr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any CHECK expressions or CHECK constraints related to the
+ * specified relation.
+ *
+ * If context->check_all is false, then find only the worst parallel-hazard level.
+ */
+static bool
+target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ char temp_hazard;
+ int i;
+ TupleDesc tupdesc;
+ List *temp_objects;
+ ConstrCheck *check;
+
+ tupdesc = RelationGetDescr(rel);
+
+ if (tupdesc->constr == NULL)
+ return false;
+
+ check = tupdesc->constr->check;
+
+ /*
+ * Determine if there are any CHECK constraints which are not
+ * parallel-safe.
+ */
+ for (i = 0; i < tupdesc->constr->num_check; i++)
+ {
+ Expr *check_expr = stringToNode(check[i].ccbin);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ if (context->objects != NIL)
+ {
+ Oid constr_oid;
+ safety_object *object;
+
+ constr_oid = get_relation_constraint_oid(rel->rd_rel->oid,
+ check[i].ccname,
+ true);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
* is_parallel_allowed_for_modify
*
* Check at a high-level if parallel mode is able to be used for the specified
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 88faf4d..06d859c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -23,6 +23,8 @@
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_type.h"
#include "catalog/system_fk_info.h"
@@ -31,6 +33,7 @@
#include "common/keywords.h"
#include "funcapi.h"
#include "miscadmin.h"
+#include "optimizer/clauses.h"
#include "parser/scansup.h"
#include "pgstat.h"
#include "postmaster/syslogger.h"
@@ -43,6 +46,7 @@
#include "utils/lsyscache.h"
#include "utils/ruleutils.h"
#include "utils/timestamp.h"
+#include "utils/varlena.h"
/*
* Common subroutine for num_nulls() and num_nonnulls().
@@ -605,6 +609,96 @@ pg_collation_for(PG_FUNCTION_ARGS)
PG_RETURN_TEXT_P(cstring_to_text(generate_collation_name(collid)));
}
+/*
+ * Find the worst parallel-hazard level in the given relation
+ *
+ * Returns the worst parallel hazard level (the earliest in this list:
+ * PROPARALLEL_UNSAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_SAFE) that can
+ * be found in the given relation.
+ */
+Datum
+pg_get_table_max_parallel_dml_hazard(PG_FUNCTION_ARGS)
+{
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ (void) target_rel_parallel_hazard(relOid, false,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+
+ PG_RETURN_CHAR(max_parallel_hazard);
+}
+
+/*
+ * Determine whether the target relation is safe for parallel data modification.
+ *
+ * Return all the PARALLEL RESTRICTED/UNSAFE objects.
+ */
+Datum
+pg_get_table_parallel_dml_safety(PG_FUNCTION_ARGS)
+{
+#define PG_GET_PARALLEL_SAFETY_COLS 3
+ List *objects;
+ ListCell *object;
+ TupleDesc tupdesc;
+ Tuplestorestate *tupstore;
+ MemoryContext per_query_ctx;
+ MemoryContext oldcontext;
+ ReturnSetInfo *rsinfo;
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+
+ /* check to see if caller supports us returning a tuplestore */
+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ if (!(rsinfo->allowedModes & SFRM_Materialize))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("materialize mode required, but it is not allowed in this context")));
+
+ /* Build a tuple descriptor for our result type */
+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ elog(ERROR, "return type must be a row type");
+
+ per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+ tupstore = tuplestore_begin_heap(true, false, work_mem);
+ rsinfo->returnMode = SFRM_Materialize;
+ rsinfo->setResult = tupstore;
+ rsinfo->setDesc = tupdesc;
+
+ MemoryContextSwitchTo(oldcontext);
+
+ objects = target_rel_parallel_hazard(relOid, true,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+ foreach(object, objects)
+ {
+ Datum values[PG_GET_PARALLEL_SAFETY_COLS];
+ bool nulls[PG_GET_PARALLEL_SAFETY_COLS];
+ safety_object *sobject = (safety_object *) lfirst(object);
+
+ memset(nulls, 0, sizeof(nulls));
+
+ values[0] = ObjectIdGetDatum(sobject->objid);
+ values[1] = ObjectIdGetDatum(sobject->classid);
+ values[2] = CharGetDatum(sobject->proparallel);
+
+ tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ }
+
+ /* clean up and return the tuplestore */
+ tuplestore_donestoring(tupstore);
+
+ return (Datum) 0;
+}
+
/*
* pg_relation_is_updatable - determine which update events the specified
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 326fae6..02a8f70 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -2535,6 +2535,23 @@ compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2)
}
/*
+ * GetDomainConstraints --- get DomainConstraintState list of specified domain type
+ */
+List *
+GetDomainConstraints(Oid type_id)
+{
+ TypeCacheEntry *typentry;
+ List *constraints = NIL;
+
+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
+
+ if (typentry->domainData != NULL)
+ constraints = typentry->domainData->constraints;
+
+ return constraints;
+}
+
+/*
* Load (or re-load) the enumData member of the typcache entry.
*/
static void
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252..4483cd1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3770,6 +3770,20 @@
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
+{ oid => '6122',
+ descr => 'parallel unsafe/restricted objects in the target relation',
+ proname => 'pg_get_table_parallel_dml_safety', prorows => '100',
+ proretset => 't', provolatile => 'v', proparallel => 'u',
+ prorettype => 'record', proargtypes => 'regclass',
+ proallargtypes => '{regclass,oid,oid,char}',
+ proargmodes => '{i,o,o,o}',
+ proargnames => '{table_name, objid, classid, proparallel}',
+ prosrc => 'pg_get_table_parallel_dml_safety' },
+
+{ oid => '6123', descr => 'worst parallel-hazard level in the given relation for DML',
+ proname => 'pg_get_table_max_parallel_dml_hazard', prorettype => 'char', proargtypes => 'regclass',
+ prosrc => 'pg_get_table_max_parallel_dml_hazard', provolatile => 'v', proparallel => 'u' },
+
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
@@ -3777,11 +3791,11 @@
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_ins' },
+ proname => 'RI_FKey_check_ins', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_upd' },
+ proname => 'RI_FKey_check_upd', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 32b5656..f8b2a72 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -23,6 +23,17 @@ typedef struct
List **windowFuncs; /* lists of WindowFuncs for each winref */
} WindowFuncLists;
+/*
+ * Information about a table-related object which could affect the safety of
+ * parallel data modification on table.
+ */
+typedef struct safety_object
+{
+ Oid objid; /* OID of object itself */
+ Oid classid; /* OID of its catalog */
+ char proparallel; /* parallel safety of the object */
+} safety_object;
+
extern bool contain_agg_clause(Node *clause);
extern bool contain_window_function(Node *clause);
@@ -54,5 +65,8 @@ extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
extern bool is_parallel_allowed_for_modify(Query *parse);
+extern List *target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting,
+ char *max_hazard);
#endif /* CLAUSES_H */
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index 1d68a9a..28ca7d8 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -199,6 +199,8 @@ extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod);
extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2);
+extern List *GetDomainConstraints(Oid type_id);
+
extern size_t SharedRecordTypmodRegistryEstimate(void);
extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37cf4b2..307bb97 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3491,6 +3491,7 @@ rm_detail_t
role_auth_extra
row_security_policy_hook_type
rsv_callback
+safety_object
saophash_hash
save_buffer
scram_state
--
2.7.2.windows.1
Attachment: v16-0004-Cache-parallel-dml-safety.patch (application/octet-stream)
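As a quick illustration for reviewers, here is a hedged sketch of how the two inspection functions added above might be exercised once the patch series is applied. The table name is invented for illustration; the output columns follow the pg_proc.dat entries (objid, classid, proparallel).

```sql
-- Hypothetical session against a patched server; "orders" is an
-- illustrative table name, not part of the patch.
CREATE TABLE orders (id int PRIMARY KEY, total numeric);

-- One row per parallel unsafe/restricted object that determines the
-- table's parallel DML safety.
SELECT objid, classid, proparallel
FROM pg_get_table_parallel_dml_safety('orders'::regclass);

-- Worst hazard level only: 's' (safe), 'r' (restricted) or 'u' (unsafe).
SELECT pg_get_table_max_parallel_dml_hazard('orders'::regclass);
```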
From a97b1bf327ad665c9f71c43063a3a9e6d364716d Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Fri, 30 Jul 2021 10:04:32 +0800
Subject: [PATCH] cache-parallel-dml-safety
The planner is updated to perform additional parallel-safety checks for a
non-partitioned table if pg_class.relparalleldml is DEFAULT ('d'), and to cache
the relation's parallel safety.
Whenever any function's parallel-safety is changed, invalidate the cached
parallel-safety for all relations in relcache for a particular database.
For a partitioned table, if pg_class.relparalleldml is DEFAULT ('d'), assume
that the table is UNSAFE to modify in parallel mode.
If pg_class.relparalleldml is SAFE/RESTRICTED/UNSAFE, respect the specified
parallel dml safety instead of checking it again.
---
src/backend/catalog/pg_proc.c | 13 +++++
src/backend/commands/functioncmds.c | 18 ++++++-
src/backend/optimizer/util/clauses.c | 78 ++++++++++++++++++++++------
src/backend/utils/cache/inval.c | 53 +++++++++++++++++++
src/backend/utils/cache/relcache.c | 19 +++++++
src/include/storage/sinval.h | 8 +++
src/include/utils/inval.h | 2 +
src/include/utils/rel.h | 1 +
src/include/utils/relcache.h | 2 +
9 files changed, 176 insertions(+), 18 deletions(-)
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1454d2fb67..04585dc3ef 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -39,6 +39,7 @@
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/regproc.h"
#include "utils/rel.h"
@@ -367,6 +368,9 @@ ProcedureCreate(const char *procedureName,
Datum proargnames;
bool isnull;
const char *dropcmd;
+ char old_proparallel;
+
+ old_proparallel = oldproc->proparallel;
if (!replace)
ereport(ERROR,
@@ -559,6 +563,15 @@ ProcedureCreate(const char *procedureName,
tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
CatalogTupleUpdate(rel, &tup->t_self, tup);
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (old_proparallel != parallel)
+ CacheInvalidateParallelDML();
+
ReleaseSysCache(oldtup);
is_update = true;
}
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 79d875ab10..57d9ca52e5 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -70,6 +70,7 @@
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@@ -1504,7 +1505,22 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
repl_val, repl_null, repl_repl);
}
if (parallel_item)
- procForm->proparallel = interpret_func_parallel(parallel_item);
+ {
+ char proparallel;
+
+ proparallel = interpret_func_parallel(parallel_item);
+
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (proparallel != procForm->proparallel)
+ CacheInvalidateParallelDML();
+
+ procForm->proparallel = proparallel;
+ }
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 749cb0dacd..f65c2fc961 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -187,7 +187,7 @@ static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
-
+static char max_parallel_dml_hazard(Query *parse, max_parallel_hazard_context *context);
/*****************************************************************************
* Aggregate-function clause manipulation
@@ -654,7 +654,6 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
- bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
@@ -664,28 +663,73 @@ max_parallel_hazard(Query *parse)
context.objects = NIL;
context.partition_directory = NULL;
- max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+ if (!max_parallel_hazard_walker((Node *) parse, &context))
+ (void) max_parallel_dml_hazard(parse, &context);
+
+ return context.max_hazard;
+}
+
+/* Check the safety of parallel data modification */
+static char
+max_parallel_dml_hazard(Query *parse,
+ max_parallel_hazard_context *context)
+{
+ RangeTblEntry *rte;
+ Relation target_rel;
+ char hazard;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return context->max_hazard;
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+ target_rel = table_open(rte->relid, NoLock);
+
+ /*
+ * If the user has declared a specific parallel dml safety (safe/
+ * restricted/unsafe), respect it. If not set, check the safety
+ * automatically for a non-partitioned table; consider a partitioned table unsafe.
+ */
+ hazard = target_rel->rd_rel->relparalleldml;
+ if (target_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+ hazard == PROPARALLEL_DEFAULT)
+ hazard = PROPARALLEL_UNSAFE;
+
+ if (hazard != PROPARALLEL_DEFAULT)
+ (void) max_parallel_hazard_test(hazard, context);
- if (!max_hazard_found &&
- IsModifySupportedInParallelMode(parse->commandType))
+ /* Do parallel safety check for the target relation */
+ else if (!target_rel->rd_paralleldml)
{
- RangeTblEntry *rte;
- Relation target_rel;
+ bool max_hazard_found;
+ char pre_max_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
- rte = rt_fetch(parse->resultRelation, parse->rtable);
+ max_hazard_found = target_rel_parallel_hazard_recurse(target_rel,
+ context,
+ false,
+ false);
- /*
- * The target table is already locked by the caller (this is done in the
- * parse/analyze phase), and remains locked until end-of-transaction.
- */
- target_rel = table_open(rte->relid, NoLock);
+ /* Cache the parallel dml safety of this relation */
+ target_rel->rd_paralleldml = context->max_hazard;
- (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
- &context);
- table_close(target_rel, NoLock);
+ if (!max_hazard_found)
+ (void) max_parallel_hazard_test(pre_max_hazard, context);
}
- return context.max_hazard;
+ /*
+ * If we already cached the parallel dml safety of this relation, we don't
+ * need to check it again.
+ */
+ else
+ (void) max_parallel_hazard_test(target_rel->rd_paralleldml, context);
+
+ table_close(target_rel, NoLock);
+
+ return context->max_hazard;
}
/*
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 9c79775725..9459b3c204 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -385,6 +385,27 @@ AddCatalogInvalidationMessage(InvalidationListHeader *hdr,
AddInvalidationMessage(&hdr->cclist, &msg);
}
+/*
+ * Add a Parallel dml inval entry
+ */
+static void
+AddParallelDMLInvalidationMessage(InvalidationListHeader *hdr)
+{
+ SharedInvalidationMessage msg;
+
+ /* Don't add a duplicate item. */
+ ProcessMessageList(hdr->rclist,
+ if (msg->rc.id == SHAREDINVALPARALLELDML_ID)
+ return);
+
+ /* OK, add the item */
+ msg.pd.id = SHAREDINVALPARALLELDML_ID;
+ /* check AddCatcacheInvalidationMessage() for an explanation */
+ VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
+
+ AddInvalidationMessage(&hdr->rclist, &msg);
+}
+
/*
* Add a relcache inval entry
*/
@@ -539,6 +560,21 @@ RegisterRelcacheInvalidation(Oid dbId, Oid relId)
transInvalInfo->RelcacheInitFileInval = true;
}
+/*
+ * RegisterParallelDMLInvalidation
+ *
+ * As above, but register an invalidation event for paralleldml in all relcache entries.
+ */
+static void
+RegisterParallelDMLInvalidation()
+{
+ AddParallelDMLInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs);
+
+ (void) GetCurrentCommandId(true);
+
+ transInvalInfo->RelcacheInitFileInval = true;
+}
+
/*
* RegisterSnapshotInvalidation
*
@@ -631,6 +667,11 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
else if (msg->sn.dbId == MyDatabaseId)
InvalidateCatalogSnapshot();
}
+ else if (msg->id == SHAREDINVALPARALLELDML_ID)
+ {
+ /* Invalidate the parallel dml flag in all relcache entries */
+ ParallelDMLInvalidate();
+ }
else
elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
@@ -1307,6 +1348,18 @@ CacheInvalidateRelcacheAll(void)
RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
}
+/*
+ * CacheInvalidateParallelDML
+ * Register invalidation of the parallel dml flag of all relcache entries at the end of command.
+ */
+void
+CacheInvalidateParallelDML(void)
+{
+ PrepareInvalidationState();
+
+ RegisterParallelDMLInvalidation();
+}
+
/*
* CacheInvalidateRelcacheByTuple
* As above, but relation is identified by passing its pg_class tuple.
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 3f38a69687..e013c4d0dc 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2934,6 +2934,25 @@ RelationCacheInvalidate(void)
list_free(rebuildList);
}
+/*
+ * ParallelDMLInvalidate
+ * Invalidate the parallel dml flag in all relcache entries.
+ */
+void
+ParallelDMLInvalidate(void)
+{
+ HASH_SEQ_STATUS status;
+ RelIdCacheEnt *idhentry;
+ Relation relation;
+
+ hash_seq_init(&status, RelationIdCache);
+
+ while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
+ {
+ relation = idhentry->reldesc;
+ relation->rd_paralleldml = 0;
+ }
+}
/*
* RelationCloseSmgrByOid - close a relcache entry's smgr link
*
diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h
index f03dc23b14..9859a3bea0 100644
--- a/src/include/storage/sinval.h
+++ b/src/include/storage/sinval.h
@@ -110,6 +110,13 @@ typedef struct
Oid relId; /* relation ID */
} SharedInvalSnapshotMsg;
+#define SHAREDINVALPARALLELDML_ID (-6)
+
+typedef struct
+{
+ int8 id; /* type field --- must be first */
+} SharedInvalParallelDMLMsg;
+
typedef union
{
int8 id; /* type field --- must be first */
@@ -119,6 +126,7 @@ typedef union
SharedInvalSmgrMsg sm;
SharedInvalRelmapMsg rm;
SharedInvalSnapshotMsg sn;
+ SharedInvalParallelDMLMsg pd;
} SharedInvalidationMessage;
diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h
index 770672890b..f1ce1462c1 100644
--- a/src/include/utils/inval.h
+++ b/src/include/utils/inval.h
@@ -64,4 +64,6 @@ extern void CallSyscacheCallbacks(int cacheid, uint32 hashvalue);
extern void InvalidateSystemCaches(void);
extern void LogLogicalInvalidations(void);
+
+extern void CacheInvalidateParallelDML(void);
#endif /* INVAL_H */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b4faa1c123..52574e9d40 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -63,6 +63,7 @@ typedef struct RelationData
bool rd_indexvalid; /* is rd_indexlist valid? (also rd_pkindex and
* rd_replidindex) */
bool rd_statvalid; /* is rd_statlist valid? */
+ char rd_paralleldml; /* parallel dml safety */
/*----------
* rd_createSubid is the ID of the highest subtransaction the rel has
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5ea225ac2d..5813aa50a0 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -128,6 +128,8 @@ extern void RelationCacheInvalidate(void);
extern void RelationCloseSmgrByOid(Oid relationId);
+extern void ParallelDMLInvalidate(void);
+
#ifdef USE_ASSERT_CHECKING
extern void AssertPendingSyncs_RelationCache(void);
#else
--
2.27.0
Attachment: v16-0005-Regression-test-and-doc-updates.patch (application/octet-stream)
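Before the doc patch below, a hedged sketch of the proposed table syntax it documents, following the synopses in the attached SGML changes (object names are invented for illustration):

```sql
-- CREATE TABLE with an explicit parallel DML safety label.
CREATE TABLE audit_log (ts timestamptz, msg text) PARALLEL DML SAFE;

-- DEFAULT ('d') lets the planner determine and cache the safety itself
-- (treated as UNSAFE for partitioned tables).
CREATE TABLE events (id int) PARALLEL DML DEFAULT;

-- Per the ALTER TABLE synopsis, the clause is spelled without the DML
-- keyword there.
ALTER TABLE audit_log PARALLEL UNSAFE;
```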
From 86c0b68d9d6c2c4ec4d42b97d1f8fa4677adb475 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Fri, 30 Jul 2021 10:06:04 +0800
Subject: [PATCH] regression-test-and-doc-updates
---
contrib/test_decoding/expected/ddl.out | 4 +
doc/src/sgml/func.sgml | 61 ++
doc/src/sgml/ref/alter_foreign_table.sgml | 13 +
doc/src/sgml/ref/alter_function.sgml | 2 +-
doc/src/sgml/ref/alter_table.sgml | 12 +
doc/src/sgml/ref/create_foreign_table.sgml | 39 +
doc/src/sgml/ref/create_table.sgml | 44 ++
doc/src/sgml/ref/create_table_as.sgml | 38 +
src/test/regress/expected/alter_table.out | 2 +
src/test/regress/expected/compression_1.out | 9 +
src/test/regress/expected/copy2.out | 1 +
src/test/regress/expected/create_table.out | 14 +
.../regress/expected/create_table_like.out | 8 +
src/test/regress/expected/domain.out | 2 +
src/test/regress/expected/foreign_data.out | 42 ++
src/test/regress/expected/identity.out | 1 +
src/test/regress/expected/inherit.out | 13 +
src/test/regress/expected/insert.out | 12 +
src/test/regress/expected/insert_parallel.out | 713 ++++++++++++++++++
src/test/regress/expected/psql.out | 58 +-
src/test/regress/expected/publication.out | 4 +
.../regress/expected/replica_identity.out | 1 +
src/test/regress/expected/rowsecurity.out | 1 +
src/test/regress/expected/rules.out | 3 +
src/test/regress/expected/stats_ext.out | 1 +
src/test/regress/expected/triggers.out | 1 +
src/test/regress/expected/update.out | 1 +
src/test/regress/output/tablespace.source | 2 +
src/test/regress/parallel_schedule | 1 +
src/test/regress/sql/insert_parallel.sql | 381 ++++++++++
30 files changed, 1456 insertions(+), 28 deletions(-)
create mode 100644 src/test/regress/expected/insert_parallel.out
create mode 100644 src/test/regress/sql/insert_parallel.sql
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c78..5c9b5ea3b9 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -446,6 +446,7 @@ WITH (user_catalog_table = true)
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -460,6 +461,7 @@ ALTER TABLE replication_metadata RESET (user_catalog_table);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
INSERT INTO replication_metadata(relation, options)
VALUES ('bar', ARRAY['a', 'b']);
@@ -473,6 +475,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = true);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -492,6 +495,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
rewritemeornot | integer | | | | plain | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d83f39f7cd..6679ad9974 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -23940,6 +23940,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
Undefined objects are identified with <literal>NULL</literal> values.
</para></entry>
</row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_parallel_dml_safety</primary>
+ </indexterm>
+ <function>pg_get_table_parallel_dml_safety</function> ( <parameter>table_name</parameter> <type>regclass</type> )
+ <returnvalue>record</returnvalue>
+ ( <parameter>objid</parameter> <type>oid</type>,
+ <parameter>classid</parameter> <type>oid</type>,
+ <parameter>proparallel</parameter> <type>char</type> )
+ </para>
+ <para>
+ Returns one row for each parallel unsafe/restricted table-related
+ object from which the table's parallel DML safety is determined, with
+ enough information to uniquely identify it. The user can use this
+ information during development in order to accurately declare a
+ table's parallel DML safety, or to identify any problematic objects
+ if parallel DML fails or behaves unexpectedly. Note that when the
+ use of an object-related parallel unsafe/restricted function is
+ detected, both the function OID and the object OID are returned.
+ <parameter>classid</parameter> is the OID of the system catalog
+ containing the object;
+ <parameter>objid</parameter> is the OID of the object itself.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_max_parallel_dml_hazard</primary>
+ </indexterm>
+ <function>pg_get_table_max_parallel_dml_hazard</function> ( <type>regclass</type> )
+ <returnvalue>char</returnvalue>
+ </para>
+ <para>
+ Returns the worst parallel DML safety hazard that can be found in the
+ given relation:
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>s</literal> safe
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>r</literal> restricted
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>u</literal> unsafe
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ Users can use this function as a quick check of a table's overall
+ parallel DML safety, without examining the specific objects involved.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml
index 7ca03f3ac9..58f1c0d567 100644
--- a/doc/src/sgml/ref/alter_foreign_table.sgml
+++ b/doc/src/sgml/ref/alter_foreign_table.sgml
@@ -29,6 +29,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
RENAME TO <replaceable class="parameter">new_name</replaceable>
ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
SET SCHEMA <replaceable class="parameter">new_schema</replaceable>
+ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -299,6 +301,17 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See the similar form of <link linkend="sql-altertable"><command>ALTER TABLE</command></link>
+ for more details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml
index 0ee756a94d..1a0fd3cd88 100644
--- a/doc/src/sgml/ref/alter_function.sgml
+++ b/doc/src/sgml/ref/alter_function.sgml
@@ -38,7 +38,7 @@ ALTER FUNCTION <replaceable>name</replaceable> [ ( [ [ <replaceable class="param
IMMUTABLE | STABLE | VOLATILE
[ NOT ] LEAKPROOF
[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
- PARALLEL { UNSAFE | RESTRICTED | SAFE }
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
COST <replaceable class="parameter">execution_cost</replaceable>
ROWS <replaceable class="parameter">result_rows</replaceable>
SUPPORT <replaceable class="parameter">support_function</replaceable>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 81291577f8..99bd75648f 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -37,6 +37,8 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
ATTACH PARTITION <replaceable class="parameter">partition_name</replaceable> { FOR VALUES <replaceable class="parameter">partition_bound_spec</replaceable> | DEFAULT }
ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
DETACH PARTITION <replaceable class="parameter">partition_name</replaceable> [ CONCURRENTLY | FINALIZE ]
+ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -1030,6 +1032,16 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See <link linkend="sql-createtable"><command>CREATE TABLE</command></link> for details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index f9477efe58..7a8a7ddbec 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -27,6 +27,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
[, ... ]
] )
[ INHERITS ( <replaceable>parent_table</replaceable> [, ... ] ) ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -36,6 +37,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
| <replaceable>table_constraint</replaceable> }
[, ... ]
) ] <replaceable class="parameter">partition_bound_spec</replaceable>
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -290,6 +292,43 @@ CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ]
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel dml unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers/index expressions/constraints, etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">server_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 15aed2f251..7abc527bf9 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -33,6 +33,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
OF <replaceable class="parameter">type_name</replaceable> [ (
@@ -45,6 +46,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
PARTITION OF <replaceable class="parameter">parent_table</replaceable> [ (
@@ -57,6 +59,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
<phrase>where <replaceable class="parameter">column_constraint</replaceable> is:</phrase>
@@ -1336,6 +1339,47 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="sql-createtable-paralleldmlsafety">
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of parallel
+ modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the table
+ can't be modified in parallel mode, forcing a serial execution plan for DML
+ statements operating on the table. <literal>PARALLEL DML RESTRICTED</literal>
+ indicates that the data in the table can be modified in parallel mode, but
+ the modification is restricted to the parallel group leader.
+ <literal>PARALLEL DML SAFE</literal> indicates that the data in the table
+ can be modified in parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data modification by parallel workers.
+ </para>
+
+ <para>
+ Note that for a partitioned table, <literal>PARALLEL DML DEFAULT</literal>
+ is the same as <literal>PARALLEL DML UNSAFE</literal>, which means that
+ the data in the table can't be modified in parallel mode.
+ </para>
+
+ <para>
+ Tables should be labeled parallel dml unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table
+ (e.g., functions in triggers/index expressions/constraints etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>USING INDEX TABLESPACE <replaceable class="parameter">tablespace_name</replaceable></literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index 07558ab56c..2e7851db44 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -27,6 +27,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+ [ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
AS <replaceable>query</replaceable>
[ WITH [ NO ] DATA ]
</synopsis>
@@ -223,6 +224,43 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel dml unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in the
+ table (e.g., functions in triggers/index expressions/constraints, etc.).
+ </para>
+
+ <para>
+      To assist in correctly labeling the parallel DML safety level of a table,
+      <productname>PostgreSQL</productname> provides utility functions that may
+      be used during application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable>query</replaceable></term>
<listitem>
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 8dcb00ac67..1c360e04bf 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2206,6 +2206,7 @@ alter table test_storage alter column a set storage external;
b | integer | | | 0 | plain | |
Indexes:
"test_storage_idx" btree (b, a)
+Parallel DML: default
\d+ test_storage_idx
Index "public.test_storage_idx"
@@ -4193,6 +4194,7 @@ ALTER TABLE range_parted2 DETACH PARTITION part_rp CONCURRENTLY;
a | integer | | | | plain | |
Partition key: RANGE (a)
Number of partitions: 0
+Parallel DML: default
-- constraint should be created
\d part_rp
diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out
index 1ce2962d55..8559e94226 100644
--- a/src/test/regress/expected/compression_1.out
+++ b/src/test/regress/expected/compression_1.out
@@ -12,6 +12,7 @@ INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
ERROR: compression method lz4 not supported
@@ -51,6 +52,7 @@ SELECT * INTO cmmove1 FROM cmdata;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | | |
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmmove1;
pg_column_compression
@@ -138,6 +140,7 @@ CREATE TABLE cmdata2 (f1 int);
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
\d+ cmdata2
@@ -145,6 +148,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
\d+ cmdata2
@@ -152,6 +156,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
--changing column storage should not impact the compression method
--but the data should not be compressed
@@ -162,6 +167,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | pglz | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
\d+ cmdata2
@@ -169,6 +175,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | pglz | |
+Parallel DML: default
INSERT INTO cmdata2 VALUES (repeat('123456789', 800));
SELECT pg_column_compression(f1) FROM cmdata2;
@@ -249,6 +256,7 @@ INSERT INTO cmdata VALUES (repeat('123456789', 4004));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmdata;
pg_column_compression
@@ -263,6 +271,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION default;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | | |
+Parallel DML: default
-- test alter compression method for materialized views
ALTER MATERIALIZED VIEW compressmv ALTER COLUMN x SET COMPRESSION lz4;
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 5f3685e9ef..46f817417a 100644
--- a/src/test/regress/expected/copy2.out
+++ b/src/test/regress/expected/copy2.out
@@ -519,6 +519,7 @@ alter table check_con_tbl add check (check_con_function(check_con_tbl.*));
f1 | integer | | | | plain | |
Check constraints:
"check_con_tbl_check" CHECK (check_con_function(check_con_tbl.*))
+Parallel DML: default
copy check_con_tbl from stdin;
NOTICE: input = {"f1":1}
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index 96bf426d98..b7e2a535cd 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -505,6 +505,7 @@ Number of partitions: 0
b | text | | | | extended | |
Partition key: RANGE (((a + 1)), substr(b, 1, 5))
Number of partitions: 0
+Parallel DML: default
INSERT INTO partitioned2 VALUES (1, 'hello');
ERROR: no partition of relation "partitioned2" found for row
@@ -518,6 +519,7 @@ CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO
b | text | | | | extended | |
Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc')
Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text))))
+Parallel DML: default
DROP TABLE partitioned, partitioned2;
-- check reference to partitioned table's rowtype in partition descriptor
@@ -559,6 +561,7 @@ select * from partitioned where partitioned = '(1,2)'::partitioned;
b | integer | | | | plain | |
Partition of: partitioned FOR VALUES IN ('(1,2)')
Partition constraint: (((partitioned1.*)::partitioned IS DISTINCT FROM NULL) AND ((partitioned1.*)::partitioned = '(1,2)'::partitioned))
+Parallel DML: default
drop table partitioned;
-- check that dependencies of partition columns are handled correctly
@@ -618,6 +621,7 @@ Partitions: part_null FOR VALUES IN (NULL),
part_p1 FOR VALUES IN (1),
part_p2 FOR VALUES IN (2),
part_p3 FOR VALUES IN (3)
+Parallel DML: default
-- forbidden expressions for partition bound with list partitioned table
CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES IN (somename);
@@ -1064,6 +1068,7 @@ drop table test_part_coll_posix;
b | integer | | not null | 1 | plain | |
Partition of: parted FOR VALUES IN ('b')
Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))
+Parallel DML: default
-- Both partition bound and partition key in describe output
\d+ part_c
@@ -1076,6 +1081,7 @@ Partition of: parted FOR VALUES IN ('c')
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text))
Partition key: RANGE (b)
Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
+Parallel DML: default
-- a level-2 partition's constraint will include the parent's expressions
\d+ part_c_1_10
@@ -1086,6 +1092,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
b | integer | | not null | 0 | plain | |
Partition of: part_c FOR VALUES FROM (1) TO (10)
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10))
+Parallel DML: default
-- Show partition count in the parent's describe output
-- Tempted to include \d+ output listing partitions with bound info but
@@ -1120,6 +1127,7 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL))
+Parallel DML: default
DROP TABLE unbounded_range_part;
CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE);
@@ -1132,6 +1140,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1))
+Parallel DML: default
CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE);
\d+ range_parted4_2
@@ -1143,6 +1152,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7))))
+Parallel DML: default
CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE);
\d+ range_parted4_3
@@ -1154,6 +1164,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9))
+Parallel DML: default
DROP TABLE range_parted4;
-- user-defined operator class in partition key
@@ -1190,6 +1201,7 @@ SELECT obj_description('parted_col_comment'::regclass);
b | text | | | | extended | |
Partition key: LIST (a)
Number of partitions: 0
+Parallel DML: default
DROP TABLE parted_col_comment;
-- list partitioning on array type column
@@ -1202,6 +1214,7 @@ CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}');
a | integer[] | | | | extended | |
Partition of: arrlp FOR VALUES IN ('{1}', '{2}')
Partition constraint: ((a IS NOT NULL) AND ((a = '{1}'::integer[]) OR (a = '{2}'::integer[])))
+Parallel DML: default
DROP TABLE arrlp;
-- partition on boolean column
@@ -1216,6 +1229,7 @@ create table boolspart_f partition of boolspart for values in (false);
Partition key: LIST (a)
Partitions: boolspart_f FOR VALUES IN (false),
boolspart_t FOR VALUES IN (true)
+Parallel DML: default
drop table boolspart;
-- partitions mixing temporary and permanent relations
diff --git a/src/test/regress/expected/create_table_like.out b/src/test/regress/expected/create_table_like.out
index 7ad5fafe93..da59d8b3c2 100644
--- a/src/test/regress/expected/create_table_like.out
+++ b/src/test/regress/expected/create_table_like.out
@@ -333,6 +333,7 @@ CREATE TABLE ctlt12_storage (LIKE ctlt1 INCLUDING STORAGE, LIKE ctlt2 INCLUDING
a | text | | not null | | main | |
b | text | | | | extended | |
c | text | | | | external | |
+Parallel DML: default
CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDING COMMENTS);
\d+ ctlt12_comments
@@ -342,6 +343,7 @@ CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDIN
a | text | | not null | | extended | | A
b | text | | | | extended | | B
c | text | | | | extended | | C
+Parallel DML: default
CREATE TABLE ctlt1_inh (LIKE ctlt1 INCLUDING CONSTRAINTS INCLUDING COMMENTS) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -356,6 +358,7 @@ NOTICE: merging constraint "ctlt1_a_check" with inherited definition
Check constraints:
"ctlt1_a_check" CHECK (length(a) > 2)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt1_inh'::regclass;
description
@@ -378,6 +381,7 @@ Check constraints:
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1,
ctlt3
+Parallel DML: default
CREATE TABLE ctlt13_like (LIKE ctlt3 INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS INCLUDING STORAGE) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -395,6 +399,7 @@ Check constraints:
"ctlt3_a_check" CHECK (length(a) < 5)
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt13_like'::regclass;
description
@@ -418,6 +423,7 @@ Check constraints:
Statistics objects:
"public"."ctlt_all_a_b_stat" ON a, b FROM ctlt_all
"public"."ctlt_all_expr_stat" ON ((a || b)) FROM ctlt_all
+Parallel DML: default
SELECT c.relname, objsubid, description FROM pg_description, pg_index i, pg_class c WHERE classoid = 'pg_class'::regclass AND objoid = i.indexrelid AND c.oid = i.indexrelid AND i.indrelid = 'ctlt_all'::regclass ORDER BY c.relname, objsubid;
relname | objsubid | description
@@ -458,6 +464,7 @@ Check constraints:
Statistics objects:
"public"."pg_attrdef_a_b_stat" ON a, b FROM public.pg_attrdef
"public"."pg_attrdef_expr_stat" ON ((a || b)) FROM public.pg_attrdef
+Parallel DML: default
DROP TABLE public.pg_attrdef;
-- Check that LIKE isn't confused when new table masks the old, either
@@ -480,6 +487,7 @@ Check constraints:
Statistics objects:
"ctl_schema"."ctlt1_a_b_stat" ON a, b FROM ctlt1
"ctl_schema"."ctlt1_expr_stat" ON ((a || b)) FROM ctlt1
+Parallel DML: default
ROLLBACK;
DROP TABLE ctlt1, ctlt2, ctlt3, ctlt4, ctlt12_storage, ctlt12_comments, ctlt1_inh, ctlt13_inh, ctlt13_like, ctlt_all, ctla, ctlb CASCADE;
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..342e9d234d 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -276,6 +276,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision
WHERE (dcomptable.d1).i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
@@ -413,6 +414,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1[1].r = dcomptable.d1[1].r - 1::double precision, d1[1].i = dcomptable.d1[1].i + 1::double precision
WHERE dcomptable.d1[1].i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index 426080ae39..330f25ea9e 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -735,6 +735,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
\det+
List of foreign tables
@@ -857,6 +858,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- can't change the column type if it's used elsewhere
CREATE TABLE use_ft1_column_type (x ft1);
@@ -1396,6 +1398,7 @@ CREATE FOREIGN TABLE ft2 () INHERITS (fd_pt1)
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1407,6 +1410,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
DROP FOREIGN TABLE ft2;
\d+ fd_pt1
@@ -1416,6 +1420,7 @@ DROP FOREIGN TABLE ft2;
c1 | integer | | not null | | plain | |
c2 | text | | | | extended | |
c3 | date | | | | plain | |
+Parallel DML: default
CREATE FOREIGN TABLE ft2 (
c1 integer NOT NULL,
@@ -1431,6 +1436,7 @@ CREATE FOREIGN TABLE ft2 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
\d+ fd_pt1
@@ -1441,6 +1447,7 @@ ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1452,6 +1459,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
CREATE TABLE ct3() INHERITS(ft2);
CREATE FOREIGN TABLE ft3 (
@@ -1475,6 +1483,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1484,6 +1493,7 @@ Child tables: ct3,
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1494,6 +1504,7 @@ Inherits: ft2
c3 | date | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- add attributes recursively
ALTER TABLE fd_pt1 ADD COLUMN c4 integer;
@@ -1514,6 +1525,7 @@ ALTER TABLE fd_pt1 ADD COLUMN c8 integer;
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1532,6 +1544,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1546,6 +1559,7 @@ Child tables: ct3,
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1561,6 +1575,7 @@ Inherits: ft2
c8 | integer | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- alter attributes recursively
ALTER TABLE fd_pt1 ALTER COLUMN c4 SET DEFAULT 0;
@@ -1588,6 +1603,7 @@ ALTER TABLE fd_pt1 ALTER COLUMN c8 SET STORAGE EXTERNAL;
c7 | integer | | | | plain | |
c8 | text | | | | external | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1606,6 +1622,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- drop attributes recursively
ALTER TABLE fd_pt1 DROP COLUMN c4;
@@ -1621,6 +1638,7 @@ ALTER TABLE fd_pt1 DROP COLUMN c8;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1634,6 +1652,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- add constraints recursively
ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk1 CHECK (c1 > 0) NO INHERIT;
@@ -1661,6 +1680,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1676,6 +1696,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
DROP FOREIGN TABLE ft2; -- ERROR
ERROR: cannot drop foreign table ft2 because other objects depend on it
@@ -1708,6 +1729,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1721,6 +1743,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- drop constraints recursively
ALTER TABLE fd_pt1 DROP CONSTRAINT fd_pt1chk1 CASCADE;
@@ -1738,6 +1761,7 @@ ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk3 CHECK (c2 <> '') NOT VALID;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text) NOT VALID
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1752,6 +1776,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- VALIDATE CONSTRAINT need do nothing on foreign tables
ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
@@ -1765,6 +1790,7 @@ ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1779,6 +1805,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- changes name of an attribute recursively
ALTER TABLE fd_pt1 RENAME COLUMN c1 TO f1;
@@ -1796,6 +1823,7 @@ ALTER TABLE fd_pt1 RENAME CONSTRAINT fd_pt1chk3 TO f2_check;
Check constraints:
"f2_check" CHECK (f2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1810,6 +1838,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- TRUNCATE doesn't work on foreign tables, either directly or recursively
TRUNCATE ft2; -- ERROR
@@ -1859,6 +1888,7 @@ CREATE FOREIGN TABLE fd_pt2_1 PARTITION OF fd_pt2 FOR VALUES IN (1)
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1871,6 +1901,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- partition cannot have additional columns
DROP FOREIGN TABLE fd_pt2_1;
@@ -1890,6 +1921,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c4 | character(1) | | | | | extended | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: table "fd_pt2_1" contains column "c4" not found in parent "fd_pt2"
@@ -1904,6 +1936,7 @@ DROP FOREIGN TABLE fd_pt2_1;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
CREATE FOREIGN TABLE fd_pt2_1 (
c1 integer NOT NULL,
@@ -1919,6 +1952,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- no attach partition validation occurs for foreign tables
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
@@ -1931,6 +1965,7 @@ ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1943,6 +1978,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot add column to a partition
ALTER TABLE fd_pt2_1 ADD c4 char;
@@ -1959,6 +1995,7 @@ ALTER TABLE fd_pt2_1 ADD CONSTRAINT p21chk CHECK (c2 <> '');
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1973,6 +2010,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot drop inherited NOT NULL constraint from a partition
ALTER TABLE fd_pt2_1 ALTER c1 DROP NOT NULL;
@@ -1989,6 +2027,7 @@ ALTER TABLE fd_pt2 ALTER c2 SET NOT NULL;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2001,6 +2040,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: column "c2" in child table must be marked NOT NULL
@@ -2019,6 +2059,7 @@ Partition key: LIST (c1)
Check constraints:
"fd_pt2chk1" CHECK (c1 > 0)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2031,6 +2072,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: child table is missing constraint "fd_pt2chk1"
diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out
index 99811570b7..6908fd141b 100644
--- a/src/test/regress/expected/identity.out
+++ b/src/test/regress/expected/identity.out
@@ -506,6 +506,7 @@ TABLE itest8;
f3 | integer | | not null | generated by default as identity | plain | |
f4 | bigint | | not null | generated always as identity | plain | |
f5 | bigint | | | | plain | |
+Parallel DML: default
\d itest8_f2_seq
Sequence "public.itest8_f2_seq"
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 06f44287bc..1c0da28d78 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1059,6 +1059,7 @@ ALTER TABLE inhts RENAME d TO dd;
dd | integer | | | | plain | |
Inherits: inht1,
inhs1
+Parallel DML: default
DROP TABLE inhts;
-- Test for renaming in diamond inheritance
@@ -1079,6 +1080,7 @@ ALTER TABLE inht1 RENAME aa TO aaa;
z | integer | | | | plain | |
Inherits: inht2,
inht3
+Parallel DML: default
CREATE TABLE inhts (d int) INHERITS (inht2, inhs1);
NOTICE: merging multiple inherited definitions of column "b"
@@ -1096,6 +1098,7 @@ ERROR: cannot rename inherited column "b"
d | integer | | | | plain | |
Inherits: inht2,
inhs1
+Parallel DML: default
WITH RECURSIVE r AS (
SELECT 'inht1'::regclass AS inhrelid
@@ -1142,6 +1145,7 @@ CREATE TABLE test_constraints_inh () INHERITS (test_constraints);
Indexes:
"test_constraints_val1_val2_key" UNIQUE CONSTRAINT, btree (val1, val2)
Child tables: test_constraints_inh
+Parallel DML: default
ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key;
\d+ test_constraints
@@ -1152,6 +1156,7 @@ ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Child tables: test_constraints_inh
+Parallel DML: default
\d+ test_constraints_inh
Table "public.test_constraints_inh"
@@ -1161,6 +1166,7 @@ Child tables: test_constraints_inh
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Inherits: test_constraints
+Parallel DML: default
DROP TABLE test_constraints_inh;
DROP TABLE test_constraints;
@@ -1177,6 +1183,7 @@ CREATE TABLE test_ex_constraints_inh () INHERITS (test_ex_constraints);
Indexes:
"test_ex_constraints_c_excl" EXCLUDE USING gist (c WITH &&)
Child tables: test_ex_constraints_inh
+Parallel DML: default
ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
\d+ test_ex_constraints
@@ -1185,6 +1192,7 @@ ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Child tables: test_ex_constraints_inh
+Parallel DML: default
\d+ test_ex_constraints_inh
Table "public.test_ex_constraints_inh"
@@ -1192,6 +1200,7 @@ Child tables: test_ex_constraints_inh
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Inherits: test_ex_constraints
+Parallel DML: default
DROP TABLE test_ex_constraints_inh;
DROP TABLE test_ex_constraints;
@@ -1208,6 +1217,7 @@ Indexes:
"test_primary_constraints_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "test_foreign_constraints" CONSTRAINT "test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
+Parallel DML: default
\d+ test_foreign_constraints
Table "public.test_foreign_constraints"
@@ -1217,6 +1227,7 @@ Referenced by:
Foreign-key constraints:
"test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
Child tables: test_foreign_constraints_inh
+Parallel DML: default
ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id1_fkey;
\d+ test_foreign_constraints
@@ -1225,6 +1236,7 @@ ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Child tables: test_foreign_constraints_inh
+Parallel DML: default
\d+ test_foreign_constraints_inh
Table "public.test_foreign_constraints_inh"
@@ -1232,6 +1244,7 @@ Child tables: test_foreign_constraints_inh
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Inherits: test_foreign_constraints
+Parallel DML: default
DROP TABLE test_foreign_constraints_inh;
DROP TABLE test_foreign_constraints;
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..9e4a1bf886 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -177,6 +177,7 @@ Rules:
irule3 AS
ON INSERT TO inserttest2 DO INSERT INTO inserttest (f4[1].if1, f4[1].if2[2]) SELECT new.f1,
new.f2
+Parallel DML: default
drop table inserttest2;
drop table inserttest;
@@ -482,6 +483,7 @@ Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'),
part_null FOR VALUES IN (NULL),
part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED,
part_default DEFAULT, PARTITIONED
+Parallel DML: default
-- cleanup
drop table range_parted, list_parted;
@@ -497,6 +499,7 @@ create table part_default partition of list_parted default;
a | integer | | | | plain | |
Partition of: list_parted DEFAULT
No partition constraint
+Parallel DML: default
insert into part_default values (null);
insert into part_default values (1);
@@ -888,6 +891,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE),
mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE),
mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
+Parallel DML: default
\d+ mcrparted1_lt_b
Table "public.mcrparted1_lt_b"
@@ -897,6 +901,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
+Parallel DML: default
\d+ mcrparted2_b
Table "public.mcrparted2_b"
@@ -906,6 +911,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text) AND (a < 'c'::text))
+Parallel DML: default
\d+ mcrparted3_c_to_common
Table "public.mcrparted3_c_to_common"
@@ -915,6 +921,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text) AND (a < 'common'::text))
+Parallel DML: default
\d+ mcrparted4_common_lt_0
Table "public.mcrparted4_common_lt_0"
@@ -924,6 +931,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MINVALUE) TO ('common', 0)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b < 0))
+Parallel DML: default
\d+ mcrparted5_common_0_to_10
Table "public.mcrparted5_common_0_to_10"
@@ -933,6 +941,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 0) TO ('common', 10)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 0) AND (b < 10))
+Parallel DML: default
\d+ mcrparted6_common_ge_10
Table "public.mcrparted6_common_ge_10"
@@ -942,6 +951,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 10))
+Parallel DML: default
\d+ mcrparted7_gt_common_lt_d
Table "public.mcrparted7_gt_common_lt_d"
@@ -951,6 +961,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::text) AND (a < 'd'::text))
+Parallel DML: default
\d+ mcrparted8_ge_d
Table "public.mcrparted8_ge_d"
@@ -960,6 +971,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text))
+Parallel DML: default
insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10),
('comm', -10), ('common', -10), ('common', 0), ('common', 10),
diff --git a/src/test/regress/expected/insert_parallel.out b/src/test/regress/expected/insert_parallel.out
new file mode 100644
index 0000000000..28eb537687
--- /dev/null
+++ b/src/test/regress/expected/insert_parallel.out
@@ -0,0 +1,713 @@
+--
+-- PARALLEL
+--
+--
+-- START: setup some tables and data needed by the tests.
+--
+-- Setup - index expressions test
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+-- Setup - column default tests
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+--
+-- END: setup some tables and data needed by the tests.
+--
+begin;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_trigger | r
+ pg_proc | r
+ pg_trigger | r
+(4 rows)
+
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_safe
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------
+ Insert on para_insert_p1
+ -> Seq Scan on tenk1
+(2 rows)
+
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------------------------
+ Insert on para_insert_with_parallel_unsafe
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------------
+ Insert on para_insert_with_parallel_restricted
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_auto
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+NOTICE: truncate cascades to table "para_insert_f1"
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+ QUERY PLAN
+----------------------------------------------
+ Insert on para_insert_p1
+ -> Gather Merge
+ Workers Planned: 4
+ -> Sort
+ Sort Key: tenk1.unique1
+ -> Parallel Seq Scan on tenk1
+(6 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_data1
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+ Filter: (a = 10)
+(5 rows)
+
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+ data
+------
+ 10
+(1 row)
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId
+-- and within a worker this is not currently supported)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_f1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_conflict_table
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+ QUERY PLAN
+------------------------------------------------------
+ Insert on test_conflict_table
+ Conflict Resolution: UPDATE
+ Conflict Arbiter Indexes: test_conflict_table_pkey
+ -> Seq Scan on test_data
+(4 rows)
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_index | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names2');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names2 select * from names;
+ QUERY PLAN
+-------------------------
+ Insert on names2
+ -> Seq Scan on names
+(2 rows)
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_index | r
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names4');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into names4 select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names4
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+ QUERY PLAN
+----------------------------------------
+ Insert on names5
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names6
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names6 select * from names order by last_name returning *;
+ index | first_name | last_name
+-------+------------+-------------
+ 2 | niels | bohr
+ 1 | albert | einstein
+ 4 | leonhard | euler
+ 8 | richard | feynman
+ 5 | stephen | hawking
+ 6 | isaac | newton
+ 3 | erwin | schrodinger
+ 7 | alan | turing
+(8 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names7
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ last_name_then_first_name
+---------------------------
+ bohr, niels
+ einstein, albert
+ euler, leonhard
+ feynman, richard
+ hawking, stephen
+ newton, isaac
+ schrodinger, erwin
+ turing, alan
+(8 rows)
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_class | r
+(1 row)
+
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into temp_names select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on temp_names
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into temp_names select * from names;
+--
+-- Test INSERT with column defaults
+--
+--
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on testdef
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+ a | b | c | d
+----+----+----+----
+ 1 | 2 | 10 | 8
+ 2 | 4 | 10 | 16
+ 3 | 6 | 10 | 24
+ 4 | 8 | 10 | 32
+ 5 | 10 | 10 | 40
+ 6 | 12 | 10 | 48
+ 7 | 14 | 10 | 56
+ 8 | 16 | 10 | 64
+ 9 | 18 | 10 | 72
+ 10 | 20 | 10 | 80
+(10 rows)
+
+truncate testdef;
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+alter table parttable1 parallel dml safe;
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on parttable1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+ count
+-------
+ 5000
+(1 row)
+
+select count(*) from parttable1_2;
+ count
+-------
+ 5000
+(1 row)
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on table_check_b
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+(0 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ s
+(1 row)
+
+explain (costs off) insert into names_with_safe_trigger select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names_with_safe_trigger
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into names_with_safe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_safe
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+ QUERY PLAN
+-------------------------------------
+ Insert on names_with_unsafe_trigger
+ -> Seq Scan on names
+(2 rows)
+
+insert into names_with_unsafe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_unsafe
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------------
+ Insert on part_unsafe_trigger
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+create table dom_table_u (x inotnull_u, y int);
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on dom_table_u
+ -> Seq Scan on tenk1
+(2 rows)
+
+rollback;
+--
+-- Clean up anything not created in the transaction
+--
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 1b2f6bc418..1fedebcd9b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2818,6 +2818,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2825,6 +2826,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\set HIDE_TABLEAM off
\d+ tbl_heap_psql
@@ -2834,6 +2836,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap_psql
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2842,50 +2845,51 @@ Access method: heap_psql
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap
+Parallel DML: default
-- AM is displayed for tables, indexes and materialized views.
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | | default | 0 bytes |
(4 rows)
\dt+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+---------------+-------+----------------------+-------------+---------------+---------+-------------
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+---------------+-------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
(2 rows)
\dm+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
(1 row)
-- But not for views and sequences.
\dv+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+----------------+------+----------------------+-------------+---------+-------------
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+----------------+------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(1 row)
\set HIDE_TABLEAM on
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(4 rows)
RESET ROLE;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4a5ef0bc24..f448b80856 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -85,6 +85,7 @@ Indexes:
"testpub_tbl2_pkey" PRIMARY KEY, btree (id)
Publications:
"testpub_foralltables"
+Parallel DML: default
\dRp+ testpub_foralltables
Publication testpub_foralltables
@@ -198,6 +199,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\d+ testpub_tbl1
Table "public.testpub_tbl1"
@@ -211,6 +213,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\dRp+ testpub_default
Publication testpub_default
@@ -236,6 +239,7 @@ Indexes:
Publications:
"testpib_ins_trunct"
"testpub_fortbl"
+Parallel DML: default
-- permissions
SET ROLE regress_publication_user2;
diff --git a/src/test/regress/expected/replica_identity.out b/src/test/regress/expected/replica_identity.out
index 79002197a7..8fce774332 100644
--- a/src/test/regress/expected/replica_identity.out
+++ b/src/test/regress/expected/replica_identity.out
@@ -171,6 +171,7 @@ Indexes:
"test_replica_identity_unique_defer" UNIQUE CONSTRAINT, btree (keya, keyb) DEFERRABLE
"test_replica_identity_unique_nondefer" UNIQUE CONSTRAINT, btree (keya, keyb)
Replica Identity: FULL
+Parallel DML: default
ALTER TABLE test_replica_identity REPLICA IDENTITY NOTHING;
SELECT relreplident FROM pg_class WHERE oid = 'test_replica_identity'::regclass;
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 89397e41f0..5e6807f90a 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -958,6 +958,7 @@ Policies:
Partitions: part_document_fiction FOR VALUES FROM (11) TO (12),
part_document_nonfiction FOR VALUES FROM (99) TO (100),
part_document_satire FOR VALUES FROM (55) TO (56)
+Parallel DML: default
SELECT * FROM pg_policies WHERE schemaname = 'regress_rls_schema' AND tablename like '%part_document%' ORDER BY policyname;
schemaname | tablename | policyname | permissive | roles | cmd | qual | with_check
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..0ae35e1662 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -3155,6 +3155,7 @@ Rules:
r3 AS
ON DELETE TO rules_src DO
NOTIFY rules_src_deletion
+Parallel DML: default
--
-- Ensure an aliased target relation for insert is correctly deparsed.
@@ -3183,6 +3184,7 @@ Rules:
r5 AS
ON UPDATE TO rules_src DO INSTEAD UPDATE rules_log trgt SET tag = 'updated'::text
WHERE trgt.f1 = new.f1
+Parallel DML: default
--
-- Also check multiassignment deparsing.
@@ -3206,6 +3208,7 @@ Rules:
WHERE trgt.f1 = new.f1
RETURNING new.f1,
new.f2
+Parallel DML: default
drop table rule_t1, rule_dest;
--
diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out
index 7fb54de53d..e4fa545c8c 100644
--- a/src/test/regress/expected/stats_ext.out
+++ b/src/test/regress/expected/stats_ext.out
@@ -145,6 +145,7 @@ ALTER STATISTICS ab1_a_b_stats SET STATISTICS -1;
b | integer | | | | plain | |
Statistics objects:
"public"."ab1_a_b_stats" ON a, b FROM ab1
+Parallel DML: default
-- partial analyze doesn't build stats either
ANALYZE ab1 (a);
diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out
index 5d124cf96f..9d39fad795 100644
--- a/src/test/regress/expected/triggers.out
+++ b/src/test/regress/expected/triggers.out
@@ -3483,6 +3483,7 @@ alter trigger parenttrig on parent rename to anothertrig;
Triggers:
parenttrig AFTER INSERT ON child FOR EACH ROW EXECUTE FUNCTION f()
Inherits: parent
+Parallel DML: default
drop table parent, child;
drop function f();
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index c809f88f54..d99b133644 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -753,6 +753,7 @@ create table part_def partition of range_parted default;
e | character varying | | | | extended | |
Partition of: range_parted DEFAULT
Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint)))))
+Parallel DML: default
insert into range_parted values ('c', 9);
-- ok
diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source
index 1bbe7e0323..19c65ce435 100644
--- a/src/test/regress/output/tablespace.source
+++ b/src/test/regress/output/tablespace.source
@@ -339,6 +339,7 @@ Indexes:
"part_a_idx" btree (a), tablespace "regress_tblspace"
Partitions: testschema.part1 FOR VALUES IN (1),
testschema.part2 FOR VALUES IN (2)
+Parallel DML: default
\d testschema.part1
Table "testschema.part1"
@@ -358,6 +359,7 @@ Partition of: testschema.part FOR VALUES IN (1)
Partition constraint: ((a IS NOT NULL) AND (a = 1))
Indexes:
"part1_a_idx" btree (a), tablespace "regress_tblspace"
+Parallel DML: default
\d testschema.part_a_idx
Partitioned index "testschema.part_a_idx"
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 7be89178f0..daf0bad4d5 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
+test: insert_parallel
# no relation related tests can be put in this group
test: publication subscription
diff --git a/src/test/regress/sql/insert_parallel.sql b/src/test/regress/sql/insert_parallel.sql
new file mode 100644
index 0000000000..9bf1809ccd
--- /dev/null
+++ b/src/test/regress/sql/insert_parallel.sql
@@ -0,0 +1,381 @@
+--
+-- PARALLEL
+--
+
+--
+-- START: setup some tables and data needed by the tests.
+--
+
+-- Setup - index expressions test
+
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+
+
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+
+-- Setup - column default tests
+
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+
+--
+-- END: setup some tables and data needed by the tests.
+--
+
+begin;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- since doing it in a parallel worker would create a new commandId,
+-- and that is not currently supported within a worker)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+select pg_get_table_max_parallel_dml_hazard('names2');
+explain (costs off) insert into names2 select * from names;
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+select pg_get_table_max_parallel_dml_hazard('names4');
+explain (costs off) insert into names4 select * from names;
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+insert into names6 select * from names order by last_name returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+explain (costs off) insert into temp_names select * from names;
+insert into temp_names select * from names;
+
+--
+-- Test INSERT with column defaults
+--
+--
+
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+truncate testdef;
+
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+
+alter table parttable1 parallel dml safe;
+
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+select count(*) from parttable1_2;
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+explain (costs off) insert into names_with_safe_trigger select * from names;
+insert into names_with_safe_trigger select * from names;
+
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+insert into names_with_unsafe_trigger select * from names;
+
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+
+create table dom_table_u (x inotnull_u, y int);
+
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+
+rollback;
+
+--
+-- Clean up anything not created in the transaction
+--
+
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
--
2.27.0
Attachment: v16-0006-Workaround-for-query-rewriter-hasModifyingCTE-bug.patch (application/octet-stream)
From 0b7733c62a4bc80aab9dd36bd593982da1586429 Mon Sep 17 00:00:00 2001
From: Greg Nancarrow <gregn4422@gmail.com>
Date: Fri, 6 Aug 2021 13:39:45 +1000
Subject: [PATCH] Workaround for query rewriter bug which results in
modifyingCTE flag not being set.
If a query uses a modifying CTE, the hasModifyingCTE flag should be set in the
query tree, and the query will be regarded as parallel-unsafe. However, in some
cases, a re-written query with a modifying CTE does not have that flag set, due
to a bug in the query rewriter. The workaround is to update
max_parallel_hazard_walker() to detect a modifying CTE in the query and, in
that case, regard the query as parallel-unsafe.
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
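To illustrate, a query of the kind this workaround guards against (the table
names here are just placeholders) looks like:

```sql
-- The DELETE inside the CTE makes the whole statement a data-modifying
-- query, so hasModifyingCTE should be set on the rewritten Query tree
-- and the query treated as parallel-unsafe.
WITH moved_rows AS (
    DELETE FROM source_tbl RETURNING *
)
INSERT INTO archive_tbl SELECT * FROM moved_rows;
```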
---
src/backend/optimizer/util/clauses.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..7eb305ffda 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -758,6 +758,30 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
return true;
}
+ /*
+ * ModifyingCTE expressions are treated as parallel-unsafe.
+ *
+ * XXX Normally, if the Query has a modifying CTE, the hasModifyingCTE
+ * flag is set in the Query tree, and the query will be regarded as
+ * parallel-unsafe. However, in some cases, a re-written query with a
+ * modifying CTE does not have that flag set, due to a bug in the query
+ * rewriter. The following else-if is a workaround for this bug, to detect
+ * a modifying CTE in the query and regard it as parallel-unsafe. This
+ * comment, and the else-if block immediately below, may be removed once
+ * the bug in the query rewriter is fixed.
+ */
+ else if (IsA(node, CommonTableExpr))
+ {
+ CommonTableExpr *cte = (CommonTableExpr *) node;
+ Query *ctequery = castNode(Query, cte->ctequery);
+
+ if (ctequery->commandType != CMD_SELECT)
+ {
+ context->max_hazard = PROPARALLEL_UNSAFE;
+ return true;
+ }
+ }
+
/*
* As a notational convenience for callers, look through RestrictInfo.
*/
--
2.27.0
Attachment: v16-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch (application/octet-stream)
From 01bdde01fb66e93928cb84b6aeee7dd31ea9ad83 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Tue, 3 Aug 2021 14:13:39 +0800
Subject: [PATCH] CREATE-ALTER-TABLE-PARALLEL-DML
Enable users to declare a table's parallel data-modification safety
(DEFAULT/SAFE/RESTRICTED/UNSAFE).
Add a table property that represents parallel safety of a table for
DML statement execution.
It can be specified as follows:
CREATE TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparalleldml column as 'u', 'r',
or 's', like pg_proc's proparallel, or as 'd' if not set. The default is 'd'.
If relparalleldml is specified (safe/restricted/unsafe), the planner assumes
that the table, all of its descendant partitions, and their ancillary objects
have, at worst, the specified parallel safety. The user is responsible for
its correctness.
If relparalleldml is not set, or is set to DEFAULT, the planner checks the
parallel safety of a non-partitioned table automatically (see the 0004 patch),
but assumes that a partitioned table is UNSAFE to be modified in parallel
mode.
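For example, with this patch applied the property can be set and inspected
like this (the table name is illustrative):

```sql
-- Declare that parallel DML on this table is safe; the planner may then
-- consider a parallel plan for INSERT ... SELECT into it.
CREATE TABLE measurements (id int PRIMARY KEY, val text) PARALLEL DML SAFE;

-- Change the declared safety later, or revert to automatic checking:
ALTER TABLE measurements PARALLEL DML RESTRICTED;
ALTER TABLE measurements PARALLEL DML DEFAULT;

-- The setting is stored in pg_class.relparalleldml ('s', 'r', 'u', or 'd'):
SELECT relparalleldml FROM pg_class WHERE relname = 'measurements';
```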
---
src/backend/bootstrap/bootparse.y | 3 +
src/backend/catalog/heap.c | 7 +-
src/backend/catalog/index.c | 2 +
src/backend/catalog/toasting.c | 1 +
src/backend/commands/cluster.c | 1 +
src/backend/commands/createas.c | 1 +
src/backend/commands/sequence.c | 1 +
src/backend/commands/tablecmds.c | 97 +++++++++++++++++++
src/backend/commands/typecmds.c | 1 +
src/backend/commands/view.c | 1 +
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 1 +
src/backend/parser/gram.y | 73 ++++++++++----
src/backend/utils/cache/relcache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 50 ++++++++--
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 71 ++++++++++++--
src/include/catalog/heap.h | 2 +
src/include/catalog/pg_class.h | 3 +
src/include/catalog/pg_proc.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/nodes/primnodes.h | 1 +
src/include/parser/kwlist.h | 1 +
src/include/utils/relcache.h | 3 +-
.../test_ddl_deparse/test_ddl_deparse.c | 3 +
27 files changed, 302 insertions(+), 39 deletions(-)
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 5fcd004e1b..4712536088 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -25,6 +25,7 @@
#include "catalog/pg_authid.h"
#include "catalog/pg_class.h"
#include "catalog/pg_namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/toasting.h"
#include "commands/defrem.h"
@@ -208,6 +209,7 @@ Boot_CreateStmt:
tupdesc,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
true,
@@ -231,6 +233,7 @@ Boot_CreateStmt:
NIL,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 83746d3fd9..135df961c9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -302,6 +302,7 @@ heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -404,7 +405,8 @@ heap_create(const char *relname,
shared_relation,
mapped_relation,
relpersistence,
- relkind);
+ relkind,
+ relparalleldml);
/*
* Have the storage manager create the relation's disk file, if needed.
@@ -959,6 +961,7 @@ InsertPgClassTuple(Relation pg_class_desc,
values[Anum_pg_class_relhassubclass - 1] = BoolGetDatum(rd_rel->relhassubclass);
values[Anum_pg_class_relispopulated - 1] = BoolGetDatum(rd_rel->relispopulated);
values[Anum_pg_class_relreplident - 1] = CharGetDatum(rd_rel->relreplident);
+ values[Anum_pg_class_relparalleldml - 1] = CharGetDatum(rd_rel->relparalleldml);
values[Anum_pg_class_relispartition - 1] = BoolGetDatum(rd_rel->relispartition);
values[Anum_pg_class_relrewrite - 1] = ObjectIdGetDatum(rd_rel->relrewrite);
values[Anum_pg_class_relfrozenxid - 1] = TransactionIdGetDatum(rd_rel->relfrozenxid);
@@ -1152,6 +1155,7 @@ heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
@@ -1299,6 +1303,7 @@ heap_create_with_catalog(const char *relname,
tupdesc,
relkind,
relpersistence,
+ relparalleldml,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26bfa74ce7..18f3a51686 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
@@ -935,6 +936,7 @@ index_create(Relation heapRelation,
indexTupDesc,
relkind,
relpersistence,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 147b5abc19..b32d2d4132 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -251,6 +251,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
NIL,
RELKIND_TOASTVALUE,
rel->rd_rel->relpersistence,
+ rel->rd_rel->relparalleldml,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index b3d8b6deb0..d1a7603d90 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -693,6 +693,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
NIL,
RELKIND_RELATION,
relpersistence,
+ OldHeap->rd_rel->relparalleldml,
false,
RelationIsMapped(OldHeap),
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 0982851715..7607b91ae8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -107,6 +107,7 @@ create_ctas_internal(List *attrList, IntoClause *into)
create->options = into->options;
create->oncommit = into->onCommit;
create->tablespacename = into->tableSpaceName;
+ create->paralleldmlsafety = into->paralleldmlsafety;
create->if_not_exists = false;
create->accessMethod = into->accessMethod;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..384770050a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -211,6 +211,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
stmt->options = NIL;
stmt->oncommit = ONCOMMIT_NOOP;
stmt->tablespacename = NULL;
+ stmt->paralleldmlsafety = NULL;
stmt->if_not_exists = seq->if_not_exists;
address = DefineRelation(stmt, RELKIND_SEQUENCE, seq->ownerId, NULL, NULL);
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fcd778c62a..5968252648 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -40,6 +40,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_trigger.h"
@@ -603,6 +604,7 @@ static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
static List *GetParentedForeignKeyRefs(Relation partition);
static void ATDetachCheckNoForeignKeyRefs(Relation partition);
static char GetAttributeCompression(Oid atttypid, char *compression);
+static void ATExecParallelDMLSafety(Relation rel, Node *def);
/* ----------------------------------------------------------------
@@ -648,6 +650,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
LOCKMODE parentLockmode;
const char *accessMethod = NULL;
Oid accessMethodId = InvalidOid;
+ char relparalleldml = PROPARALLEL_DEFAULT;
/*
* Truncate relname to appropriate length (probably a waste of time, as
@@ -926,6 +929,32 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
if (accessMethod != NULL)
accessMethodId = get_table_am_oid(accessMethod, false);
+ if (stmt->paralleldmlsafety != NULL)
+ {
+ if (strcmp(stmt->paralleldmlsafety, "safe") == 0)
+ {
+ if (relkind == RELKIND_FOREIGN_TABLE ||
+ stmt->relation->relpersistence == RELPERSISTENCE_TEMP)
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ relname),
+ errdetail_relkind_not_supported(relkind)));
+
+ relparalleldml = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(stmt->paralleldmlsafety, "restricted") == 0)
+ relparalleldml = PROPARALLEL_RESTRICTED;
+ else if (strcmp(stmt->paralleldmlsafety, "unsafe") == 0)
+ relparalleldml = PROPARALLEL_UNSAFE;
+ else if (strcmp(stmt->paralleldmlsafety, "default") == 0)
+ relparalleldml = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
/*
* Create the relation. Inherited defaults and constraints are passed in
* for immediate handling --- since they don't need parsing, they can be
@@ -944,6 +973,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
old_constraints),
relkind,
stmt->relation->relpersistence,
+ relparalleldml,
false,
false,
stmt->oncommit,
@@ -4187,6 +4217,7 @@ AlterTableGetLockLevel(List *cmds)
case AT_SetIdentity:
case AT_DropExpression:
case AT_SetCompression:
+ case AT_ParallelDMLSafety:
cmd_lockmode = AccessExclusiveLock;
break;
@@ -4737,6 +4768,11 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
/* No command-specific prep needed */
pass = AT_PASS_MISC;
break;
+ case AT_ParallelDMLSafety:
+ ATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_FOREIGN_TABLE);
+ /* No command-specific prep needed */
+ pass = AT_PASS_MISC;
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -5142,6 +5178,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab,
case AT_DetachPartitionFinalize:
ATExecDetachPartitionFinalize(rel, ((PartitionCmd *) cmd->def)->name);
break;
+ case AT_ParallelDMLSafety:
+ ATExecParallelDMLSafety(rel, cmd->def);
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -6113,6 +6152,8 @@ alter_table_type_to_string(AlterTableType cmdtype)
return "ALTER COLUMN ... DROP IDENTITY";
case AT_ReAddStatistics:
return NULL; /* not real grammar */
+ case AT_ParallelDMLSafety:
+ return "PARALLEL DML SAFETY";
}
return NULL;
@@ -18773,3 +18814,59 @@ GetAttributeCompression(Oid atttypid, char *compression)
return cmethod;
}
+
+static void
+ATExecParallelDMLSafety(Relation rel, Node *def)
+{
+ Relation pg_class;
+ Oid relid;
+ HeapTuple tuple;
+ char relparallel = PROPARALLEL_DEFAULT;
+ char *parallel = strVal(def);
+
+ if (parallel)
+ {
+ if (strcmp(parallel, "safe") == 0)
+ {
+ /*
+ * We can't support table modification in a parallel worker if it's
+ * a foreign table/partition (no FDW API for supporting parallel
+ * access) or a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));
+
+ relparallel = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(parallel, "restricted") == 0)
+ relparallel = PROPARALLEL_RESTRICTED;
+ else if (strcmp(parallel, "unsafe") == 0)
+ relparallel = PROPARALLEL_UNSAFE;
+ else if (strcmp(parallel, "default") == 0)
+ relparallel = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
+ relid = RelationGetRelid(rel);
+
+ pg_class = table_open(RelationRelationId, RowExclusiveLock);
+
+ tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));
+
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", relid);
+
+ ((Form_pg_class) GETSTRUCT(tuple))->relparalleldml = relparallel;
+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);
+
+ table_close(pg_class, RowExclusiveLock);
+ heap_freetuple(tuple);
+}
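The string-to-character mapping that both DefineRelation and ATExecParallelDMLSafety perform above can be sketched outside the backend as a small standalone function (constant values mirror src/include/catalog/pg_proc.h; the real code raises an ERROR instead of returning a sentinel):

```c
#include <string.h>

/* Mirrors the PROPARALLEL_* constants in src/include/catalog/pg_proc.h */
#define PROPARALLEL_SAFE       's'
#define PROPARALLEL_RESTRICTED 'r'
#define PROPARALLEL_UNSAFE     'u'
#define PROPARALLEL_DEFAULT    'd'

/*
 * Map a PARALLEL DML option string to its pg_class.relparalleldml value.
 * Returns '\0' for an unrecognized option (where the patch ereport()s a
 * syntax error).
 */
char
parallel_dml_char(const char *option)
{
    if (strcmp(option, "safe") == 0)
        return PROPARALLEL_SAFE;
    if (strcmp(option, "restricted") == 0)
        return PROPARALLEL_RESTRICTED;
    if (strcmp(option, "unsafe") == 0)
        return PROPARALLEL_UNSAFE;
    if (strcmp(option, "default") == 0)
        return PROPARALLEL_DEFAULT;
    return '\0';
}
```

Since this mapping now appears twice in tablecmds.c, factoring it into one helper like this might also reduce duplication in a later patch version.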
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 93eeff950b..a2f06c3e79 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2525,6 +2525,7 @@ DefineCompositeType(RangeVar *typevar, List *coldeflist)
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 4df05a0b33..65f33a95d8 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -227,6 +227,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
createStmt->options = options;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..df41165c5f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3534,6 +3534,7 @@ CopyCreateStmtFields(const CreateStmt *from, CreateStmt *newnode)
COPY_SCALAR_FIELD(oncommit);
COPY_STRING_FIELD(tablespacename);
COPY_STRING_FIELD(accessMethod);
+ COPY_STRING_FIELD(paralleldmlsafety);
COPY_SCALAR_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..67b1966f18 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -146,6 +146,7 @@ _equalIntoClause(const IntoClause *a, const IntoClause *b)
COMPARE_NODE_FIELD(options);
COMPARE_SCALAR_FIELD(onCommit);
COMPARE_STRING_FIELD(tableSpaceName);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_NODE_FIELD(viewQuery);
COMPARE_SCALAR_FIELD(skipData);
@@ -1292,6 +1293,7 @@ _equalCreateStmt(const CreateStmt *a, const CreateStmt *b)
COMPARE_SCALAR_FIELD(oncommit);
COMPARE_STRING_FIELD(tablespacename);
COMPARE_STRING_FIELD(accessMethod);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_SCALAR_FIELD(if_not_exists);
return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 48202d2232..fdc5b63c28 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1107,6 +1107,7 @@ _outIntoClause(StringInfo str, const IntoClause *node)
WRITE_NODE_FIELD(options);
WRITE_ENUM_FIELD(onCommit, OnCommitAction);
WRITE_STRING_FIELD(tableSpaceName);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_NODE_FIELD(viewQuery);
WRITE_BOOL_FIELD(skipData);
}
@@ -2714,6 +2715,7 @@ _outCreateStmtInfo(StringInfo str, const CreateStmt *node)
WRITE_ENUM_FIELD(oncommit, OnCommitAction);
WRITE_STRING_FIELD(tablespacename);
WRITE_STRING_FIELD(accessMethod);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_BOOL_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 77d082d8b4..ba725cb290 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -563,6 +563,7 @@ _readIntoClause(void)
READ_NODE_FIELD(options);
READ_ENUM_FIELD(onCommit, OnCommitAction);
READ_STRING_FIELD(tableSpaceName);
+ READ_STRING_FIELD(paralleldmlsafety);
READ_NODE_FIELD(viewQuery);
READ_BOOL_FIELD(skipData);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..f74a7cac60 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -609,7 +609,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <partboundspec> PartitionBoundSpec
%type <list> hash_partbound
%type <defelt> hash_partbound_elem
-
+%type <str> ParallelDMLSafety
/*
* Non-keyword token types. These are hard-wired into the "flex" lexer.
@@ -654,7 +654,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
DATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS
DEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC
- DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P
+ DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DML DO DOCUMENT_P DOMAIN_P
DOUBLE_P DROP
EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
@@ -2691,6 +2691,21 @@ alter_table_cmd:
n->subtype = AT_NoForceRowSecurity;
$$ = (Node *)n;
}
+ /* ALTER TABLE <name> PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
+ | PARALLEL DML ColId
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString($3);
+ $$ = (Node *)n;
+ }
+ | PARALLEL DML DEFAULT
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString("default");
+ $$ = (Node *)n;
+ }
| alter_generic_options
{
AlterTableCmd *n = makeNode(AlterTableCmd);
@@ -3276,7 +3291,7 @@ copy_generic_opt_arg_list_item:
CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
OptInherit OptPartitionSpec table_access_method_clause OptWith
- OnCommitOption OptTableSpace
+ OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3290,12 +3305,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $11;
n->oncommit = $12;
n->tablespacename = $13;
+ n->paralleldmlsafety = $14;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name '('
OptTableElementList ')' OptInherit OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3309,12 +3325,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $14;
n->oncommit = $15;
n->tablespacename = $16;
+ n->paralleldmlsafety = $17;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3329,12 +3346,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $10;
n->oncommit = $11;
n->tablespacename = $12;
+ n->paralleldmlsafety = $13;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3349,12 +3367,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $13;
n->oncommit = $14;
n->tablespacename = $15;
+ n->paralleldmlsafety = $16;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name
OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3369,12 +3389,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $12;
n->oncommit = $13;
n->tablespacename = $14;
+ n->paralleldmlsafety = $15;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF
qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3389,6 +3411,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $15;
n->oncommit = $16;
n->tablespacename = $17;
+ n->paralleldmlsafety = $18;
n->if_not_exists = true;
$$ = (Node *)n;
}
@@ -4089,6 +4112,11 @@ OptTableSpace: TABLESPACE name { $$ = $2; }
| /*EMPTY*/ { $$ = NULL; }
;
+ParallelDMLSafety: PARALLEL DML name { $$ = $3; }
+ | PARALLEL DML DEFAULT { $$ = pstrdup("default"); }
+ | /*EMPTY*/ { $$ = NULL; }
+ ;
+
OptConsTableSpace: USING INDEX TABLESPACE name { $$ = $4; }
| /*EMPTY*/ { $$ = NULL; }
;
@@ -4236,7 +4264,7 @@ CreateAsStmt:
create_as_target:
qualified_name opt_column_list table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
$$ = makeNode(IntoClause);
$$->rel = $1;
@@ -4245,6 +4273,7 @@ create_as_target:
$$->options = $4;
$$->onCommit = $5;
$$->tableSpaceName = $6;
+ $$->paralleldmlsafety = $7;
$$->viewQuery = NULL;
$$->skipData = false; /* might get changed later */
}
@@ -5024,7 +5053,7 @@ AlterForeignServerStmt: ALTER SERVER name foreign_server_version alter_generic_o
CreateForeignTableStmt:
CREATE FOREIGN TABLE qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5036,15 +5065,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $9;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $10;
- n->options = $11;
+ n->servername = $11;
+ n->options = $12;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5056,15 +5086,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $12;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $13;
- n->options = $14;
+ n->servername = $14;
+ n->options = $15;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5077,15 +5108,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $10;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $11;
- n->options = $12;
+ n->servername = $12;
+ n->options = $13;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5098,10 +5130,11 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $13;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $14;
- n->options = $15;
+ n->servername = $15;
+ n->options = $16;
$$ = (Node *) n;
}
;
@@ -15547,6 +15580,7 @@ unreserved_keyword:
| DICTIONARY
| DISABLE_P
| DISCARD
+ | DML
| DOCUMENT_P
| DOMAIN_P
| DOUBLE_P
@@ -16087,6 +16121,7 @@ bare_label_keyword:
| DISABLE_P
| DISCARD
| DISTINCT
+ | DML
| DO
| DOCUMENT_P
| DOMAIN_P
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..70d8ecb1dd 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -1873,6 +1873,7 @@ formrdesc(const char *relationName, Oid relationReltype,
relation->rd_rel->relkind = RELKIND_RELATION;
relation->rd_rel->relnatts = (int16) natts;
relation->rd_rel->relam = HEAP_TABLE_AM_OID;
+ relation->rd_rel->relparalleldml = PROPARALLEL_DEFAULT;
/*
* initialize attribute tuple form
@@ -3359,7 +3360,8 @@ RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind)
+ char relkind,
+ char relparalleldml)
{
Relation rel;
MemoryContext oldcxt;
@@ -3509,6 +3511,8 @@ RelationBuildLocalRelation(const char *relname,
else
rel->rd_rel->relreplident = REPLICA_IDENTITY_NOTHING;
+ rel->rd_rel->relparalleldml = relparalleldml;
+
/*
* Insert relation physical and logical identifiers (OIDs) into the right
* places. For a mapped relation, we set relfilenode to zero and rely on
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 90ac445bcd..5165202e84 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6253,6 +6253,7 @@ getTables(Archive *fout, int *numTables)
int i_relpersistence;
int i_relispopulated;
int i_relreplident;
+ int i_relparalleldml;
int i_owning_tab;
int i_owning_col;
int i_reltablespace;
@@ -6358,7 +6359,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, am.amname, "
+ "c.relreplident, c.relparalleldml, c.relpages, am.amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
"ELSE 0 END AS foreignserver, "
@@ -6450,7 +6451,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6503,7 +6504,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6556,7 +6557,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6609,7 +6610,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"c.relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6660,7 +6661,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
@@ -6708,7 +6709,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6756,7 +6757,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6803,7 +6804,7 @@ getTables(Archive *fout, int *numTables)
"0 AS toid, "
"0 AS tfrozenxid, 0 AS tminmxid,"
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6872,6 +6873,7 @@ getTables(Archive *fout, int *numTables)
i_relpersistence = PQfnumber(res, "relpersistence");
i_relispopulated = PQfnumber(res, "relispopulated");
i_relreplident = PQfnumber(res, "relreplident");
+ i_relparalleldml = PQfnumber(res, "relparalleldml");
i_relpages = PQfnumber(res, "relpages");
i_foreignserver = PQfnumber(res, "foreignserver");
i_owning_tab = PQfnumber(res, "owning_tab");
@@ -6927,6 +6929,7 @@ getTables(Archive *fout, int *numTables)
tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
tblinfo[i].relispopulated = (strcmp(PQgetvalue(res, i, i_relispopulated), "t") == 0);
tblinfo[i].relreplident = *(PQgetvalue(res, i, i_relreplident));
+ tblinfo[i].relparalleldml = *(PQgetvalue(res, i, i_relparalleldml));
tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
tblinfo[i].minmxid = atooid(PQgetvalue(res, i, i_relminmxid));
@@ -16555,6 +16558,35 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
}
}
+ if (tbinfo->relkind == RELKIND_RELATION ||
+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE ||
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+ {
+ appendPQExpBuffer(q, "\nALTER %sTABLE %s PARALLEL DML ",
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE ? "FOREIGN " : "",
+ qualrelname);
+
+ switch (tbinfo->relparalleldml)
+ {
+ case 's':
+ appendPQExpBuffer(q, "SAFE;\n");
+ break;
+ case 'r':
+ appendPQExpBuffer(q, "RESTRICTED;\n");
+ break;
+ case 'u':
+ appendPQExpBuffer(q, "UNSAFE;\n");
+ break;
+ case 'd':
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ default:
+ /* should not reach here */
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ }
+ }
+
if (tbinfo->forcerowsec)
appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n",
qualrelname);
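The reverse mapping that pg_dump emits in the hunk above can be sketched the same way (a simplification of the switch in dumpTableSchema; 'd' and unrecognized values both fall back to DEFAULT, matching the "should not reach here" branch):

```c
/*
 * Map a relparalleldml character to the keyword pg_dump writes in
 * ALTER TABLE ... PARALLEL DML <keyword>.  Unknown values fall back
 * to DEFAULT, as in the patch's default: case.
 */
const char *
parallel_dml_keyword(char relparalleldml)
{
    switch (relparalleldml)
    {
        case 's':
            return "SAFE";
        case 'r':
            return "RESTRICTED";
        case 'u':
            return "UNSAFE";
        case 'd':
        default:
            return "DEFAULT";
    }
}
```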
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..8175a0bc82 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -270,6 +270,7 @@ typedef struct _tableInfo
char relpersistence; /* relation persistence */
bool relispopulated; /* relation is populated */
char relreplident; /* replica identifier */
+ char relparalleldml; /* parallel safety of dml on the relation */
char *reltablespace; /* relation tablespace */
char *reloptions; /* options specified by WITH (...) */
char *checkoption; /* WITH CHECK OPTION, if any */
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 8333558bda..f896fe1793 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1656,6 +1656,7 @@ describeOneTableDetails(const char *schemaname,
char *reloftype;
char relpersistence;
char relreplident;
+ char relparalleldml;
char *relam;
} tableinfo;
bool show_column_details = false;
@@ -1669,7 +1670,25 @@ describeOneTableDetails(const char *schemaname,
initPQExpBuffer(&tmpbuf);
/* Get general table info */
- if (pset.sversion >= 120000)
+ if (pset.sversion >= 150000)
+ {
+ printfPQExpBuffer(&buf,
+ "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
+ "c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, "
+ "false AS relhasoids, c.relispartition, %s, c.reltablespace, "
+ "CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, "
+ "c.relpersistence, c.relreplident, am.amname, c.relparalleldml\n"
+ "FROM pg_catalog.pg_class c\n "
+ "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)\n"
+ "LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)\n"
+ "WHERE c.oid = '%s';",
+ (verbose ?
+ "pg_catalog.array_to_string(c.reloptions || "
+ "array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')\n"
+ : "''"),
+ oid);
+ }
+ else if (pset.sversion >= 120000)
{
printfPQExpBuffer(&buf,
"SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
@@ -1853,6 +1872,8 @@ describeOneTableDetails(const char *schemaname,
(char *) NULL : pg_strdup(PQgetvalue(res, 0, 14));
else
tableinfo.relam = NULL;
+ tableinfo.relparalleldml = (pset.sversion >= 150000) ?
+ *(PQgetvalue(res, 0, 15)) : 0;
PQclear(res);
res = NULL;
@@ -3630,6 +3651,21 @@ describeOneTableDetails(const char *schemaname,
printfPQExpBuffer(&buf, _("Access method: %s"), tableinfo.relam);
printTableAddFooter(&cont, buf.data);
}
+
+ if (verbose &&
+ (tableinfo.relkind == RELKIND_RELATION ||
+ tableinfo.relkind == RELKIND_PARTITIONED_TABLE ||
+ tableinfo.relkind == RELKIND_FOREIGN_TABLE) &&
+ tableinfo.relparalleldml != 0)
+ {
+ printfPQExpBuffer(&buf, _("Parallel DML: %s"),
+ tableinfo.relparalleldml == 'd' ? "default" :
+ tableinfo.relparalleldml == 'u' ? "unsafe" :
+ tableinfo.relparalleldml == 'r' ? "restricted" :
+ tableinfo.relparalleldml == 's' ? "safe" :
+ "???");
+ printTableAddFooter(&cont, buf.data);
+ }
}
/* reloptions, if verbose */
@@ -4005,7 +4041,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
PGresult *res;
printQueryOpt myopt = pset.popt;
int cols_so_far;
- bool translate_columns[] = {false, false, true, false, false, false, false, false, false};
+ bool translate_columns[] = {false, false, true, false, false, false, false, false, false, false};
/* If tabtypes is empty, we default to \dtvmsE (but see also command.c) */
if (!(showTables || showIndexes || showViews || showMatViews || showSeq || showForeign))
@@ -4073,22 +4109,43 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
gettext_noop("unlogged"),
gettext_noop("Persistence"));
translate_columns[cols_so_far] = true;
+ cols_so_far++;
}
- /*
- * We don't bother to count cols_so_far below here, as there's no need
- * to; this might change with future additions to the output columns.
- */
-
/*
* Access methods exist for tables, materialized views and indexes.
* This has been introduced in PostgreSQL 12 for tables.
*/
if (pset.sversion >= 120000 && !pset.hide_tableam &&
(showTables || showMatViews || showIndexes))
+ {
appendPQExpBuffer(&buf,
",\n am.amname as \"%s\"",
gettext_noop("Access method"));
+ cols_so_far++;
+ }
+
+ /*
+ * Show whether DML on the relation is marked parallel default ('d'),
+ * unsafe ('u'), restricted ('r'), or safe ('s').
+ * This has been introduced in PostgreSQL 15 for tables.
+ */
+ if (pset.sversion >= 150000)
+ {
+ appendPQExpBuffer(&buf,
+ ",\n CASE c.relparalleldml WHEN 'd' THEN '%s' WHEN 'u' THEN '%s' WHEN 'r' THEN '%s' WHEN 's' THEN '%s' END as \"%s\"",
+ gettext_noop("default"),
+ gettext_noop("unsafe"),
+ gettext_noop("restricted"),
+ gettext_noop("safe"),
+ gettext_noop("Parallel DML"));
+ translate_columns[cols_so_far] = true;
+ }
+
+ /*
+ * We don't bother to count cols_so_far below here, as there's no need
+ * to; this might change with future additions to the output columns.
+ */
/*
* As of PostgreSQL 9.0, use pg_table_size() to show a more accurate
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index 6ce480b49c..b59975919b 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -55,6 +55,7 @@ extern Relation heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -73,6 +74,7 @@ extern Oid heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h
index fef9945ed8..244eac6bd8 100644
--- a/src/include/catalog/pg_class.h
+++ b/src/include/catalog/pg_class.h
@@ -116,6 +116,9 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat
/* see REPLICA_IDENTITY_xxx constants */
char relreplident BKI_DEFAULT(n);
+ /* parallel safety of the dml on the relation */
+ char relparalleldml BKI_DEFAULT(d);
+
/* is relation a partition? */
bool relispartition BKI_DEFAULT(f);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index b33b8b0134..cd52c0e254 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -171,6 +171,8 @@ DECLARE_UNIQUE_INDEX(pg_proc_proname_args_nsp_index, 2691, ProcedureNameArgsNspI
#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
+#define PROPARALLEL_DEFAULT 'd' /* only used for parallel dml safety */
+
/*
* Symbolic values for proargmodes column. Note that these must agree with
* the FunctionParameterMode enum in parsenodes.h; we declare them here to
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..0352e41c6e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1934,7 +1934,8 @@ typedef enum AlterTableType
AT_AddIdentity, /* ADD IDENTITY */
AT_SetIdentity, /* SET identity column options */
AT_DropIdentity, /* DROP IDENTITY */
- AT_ReAddStatistics /* internal to commands/tablecmds.c */
+ AT_ReAddStatistics, /* internal to commands/tablecmds.c */
+ AT_ParallelDMLSafety /* PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
} AlterTableType;
typedef struct ReplicaIdentityStmt
@@ -2180,6 +2181,7 @@ typedef struct CreateStmt
OnCommitAction oncommit; /* what do we do at COMMIT? */
char *tablespacename; /* table space to use, or NULL */
char *accessMethod; /* table access method */
+ char *paralleldmlsafety; /* parallel dml safety */
bool if_not_exists; /* just do nothing if it already exists? */
} CreateStmt;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index c04282f91f..6e679d9f97 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -115,6 +115,7 @@ typedef struct IntoClause
List *options; /* options from WITH clause */
OnCommitAction onCommit; /* what do we do at COMMIT? */
char *tableSpaceName; /* table space to use, or NULL */
+ char *paralleldmlsafety; /* parallel dml safety */
Node *viewQuery; /* materialized view's SELECT query */
bool skipData; /* true for WITH NO DATA */
} IntoClause;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f836acf876..05222faccd 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -139,6 +139,7 @@ PG_KEYWORD("dictionary", DICTIONARY, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("disable", DISABLE_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("discard", DISCARD, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("distinct", DISTINCT, RESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("dml", DML, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("do", DO, RESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("document", DOCUMENT_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("domain", DOMAIN_P, UNRESERVED_KEYWORD, BARE_LABEL)
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index f772855ac6..5ea225ac2d 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -108,7 +108,8 @@ extern Relation RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind);
+ char relkind,
+ char relparalleldml);
/*
* Routines to manage assignment of new relfilenode to a relation
diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
index 1bae1e5438..e1f5678eef 100644
--- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
+++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
@@ -276,6 +276,9 @@ get_altertable_subcmdtypes(PG_FUNCTION_ARGS)
case AT_NoForceRowSecurity:
strtype = "NO FORCE ROW SECURITY";
break;
+ case AT_ParallelDMLSafety:
+ strtype = "PARALLEL DML SAFETY";
+ break;
case AT_GenericOptions:
strtype = "SET OPTIONS";
break;
--
2.27.0
On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:
Update the commit message in patches to make it easier for others to review.
CFbot reported a compile error due to recent commit 3aafc03.
Attach rebased patches which fix the error.
Best regards,
Hou zj
Attachments:
v17-0002-Parallel-SELECT-for-INSERT.patch
From 7cad3cf052856ec9f5e087f1edec1c24b920dc74 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@fujitsu.com>
Date: Mon, 31 May 2021 09:32:54 +0800
Subject: [PATCH v14 2/4] parallel-SELECT-for-INSERT
Enable parallel select for insert.
Prepare for entering parallel mode by assigning a TransactionId.
---
src/backend/access/transam/xact.c | 26 +++++++++
src/backend/executor/execMain.c | 3 +
src/backend/optimizer/plan/planner.c | 21 +++----
src/backend/optimizer/util/clauses.c | 87 +++++++++++++++++++++++++++-
src/include/access/xact.h | 15 +++++
src/include/optimizer/clauses.h | 2 +
6 files changed, 143 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 441445927e..2d68e4633a 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -1014,6 +1014,32 @@ IsInParallelMode(void)
return CurrentTransactionState->parallelModeLevel != 0;
}
+/*
+ * PrepareParallelModePlanExec
+ *
+ * Prepare for entering parallel mode plan execution, based on command-type.
+ */
+void
+PrepareParallelModePlanExec(CmdType commandType)
+{
+ if (IsModifySupportedInParallelMode(commandType))
+ {
+ Assert(!IsInParallelMode());
+
+ /*
+ * Prepare for entering parallel mode by assigning a TransactionId.
+ * Failure to do this now would result in heap_insert() subsequently
+ * attempting to assign a TransactionId whilst in parallel-mode, which
+ * is not allowed.
+ *
+ * This approach has a disadvantage in that if the underlying SELECT
+ * does not return any rows, then the TransactionId is not used;
+ * however, that is not expected to be a common case in practice.
+ */
+ (void) GetCurrentTransactionId();
+ }
+}
+
/*
* CommandCounterIncrement
*/
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index b3ce4bae53..ea685f0846 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1535,7 +1535,10 @@ ExecutePlan(EState *estate,
estate->es_use_parallel_mode = use_parallel_mode;
if (use_parallel_mode)
+ {
+ PrepareParallelModePlanExec(estate->es_plannedstmt->commandType);
EnterParallelMode();
+ }
/*
* Loop until we've processed the proper number of tuples from the plan.
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1868c4eff4..7736813230 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -314,16 +314,16 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
/*
* Assess whether it's feasible to use parallel mode for this query. We
* can't do this in a standalone backend, or if the command will try to
- * modify any data, or if this is a cursor operation, or if GUCs are set
- * to values that don't permit parallelism, or if parallel-unsafe
- * functions are present in the query tree.
+ * modify any data (except for INSERT), or if this is a cursor operation,
+ * or if GUCs are set to values that don't permit parallelism, or if
+ * parallel-unsafe functions are present in the query tree.
*
- * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
- * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
- * backend writes into a completely new table. In the future, we can
- * extend it to allow workers to write into the table. However, to allow
- * parallel updates and deletes, we have to solve other problems,
- * especially around combo CIDs.)
+ * (Note that we do allow CREATE TABLE AS, INSERT INTO...SELECT, SELECT
+ * INTO, and CREATE MATERIALIZED VIEW to use parallel plans. However, as
+ * of now, only the leader backend writes into a completely new table. In
+ * the future, we can extend it to allow workers to write into the table.
+ * However, to allow parallel updates and deletes, we have to solve other
+ * problems, especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
@@ -332,7 +332,8 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
*/
if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
IsUnderPostmaster &&
- parse->commandType == CMD_SELECT &&
+ (parse->commandType == CMD_SELECT ||
+ is_parallel_allowed_for_modify(parse)) &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker())
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..ac0f243bf1 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -20,6 +20,8 @@
#include "postgres.h"
#include "access/htup_details.h"
+#include "access/table.h"
+#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
#include "catalog/pg_language.h"
@@ -43,6 +45,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
+#include "parser/parsetree.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -51,6 +54,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -151,6 +155,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
int nargs, List *args);
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
+static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
/*****************************************************************************
@@ -618,12 +623,34 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
+ bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
- (void) max_parallel_hazard_walker((Node *) parse, &context);
+
+ max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+
+ if (!max_hazard_found &&
+ IsModifySupportedInParallelMode(parse->commandType))
+ {
+ RangeTblEntry *rte;
+ Relation target_rel;
+
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ target_rel = table_open(rte->relid, NoLock);
+
+ (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
+ &context);
+ table_close(target_rel, NoLock);
+ }
+
return context.max_hazard;
}
@@ -857,6 +884,64 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
context);
}
+/*
+ * is_parallel_allowed_for_modify
+ *
+ * Check at a high-level if parallel mode is able to be used for the specified
+ * table-modification statement. Currently, we support only Inserts.
+ *
+ * It's not possible in the following cases:
+ *
+ * 1) INSERT...ON CONFLICT...DO UPDATE
+ * 2) INSERT without SELECT
+ *
+ * (Note: we don't do in-depth parallel-safety checks here; we do only the
+ * cheaper tests that can quickly exclude obvious cases for which
+ * parallelism isn't supported, to avoid having to do further parallel-safety
+ * checks for these)
+ */
+bool
+is_parallel_allowed_for_modify(Query *parse)
+{
+ bool hasSubQuery;
+ RangeTblEntry *rte;
+ ListCell *lc;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return false;
+
+ /*
+ * UPDATE is not currently supported in parallel-mode, so prohibit
+ * INSERT...ON CONFLICT...DO UPDATE...
+ *
+ * In order to support update, even if only in the leader, some further
+ * work would need to be done. A mechanism would be needed for sharing
+ * combo-cids between leader and workers during parallel-mode, since for
+ * example, the leader might generate a combo-cid and it needs to be
+ * propagated to the workers.
+ */
+ if (parse->commandType == CMD_INSERT &&
+ parse->onConflict != NULL &&
+ parse->onConflict->action == ONCONFLICT_UPDATE)
+ return false;
+
+ /*
+ * If there is no underlying SELECT, a parallel insert operation is not
+ * desirable.
+ */
+ hasSubQuery = false;
+ foreach(lc, parse->rtable)
+ {
+ rte = lfirst_node(RangeTblEntry, lc);
+ if (rte->rtekind == RTE_SUBQUERY)
+ {
+ hasSubQuery = true;
+ break;
+ }
+ }
+
+ return hasSubQuery;
+}
/*****************************************************************************
* Check clauses for nonstrict functions
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 134f6862da..fd3f86bf7c 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -466,5 +466,20 @@ extern void ParsePrepareRecord(uint8 info, xl_xact_prepare *xlrec, xl_xact_parse
extern void EnterParallelMode(void);
extern void ExitParallelMode(void);
extern bool IsInParallelMode(void);
+extern void PrepareParallelModePlanExec(CmdType commandType);
+
+/*
+ * IsModifySupportedInParallelMode
+ *
+ * Indicates whether execution of the specified table-modification command
+ * (INSERT/UPDATE/DELETE) in parallel-mode is supported, subject to certain
+ * parallel-safety conditions.
+ */
+static inline bool
+IsModifySupportedInParallelMode(CmdType commandType)
+{
+ /* Currently only INSERT is supported */
+ return (commandType == CMD_INSERT);
+}
#endif /* XACT_H */
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 0673887a85..32b56565e5 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -53,4 +53,6 @@ extern void CommuteOpExpr(OpExpr *clause);
extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
+extern bool is_parallel_allowed_for_modify(Query *parse);
+
#endif /* CLAUSES_H */
--
2.27.0
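To make the gating in is_parallel_allowed_for_modify concrete, here is a hedged SQL sketch of which statements the patch would and would not consider for a parallel plan. The table name t and its columns are illustrative only, and the examples assume a server built with these patches applied:

```sql
-- Eligible: INSERT with an underlying SELECT (an RTE_SUBQUERY is present)
-- and no ON CONFLICT ... DO UPDATE clause.
INSERT INTO t SELECT i FROM generate_series(1, 1000000) AS s(i);

-- Not eligible: INSERT without an underlying SELECT.
INSERT INTO t VALUES (1);

-- Not eligible: INSERT ... ON CONFLICT ... DO UPDATE, because UPDATE is
-- not supported in parallel mode (combo-cid sharing between leader and
-- workers is an unsolved problem).
INSERT INTO t SELECT i FROM generate_series(1, 100) AS s(i)
    ON CONFLICT (i) DO UPDATE SET i = EXCLUDED.i;
```

Whether an eligible statement actually gets a parallel plan still depends on the usual conditions (max_parallel_workers_per_gather, costs, and the target table's parallel safety as checked by max_parallel_hazard).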
v17-0003-Get-parallel-safety-functions.patch (application/octet-stream)
From d93281fdbeef47af1b16bf6803d80c18e592fc13 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Fri, 30 Jul 2021 11:50:55 +0800
Subject: [PATCH] get-parallel-safety-functions
Parallel SELECT can't be utilized for INSERT when the target table has a
parallel-unsafe trigger, index expression or predicate, column default
expression, partition key expression, or check constraint.
Provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that
returns records of (objid, classid, parallel_safety) for all
parallel unsafe/restricted table-related objects from which the
table's parallel DML safety is determined. The user can use this
information during development to accurately declare a table's
parallel DML safety, or to identify any problematic objects if a
parallel DML operation fails or behaves unexpectedly.
When the use of an index-related parallel unsafe/restricted function
is detected, both the function oid and the index oid are returned.
Provide a utility function "pg_get_table_max_parallel_dml_hazard(regclass)" that
returns the worst parallel DML safety hazard that can be found in the
given relation. Users can use this function for a quick check without
needing to examine specific parallel-related objects.
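As a usage illustration for reviewers, calls to the two proposed functions would presumably look something like the following. The table and column names are hypothetical, and this assumes a server built with this patch; the result shape follows the (objid, classid, parallel_safety) record description above:

```sql
-- Illustrative only. random() is PARALLEL RESTRICTED, so this column
-- default should make the table's parallel DML safety restricted.
CREATE TABLE dml_target (a int, b float8 DEFAULT random());

-- Worst hazard found for the relation ('u' unsafe, 'r' restricted, 's' safe).
SELECT pg_get_table_max_parallel_dml_hazard('dml_target'::regclass);

-- All PARALLEL UNSAFE/RESTRICTED objects that determine the table's
-- parallel DML safety.
SELECT * FROM pg_get_table_parallel_dml_safety('dml_target'::regclass);
```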
---
src/backend/optimizer/util/clauses.c | 658 ++++++++++++++++++++++++++++++++++-
src/backend/utils/adt/misc.c | 94 +++++
src/backend/utils/cache/typcache.c | 17 +
src/include/catalog/pg_proc.dat | 22 +-
src/include/optimizer/clauses.h | 14 +
src/include/utils/typcache.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
7 files changed, 803 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index ac0f243..749cb0d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -19,15 +19,20 @@
#include "postgres.h"
+#include "access/amapi.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_language.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_proc.h"
+#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
+#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
@@ -46,6 +51,8 @@
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
#include "parser/parsetree.h"
+#include "partitioning/partdesc.h"
+#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
#include "utils/acl.h"
@@ -54,6 +61,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -92,6 +100,9 @@ typedef struct
char max_hazard; /* worst proparallel hazard found so far */
char max_interesting; /* worst proparallel hazard of interest */
List *safe_param_ids; /* PARAM_EXEC Param IDs to treat as safe */
+ bool check_all; /* whether to collect all the unsafe/restricted objects */
+ List *objects; /* parallel unsafe/restricted objects */
+ PartitionDirectory partition_directory; /* partition descriptors */
} max_parallel_hazard_context;
static bool contain_agg_clause_walker(Node *node, void *context);
@@ -102,6 +113,25 @@ static bool contain_volatile_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
static bool max_parallel_hazard_walker(Node *node,
max_parallel_hazard_context *context);
+static bool target_rel_parallel_hazard_recurse(Relation relation,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default);
+static bool target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context);
+static bool target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context);
+static bool target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition);
+static bool target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
static bool contain_nonstrict_functions_walker(Node *node, void *context);
static bool contain_exec_param_walker(Node *node, List *param_ids);
static bool contain_context_dependent_node(Node *clause);
@@ -156,6 +186,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
+static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
/*****************************************************************************
@@ -629,6 +660,9 @@ max_parallel_hazard(Query *parse)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
@@ -681,6 +715,9 @@ is_parallel_safe(PlannerInfo *root, Node *node)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_RESTRICTED;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
/*
* The params that refer to the same or parent query level are considered
@@ -712,7 +749,7 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
break;
case PROPARALLEL_RESTRICTED:
/* increase max_hazard to RESTRICTED */
- Assert(context->max_hazard != PROPARALLEL_UNSAFE);
+ Assert(context->check_all || context->max_hazard != PROPARALLEL_UNSAFE);
context->max_hazard = proparallel;
/* done if we are not expecting any unsafe functions */
if (context->max_interesting == proparallel)
@@ -729,6 +766,82 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
return false;
}
+/*
+ * make_safety_object
+ *
+ * Creates a safety_object, given object id, class id and parallel safety.
+ */
+static safety_object *
+make_safety_object(Oid objid, Oid classid, char proparallel)
+{
+ safety_object *object = (safety_object *) palloc(sizeof(safety_object));
+
+ object->objid = objid;
+ object->classid = classid;
+ object->proparallel = proparallel;
+
+ return object;
+}
+
+/* check_functions_in_node callback */
+static bool
+parallel_hazard_checker(Oid func_id, void *context)
+{
+ char proparallel;
+ max_parallel_hazard_context *cont = (max_parallel_hazard_context *) context;
+
+ proparallel = func_parallel(func_id);
+
+ if (max_parallel_hazard_test(proparallel, cont) && !cont->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object = make_safety_object(func_id,
+ ProcedureRelationId,
+ proparallel);
+ cont->objects = lappend(cont->objects, object);
+ }
+
+ return false;
+}
+
+/*
+ * parallel_hazard_walker
+ *
+ * Recursively search an expression tree which is defined as partition key or
+ * index or constraint or column default expression for PARALLEL
+ * UNSAFE/RESTRICTED table-related objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
+{
+ if (node == NULL)
+ return false;
+
+ /* Check for hazardous functions in node itself */
+ if (check_functions_in_node(node, parallel_hazard_checker,
+ context))
+ return true;
+
+ if (IsA(node, CoerceToDomain))
+ {
+ CoerceToDomain *domain = (CoerceToDomain *) node;
+
+ if (target_rel_domain_parallel_hazard(domain->resulttype, context))
+ return true;
+ }
+
+ /* Recurse to check arguments */
+ return expression_tree_walker(node,
+ parallel_hazard_walker,
+ context);
+}
+
/* check_functions_in_node callback */
static bool
max_parallel_hazard_checker(Oid func_id, void *context)
@@ -885,6 +998,549 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * target_rel_parallel_hazard
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+List*
+target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting, char *max_hazard)
+{
+ max_parallel_hazard_context context;
+ Relation targetRel;
+
+ context.check_all = findall;
+ context.objects = NIL;
+ context.max_hazard = PROPARALLEL_SAFE;
+ context.max_interesting = max_interesting;
+ context.safe_param_ids = NIL;
+ context.partition_directory = NULL;
+
+ targetRel = table_open(relOid, AccessShareLock);
+
+ (void) target_rel_parallel_hazard_recurse(targetRel, &context, false, true);
+ if (context.partition_directory)
+ DestroyPartitionDirectory(context.partition_directory);
+
+ table_close(targetRel, AccessShareLock);
+
+ *max_hazard = context.max_hazard;
+
+ return context.objects;
+}
+
+/*
+ * target_rel_parallel_hazard_recurse
+ *
+ * Recursively search all table-related objects for PARALLEL UNSAFE/RESTRICTED
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_parallel_hazard_recurse(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default)
+{
+ TupleDesc tupdesc;
+ int attnum;
+
+ /*
+ * We can't support table modification in a parallel worker if it's a
+ * foreign table/partition (no FDW API for supporting parallel access) or
+ * a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ {
+ if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context) &&
+ !context->check_all)
+ return true;
+ else
+ {
+ safety_object *object = make_safety_object(rel->rd_rel->oid,
+ RelationRelationId,
+ PROPARALLEL_RESTRICTED);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /*
+ * If a partitioned table, check that each partition is safe for
+ * modification in parallel-mode.
+ */
+ if (target_rel_partitions_parallel_hazard(rel, context, is_partition))
+ return true;
+
+ /*
+ * If there are any index expressions or index predicate, check that they
+ * are parallel-mode safe.
+ */
+ if (target_rel_index_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * If any triggers exist, check that they are parallel-safe.
+ */
+ if (target_rel_trigger_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * Column default expressions are only applicable to INSERT and UPDATE.
+ * Note that even though column defaults may be specified separately for
+ * each partition in a partitioned table, a partition's default value is
+ * not applied when inserting a tuple through a partitioned table.
+ */
+
+ tupdesc = RelationGetDescr(rel);
+ for (attnum = 0; attnum < tupdesc->natts; attnum++)
+ {
+ Form_pg_attribute att = TupleDescAttr(tupdesc, attnum);
+
+ /* We don't need info for dropped or generated attributes */
+ if (att->attisdropped || att->attgenerated)
+ continue;
+
+ if (att->atthasdef && check_column_default)
+ {
+ Node *defaultexpr;
+
+ defaultexpr = build_column_default(rel, attnum + 1);
+ if (parallel_hazard_walker((Node *) defaultexpr, context))
+ return true;
+ }
+
+ /*
+ * If the column is of a DOMAIN type, determine whether that
+ * domain has any CHECK expressions that are not parallel-mode
+ * safe.
+ */
+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)
+ {
+ if (target_rel_domain_parallel_hazard(att->atttypid, context))
+ return true;
+ }
+ }
+
+ /*
+ * CHECK constraints are only applicable to INSERT and UPDATE. If any
+ * CHECK constraints exist, determine if they are parallel-safe.
+ */
+ if (target_rel_chk_constr_parallel_hazard(rel, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_trigger_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified relation's trigger data.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ char proparallel;
+
+ if (rel->trigdesc == NULL)
+ return false;
+
+ /*
+ * Care is needed here to avoid using the same relcache TriggerDesc field
+ * across other cache accesses, because relcache doesn't guarantee that it
+ * won't move.
+ */
+ for (i = 0; i < rel->trigdesc->numtriggers; i++)
+ {
+ Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;
+ Oid tgoid = rel->trigdesc->triggers[i].tgoid;
+
+ proparallel = func_parallel(tgfoid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object,
+ *parent_object;
+
+ object = make_safety_object(tgfoid, ProcedureRelationId,
+ proparallel);
+ parent_object = make_safety_object(tgoid, TriggerRelationId,
+ proparallel);
+
+ context->objects = lappend(context->objects, object);
+ context->objects = lappend(context->objects, parent_object);
+ }
+ }
+
+ return false;
+}
+
+/*
+ * index_expr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the input index expression and index predicate.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ Form_pg_index indexStruct;
+ ListCell *index_expr_item;
+
+ indexStruct = index_rel->rd_index;
+ index_expr_item = list_head(ii_Expressions);
+
+ /* Check parallel-safety of index expression */
+ for (i = 0; i < indexStruct->indnatts; i++)
+ {
+ int keycol = indexStruct->indkey.values[i];
+
+ if (keycol == 0)
+ {
+ /* Found an index expression */
+ Node *index_expr;
+
+ Assert(index_expr_item != NULL);
+ if (index_expr_item == NULL) /* shouldn't happen */
+ elog(ERROR, "too few entries in indexprs list");
+
+ index_expr = (Node *) lfirst(index_expr_item);
+
+ if (parallel_hazard_walker(index_expr, context))
+ return true;
+
+ index_expr_item = lnext(ii_Expressions, index_expr_item);
+ }
+ }
+
+ /* Check parallel-safety of index predicate */
+ if (parallel_hazard_walker((Node *) ii_Predicate, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_index_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any existing index expressions or index predicate of a specified
+ * relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ List *index_oid_list;
+ ListCell *lc;
+ LOCKMODE lockmode = AccessShareLock;
+ bool max_hazard_found;
+
+ index_oid_list = RelationGetIndexList(rel);
+ foreach(lc, index_oid_list)
+ {
+ Relation index_rel;
+ List *ii_Expressions;
+ List *ii_Predicate;
+ List *temp_objects;
+ char temp_hazard;
+ Oid index_oid = lfirst_oid(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ index_rel = index_open(index_oid, lockmode);
+
+ /* Check index expression */
+ ii_Expressions = RelationGetIndexExpressions(index_rel);
+ ii_Predicate = RelationGetIndexPredicate(index_rel);
+
+ max_hazard_found = index_expr_parallel_hazard(index_rel,
+ ii_Expressions,
+ ii_Predicate,
+ context);
+
+ index_close(index_rel, lockmode);
+
+ if (max_hazard_found)
+ return true;
+
+ /* Add the index itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+
+ object = make_safety_object(index_oid, IndexRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ list_free(index_oid_list);
+
+ return false;
+}
+
+/*
+ * target_rel_domain_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified DOMAIN type. Only CHECK expressions are
+ * examined for parallel-safety.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context)
+{
+ ListCell *lc;
+ List *domain_list;
+ List *temp_objects;
+ char temp_hazard;
+
+ domain_list = GetDomainConstraints(typid);
+
+ foreach(lc, domain_list)
+ {
+ DomainConstraintState *r = (DomainConstraintState *) lfirst(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) r->check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+ Oid constr_oid = get_domain_constraint_oid(typid,
+ r->name,
+ false);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+
+}
+
+/*
+ * target_rel_partitions_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any partitions of a specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition)
+{
+ int i;
+ PartitionDesc pdesc;
+ PartitionKey pkey;
+ ListCell *partexprs_item;
+ int partnatts;
+ List *partexprs,
+ *qual;
+
+ /*
+ * The partition check expression is derived from the parent table's
+ * partition key expression, so we do not need to check it again for a
+ * partition: the parallel safety of the parent table's partition key
+ * expression has already been checked.
+ */
+ if (!is_partition)
+ {
+ qual = RelationGetPartitionQual(rel);
+ if (parallel_hazard_walker((Node *) qual, context))
+ return true;
+ }
+
+ if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ return false;
+
+ pkey = RelationGetPartitionKey(rel);
+
+ partnatts = get_partition_natts(pkey);
+ partexprs = get_partition_exprs(pkey);
+
+ partexprs_item = list_head(partexprs);
+ for (i = 0; i < partnatts; i++)
+ {
+ Oid funcOid = pkey->partsupfunc[i].fn_oid;
+
+ if (OidIsValid(funcOid))
+ {
+ char proparallel = func_parallel(funcOid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object;
+
+ object = make_safety_object(funcOid, ProcedureRelationId,
+ proparallel);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /* Check parallel-safety of any expressions in the partition key */
+ if (get_partition_col_attnum(pkey, i) == 0)
+ {
+ Node *check_expr = (Node *) lfirst(partexprs_item);
+
+ if (parallel_hazard_walker(check_expr, context))
+ return true;
+
+ partexprs_item = lnext(partexprs, partexprs_item);
+ }
+ }
+
+ /* Recursively check each partition ... */
+
+ /* Create the PartitionDirectory infrastructure if we didn't already */
+ if (context->partition_directory == NULL)
+ context->partition_directory =
+ CreatePartitionDirectory(CurrentMemoryContext, false);
+
+ pdesc = PartitionDirectoryLookup(context->partition_directory, rel);
+
+ for (i = 0; i < pdesc->nparts; i++)
+ {
+ Relation part_rel;
+ bool max_hazard_found;
+
+ part_rel = table_open(pdesc->oids[i], AccessShareLock);
+ max_hazard_found = target_rel_parallel_hazard_recurse(part_rel,
+ context,
+ true,
+ false);
+ table_close(part_rel, AccessShareLock);
+
+ if (max_hazard_found)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_chk_constr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any CHECK expressions or CHECK constraints related to the
+ * specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ char temp_hazard;
+ int i;
+ TupleDesc tupdesc;
+ List *temp_objects;
+ ConstrCheck *check;
+
+ tupdesc = RelationGetDescr(rel);
+
+ if (tupdesc->constr == NULL)
+ return false;
+
+ check = tupdesc->constr->check;
+
+ /*
+ * Determine if there are any CHECK constraints which are not
+ * parallel-safe.
+ */
+ for (i = 0; i < tupdesc->constr->num_check; i++)
+ {
+ Expr *check_expr = stringToNode(check[i].ccbin);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ if (context->objects != NIL)
+ {
+ Oid constr_oid;
+ safety_object *object;
+
+ constr_oid = get_relation_constraint_oid(rel->rd_rel->oid,
+ check[i].ccname,
+ true);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
* is_parallel_allowed_for_modify
*
* Check at a high-level if parallel mode is able to be used for the specified
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 88faf4d..06d859c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -23,6 +23,8 @@
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_type.h"
#include "catalog/system_fk_info.h"
@@ -31,6 +33,7 @@
#include "common/keywords.h"
#include "funcapi.h"
#include "miscadmin.h"
+#include "optimizer/clauses.h"
#include "parser/scansup.h"
#include "pgstat.h"
#include "postmaster/syslogger.h"
@@ -43,6 +46,7 @@
#include "utils/lsyscache.h"
#include "utils/ruleutils.h"
#include "utils/timestamp.h"
+#include "utils/varlena.h"
/*
* Common subroutine for num_nulls() and num_nonnulls().
@@ -605,6 +609,96 @@ pg_collation_for(PG_FUNCTION_ARGS)
PG_RETURN_TEXT_P(cstring_to_text(generate_collation_name(collid)));
}
+/*
+ * Find the worst parallel-hazard level in the given relation
+ *
+ * Returns the worst parallel hazard level (the earliest in this list:
+ * PROPARALLEL_UNSAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_SAFE) that can
+ * be found in the given relation.
+ */
+Datum
+pg_get_table_max_parallel_dml_hazard(PG_FUNCTION_ARGS)
+{
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ (void) target_rel_parallel_hazard(relOid, false,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+
+ PG_RETURN_CHAR(max_parallel_hazard);
+}
+
+/*
+ * Determine whether parallel modification of the target relation is safe.
+ *
+ * Returns all of the PARALLEL RESTRICTED/UNSAFE objects in the relation.
+ */
+Datum
+pg_get_table_parallel_dml_safety(PG_FUNCTION_ARGS)
+{
+#define PG_GET_PARALLEL_SAFETY_COLS 3
+ List *objects;
+ ListCell *object;
+ TupleDesc tupdesc;
+ Tuplestorestate *tupstore;
+ MemoryContext per_query_ctx;
+ MemoryContext oldcontext;
+ ReturnSetInfo *rsinfo;
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+
+ /* check to see if caller supports us returning a tuplestore */
+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ if (!(rsinfo->allowedModes & SFRM_Materialize))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("materialize mode required, but it is not allowed in this context")));
+
+ /* Build a tuple descriptor for our result type */
+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ elog(ERROR, "return type must be a row type");
+
+ per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+ tupstore = tuplestore_begin_heap(true, false, work_mem);
+ rsinfo->returnMode = SFRM_Materialize;
+ rsinfo->setResult = tupstore;
+ rsinfo->setDesc = tupdesc;
+
+ MemoryContextSwitchTo(oldcontext);
+
+ objects = target_rel_parallel_hazard(relOid, true,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+ foreach(object, objects)
+ {
+ Datum values[PG_GET_PARALLEL_SAFETY_COLS];
+ bool nulls[PG_GET_PARALLEL_SAFETY_COLS];
+ safety_object *sobject = (safety_object *) lfirst(object);
+
+ memset(nulls, 0, sizeof(nulls));
+
+ values[0] = ObjectIdGetDatum(sobject->objid);
+ values[1] = ObjectIdGetDatum(sobject->classid);
+ values[2] = CharGetDatum(sobject->proparallel);
+
+ tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ }
+
+ /* clean up and return the tuplestore */
+ tuplestore_donestoring(tupstore);
+
+ return (Datum) 0;
+}
+
/*
* pg_relation_is_updatable - determine which update events the specified
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 326fae6..02a8f70 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -2535,6 +2535,23 @@ compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2)
}
/*
+ * GetDomainConstraints --- get DomainConstraintState list of specified domain type
+ */
+List *
+GetDomainConstraints(Oid type_id)
+{
+ TypeCacheEntry *typentry;
+ List *constraints = NIL;
+
+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
+
+ if (typentry->domainData != NULL)
+ constraints = typentry->domainData->constraints;
+
+ return constraints;
+}
+
+/*
* Load (or re-load) the enumData member of the typcache entry.
*/
static void
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252..4483cd1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3770,6 +3770,20 @@
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
+{ oid => '6122',
+ descr => 'parallel unsafe/restricted objects in the target relation',
+ proname => 'pg_get_table_parallel_dml_safety', prorows => '100',
+ proretset => 't', provolatile => 'v', proparallel => 'u',
+ prorettype => 'record', proargtypes => 'regclass',
+ proallargtypes => '{regclass,oid,oid,char}',
+ proargmodes => '{i,o,o,o}',
+ proargnames => '{table_name,objid,classid,proparallel}',
+ prosrc => 'pg_get_table_parallel_dml_safety' },
+
+{ oid => '6123', descr => 'worst parallel-hazard level in the given relation for DML',
+ proname => 'pg_get_table_max_parallel_dml_hazard', prorettype => 'char', proargtypes => 'regclass',
+ prosrc => 'pg_get_table_max_parallel_dml_hazard', provolatile => 'v', proparallel => 'u' },
+
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
@@ -3777,11 +3791,11 @@
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_ins' },
+ proname => 'RI_FKey_check_ins', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_upd' },
+ proname => 'RI_FKey_check_upd', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 32b5656..f8b2a72 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -23,6 +23,17 @@ typedef struct
List **windowFuncs; /* lists of WindowFuncs for each winref */
} WindowFuncLists;
+/*
+ * Information about a table-related object which could affect the safety of
+ * parallel data modification on table.
+ */
+typedef struct safety_object
+{
+ Oid objid; /* OID of object itself */
+ Oid classid; /* OID of its catalog */
+ char proparallel; /* parallel safety of the object */
+} safety_object;
+
extern bool contain_agg_clause(Node *clause);
extern bool contain_window_function(Node *clause);
@@ -54,5 +65,8 @@ extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
extern bool is_parallel_allowed_for_modify(Query *parse);
+extern List *target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting,
+ char *max_hazard);
#endif /* CLAUSES_H */
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index 1d68a9a..28ca7d8 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -199,6 +199,8 @@ extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod);
extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2);
+extern List *GetDomainConstraints(Oid type_id);
+
extern size_t SharedRecordTypmodRegistryEstimate(void);
extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37cf4b2..307bb97 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3491,6 +3491,7 @@ rm_detail_t
role_auth_extra
row_security_policy_hook_type
rsv_callback
+safety_object
saophash_hash
save_buffer
scram_state
--
2.7.2.windows.1
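
For reference, here is a sketch of how the two functions added above might be exercised once the patch is applied. The trigger and table names are illustrative only, and the output shown in the documentation hunks below ('s', 'r', 'u') is what the hazard function would be expected to report:

```sql
-- Sketch only; assumes the preceding patch is applied. All object names
-- here are hypothetical.
CREATE TABLE t (a int);
CREATE FUNCTION t_trig_fn() RETURNS trigger LANGUAGE plpgsql
    PARALLEL UNSAFE AS $$ BEGIN RETURN NEW; END; $$;
CREATE TRIGGER t_trig BEFORE INSERT ON t
    FOR EACH ROW EXECUTE FUNCTION t_trig_fn();

-- Worst parallel DML hazard found in the relation ('s', 'r', or 'u'):
SELECT pg_get_table_max_parallel_dml_hazard('t'::regclass);

-- One row per parallel unsafe/restricted object affecting DML on t:
SELECT * FROM pg_get_table_parallel_dml_safety('t'::regclass);
```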
Attachment: v17-0005-Regression-test-and-doc-updates.patch (application/octet-stream)
From 86c0b68d9d6c2c4ec4d42b97d1f8fa4677adb475 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Fri, 30 Jul 2021 10:06:04 +0800
Subject: [PATCH] regression-test-and-doc-updates
---
contrib/test_decoding/expected/ddl.out | 4 +
doc/src/sgml/func.sgml | 61 ++
doc/src/sgml/ref/alter_foreign_table.sgml | 13 +
doc/src/sgml/ref/alter_function.sgml | 2 +-
doc/src/sgml/ref/alter_table.sgml | 12 +
doc/src/sgml/ref/create_foreign_table.sgml | 39 +
doc/src/sgml/ref/create_table.sgml | 44 ++
doc/src/sgml/ref/create_table_as.sgml | 38 +
src/test/regress/expected/alter_table.out | 2 +
src/test/regress/expected/compression_1.out | 9 +
src/test/regress/expected/copy2.out | 1 +
src/test/regress/expected/create_table.out | 14 +
.../regress/expected/create_table_like.out | 8 +
src/test/regress/expected/domain.out | 2 +
src/test/regress/expected/foreign_data.out | 42 ++
src/test/regress/expected/identity.out | 1 +
src/test/regress/expected/inherit.out | 13 +
src/test/regress/expected/insert.out | 12 +
src/test/regress/expected/insert_parallel.out | 713 ++++++++++++++++++
src/test/regress/expected/psql.out | 58 +-
src/test/regress/expected/publication.out | 4 +
.../regress/expected/replica_identity.out | 1 +
src/test/regress/expected/rowsecurity.out | 1 +
src/test/regress/expected/rules.out | 3 +
src/test/regress/expected/stats_ext.out | 1 +
src/test/regress/expected/triggers.out | 1 +
src/test/regress/expected/update.out | 1 +
src/test/regress/output/tablespace.source | 2 +
src/test/regress/parallel_schedule | 1 +
src/test/regress/sql/insert_parallel.sql | 381 ++++++++++
30 files changed, 1456 insertions(+), 28 deletions(-)
create mode 100644 src/test/regress/expected/insert_parallel.out
create mode 100644 src/test/regress/sql/insert_parallel.sql
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c78..5c9b5ea3b9 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -446,6 +446,7 @@ WITH (user_catalog_table = true)
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -460,6 +461,7 @@ ALTER TABLE replication_metadata RESET (user_catalog_table);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
INSERT INTO replication_metadata(relation, options)
VALUES ('bar', ARRAY['a', 'b']);
@@ -473,6 +475,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = true);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -492,6 +495,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
rewritemeornot | integer | | | | plain | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d83f39f7cd..6679ad9974 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -23940,6 +23940,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
Undefined objects are identified with <literal>NULL</literal> values.
</para></entry>
</row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_parallel_dml_safety</primary>
+ </indexterm>
+ <function>pg_get_table_parallel_dml_safety</function> ( <parameter>table_name</parameter> <type>regclass</type> )
+ <returnvalue>record</returnvalue>
+ ( <parameter>objid</parameter> <type>oid</type>,
+ <parameter>classid</parameter> <type>oid</type>,
+ <parameter>proparallel</parameter> <type>char</type> )
+ </para>
+ <para>
+ Returns one row for each parallel unsafe/restricted table-related
+ object from which the table's parallel DML safety is determined.
+ This information can be used during development to accurately
+ declare a table's parallel DML safety, or to identify any
+ problematic objects if parallel DML fails or behaves unexpectedly.
+ Note that when a parallel unsafe/restricted function used by an
+ object is detected, rows are returned for both the function and
+ the object that uses it.
+ <parameter>classid</parameter> is the OID of the system catalog
+ containing the object;
+ <parameter>objid</parameter> is the OID of the object itself.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_max_parallel_dml_hazard</primary>
+ </indexterm>
+ <function>pg_get_table_max_parallel_dml_hazard</function> ( <type>regclass</type> )
+ <returnvalue>char</returnvalue>
+ </para>
+ <para>
+ Returns the worst parallel DML safety hazard that can be found in the
+ given relation:
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>s</literal> safe
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>r</literal> restricted
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>u</literal> unsafe
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ This function provides a quick check of a table's overall parallel
+ DML hazard, without listing the specific objects involved.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml
index 7ca03f3ac9..58f1c0d567 100644
--- a/doc/src/sgml/ref/alter_foreign_table.sgml
+++ b/doc/src/sgml/ref/alter_foreign_table.sgml
@@ -29,6 +29,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
RENAME TO <replaceable class="parameter">new_name</replaceable>
ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
SET SCHEMA <replaceable class="parameter">new_schema</replaceable>
+ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -299,6 +301,17 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See the similar form of <link linkend="sql-altertable"><command>ALTER TABLE</command></link>
+ for more details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml
index 0ee756a94d..1a0fd3cd88 100644
--- a/doc/src/sgml/ref/alter_function.sgml
+++ b/doc/src/sgml/ref/alter_function.sgml
@@ -38,7 +38,7 @@ ALTER FUNCTION <replaceable>name</replaceable> [ ( [ [ <replaceable class="param
IMMUTABLE | STABLE | VOLATILE
[ NOT ] LEAKPROOF
[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
- PARALLEL { UNSAFE | RESTRICTED | SAFE }
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
COST <replaceable class="parameter">execution_cost</replaceable>
ROWS <replaceable class="parameter">result_rows</replaceable>
SUPPORT <replaceable class="parameter">support_function</replaceable>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 81291577f8..99bd75648f 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -37,6 +37,8 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
ATTACH PARTITION <replaceable class="parameter">partition_name</replaceable> { FOR VALUES <replaceable class="parameter">partition_bound_spec</replaceable> | DEFAULT }
ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
DETACH PARTITION <replaceable class="parameter">partition_name</replaceable> [ CONCURRENTLY | FINALIZE ]
+ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -1030,6 +1032,16 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See <link linkend="sql-createtable"><command>CREATE TABLE</command></link> for details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index f9477efe58..7a8a7ddbec 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -27,6 +27,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
[, ... ]
] )
[ INHERITS ( <replaceable>parent_table</replaceable> [, ... ] ) ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -36,6 +37,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
| <replaceable>table_constraint</replaceable> }
[, ... ]
) ] <replaceable class="parameter">partition_bound_spec</replaceable>
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -290,6 +292,43 @@ CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ]
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">server_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 15aed2f251..7abc527bf9 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -33,6 +33,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
OF <replaceable class="parameter">type_name</replaceable> [ (
@@ -45,6 +46,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
PARTITION OF <replaceable class="parameter">parent_table</replaceable> [ (
@@ -57,6 +59,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
<phrase>where <replaceable class="parameter">column_constraint</replaceable> is:</phrase>
@@ -1336,6 +1339,47 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="sql-createtable-paralleldmlsafety">
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the table
+ can't be modified in parallel mode, and this forces a serial execution plan
+ for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader.
+ <literal>PARALLEL DML SAFE</literal> indicates that the data in the table
+ can be modified in parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Note that for a partitioned table, <literal>PARALLEL DML DEFAULT</literal>
+ is the same as <literal>PARALLEL DML UNSAFE</literal>, which means that
+ the data in the table can't be modified in parallel mode.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>USING INDEX TABLESPACE <replaceable class="parameter">tablespace_name</replaceable></literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index 07558ab56c..2e7851db44 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -27,6 +27,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+ [ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
AS <replaceable>query</replaceable>
[ WITH [ NO ] DATA ]
</synopsis>
@@ -223,6 +224,43 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable>query</replaceable></term>
<listitem>
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 8dcb00ac67..1c360e04bf 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2206,6 +2206,7 @@ alter table test_storage alter column a set storage external;
b | integer | | | 0 | plain | |
Indexes:
"test_storage_idx" btree (b, a)
+Parallel DML: default
\d+ test_storage_idx
Index "public.test_storage_idx"
@@ -4193,6 +4194,7 @@ ALTER TABLE range_parted2 DETACH PARTITION part_rp CONCURRENTLY;
a | integer | | | | plain | |
Partition key: RANGE (a)
Number of partitions: 0
+Parallel DML: default
-- constraint should be created
\d part_rp
diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out
index 1ce2962d55..8559e94226 100644
--- a/src/test/regress/expected/compression_1.out
+++ b/src/test/regress/expected/compression_1.out
@@ -12,6 +12,7 @@ INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
ERROR: compression method lz4 not supported
@@ -51,6 +52,7 @@ SELECT * INTO cmmove1 FROM cmdata;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | | |
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmmove1;
pg_column_compression
@@ -138,6 +140,7 @@ CREATE TABLE cmdata2 (f1 int);
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
\d+ cmdata2
@@ -145,6 +148,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
\d+ cmdata2
@@ -152,6 +156,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
--changing column storage should not impact the compression method
--but the data should not be compressed
@@ -162,6 +167,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | pglz | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
\d+ cmdata2
@@ -169,6 +175,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | pglz | |
+Parallel DML: default
INSERT INTO cmdata2 VALUES (repeat('123456789', 800));
SELECT pg_column_compression(f1) FROM cmdata2;
@@ -249,6 +256,7 @@ INSERT INTO cmdata VALUES (repeat('123456789', 4004));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmdata;
pg_column_compression
@@ -263,6 +271,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION default;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | | |
+Parallel DML: default
-- test alter compression method for materialized views
ALTER MATERIALIZED VIEW compressmv ALTER COLUMN x SET COMPRESSION lz4;
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 5f3685e9ef..46f817417a 100644
--- a/src/test/regress/expected/copy2.out
+++ b/src/test/regress/expected/copy2.out
@@ -519,6 +519,7 @@ alter table check_con_tbl add check (check_con_function(check_con_tbl.*));
f1 | integer | | | | plain | |
Check constraints:
"check_con_tbl_check" CHECK (check_con_function(check_con_tbl.*))
+Parallel DML: default
copy check_con_tbl from stdin;
NOTICE: input = {"f1":1}
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index 96bf426d98..b7e2a535cd 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -505,6 +505,7 @@ Number of partitions: 0
b | text | | | | extended | |
Partition key: RANGE (((a + 1)), substr(b, 1, 5))
Number of partitions: 0
+Parallel DML: default
INSERT INTO partitioned2 VALUES (1, 'hello');
ERROR: no partition of relation "partitioned2" found for row
@@ -518,6 +519,7 @@ CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO
b | text | | | | extended | |
Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc')
Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text))))
+Parallel DML: default
DROP TABLE partitioned, partitioned2;
-- check reference to partitioned table's rowtype in partition descriptor
@@ -559,6 +561,7 @@ select * from partitioned where partitioned = '(1,2)'::partitioned;
b | integer | | | | plain | |
Partition of: partitioned FOR VALUES IN ('(1,2)')
Partition constraint: (((partitioned1.*)::partitioned IS DISTINCT FROM NULL) AND ((partitioned1.*)::partitioned = '(1,2)'::partitioned))
+Parallel DML: default
drop table partitioned;
-- check that dependencies of partition columns are handled correctly
@@ -618,6 +621,7 @@ Partitions: part_null FOR VALUES IN (NULL),
part_p1 FOR VALUES IN (1),
part_p2 FOR VALUES IN (2),
part_p3 FOR VALUES IN (3)
+Parallel DML: default
-- forbidden expressions for partition bound with list partitioned table
CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES IN (somename);
@@ -1064,6 +1068,7 @@ drop table test_part_coll_posix;
b | integer | | not null | 1 | plain | |
Partition of: parted FOR VALUES IN ('b')
Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))
+Parallel DML: default
-- Both partition bound and partition key in describe output
\d+ part_c
@@ -1076,6 +1081,7 @@ Partition of: parted FOR VALUES IN ('c')
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text))
Partition key: RANGE (b)
Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
+Parallel DML: default
-- a level-2 partition's constraint will include the parent's expressions
\d+ part_c_1_10
@@ -1086,6 +1092,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
b | integer | | not null | 0 | plain | |
Partition of: part_c FOR VALUES FROM (1) TO (10)
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10))
+Parallel DML: default
-- Show partition count in the parent's describe output
-- Tempted to include \d+ output listing partitions with bound info but
@@ -1120,6 +1127,7 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL))
+Parallel DML: default
DROP TABLE unbounded_range_part;
CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE);
@@ -1132,6 +1140,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1))
+Parallel DML: default
CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE);
\d+ range_parted4_2
@@ -1143,6 +1152,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7))))
+Parallel DML: default
CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE);
\d+ range_parted4_3
@@ -1154,6 +1164,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9))
+Parallel DML: default
DROP TABLE range_parted4;
-- user-defined operator class in partition key
@@ -1190,6 +1201,7 @@ SELECT obj_description('parted_col_comment'::regclass);
b | text | | | | extended | |
Partition key: LIST (a)
Number of partitions: 0
+Parallel DML: default
DROP TABLE parted_col_comment;
-- list partitioning on array type column
@@ -1202,6 +1214,7 @@ CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}');
a | integer[] | | | | extended | |
Partition of: arrlp FOR VALUES IN ('{1}', '{2}')
Partition constraint: ((a IS NOT NULL) AND ((a = '{1}'::integer[]) OR (a = '{2}'::integer[])))
+Parallel DML: default
DROP TABLE arrlp;
-- partition on boolean column
@@ -1216,6 +1229,7 @@ create table boolspart_f partition of boolspart for values in (false);
Partition key: LIST (a)
Partitions: boolspart_f FOR VALUES IN (false),
boolspart_t FOR VALUES IN (true)
+Parallel DML: default
drop table boolspart;
-- partitions mixing temporary and permanent relations
diff --git a/src/test/regress/expected/create_table_like.out b/src/test/regress/expected/create_table_like.out
index 7ad5fafe93..da59d8b3c2 100644
--- a/src/test/regress/expected/create_table_like.out
+++ b/src/test/regress/expected/create_table_like.out
@@ -333,6 +333,7 @@ CREATE TABLE ctlt12_storage (LIKE ctlt1 INCLUDING STORAGE, LIKE ctlt2 INCLUDING
a | text | | not null | | main | |
b | text | | | | extended | |
c | text | | | | external | |
+Parallel DML: default
CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDING COMMENTS);
\d+ ctlt12_comments
@@ -342,6 +343,7 @@ CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDIN
a | text | | not null | | extended | | A
b | text | | | | extended | | B
c | text | | | | extended | | C
+Parallel DML: default
CREATE TABLE ctlt1_inh (LIKE ctlt1 INCLUDING CONSTRAINTS INCLUDING COMMENTS) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -356,6 +358,7 @@ NOTICE: merging constraint "ctlt1_a_check" with inherited definition
Check constraints:
"ctlt1_a_check" CHECK (length(a) > 2)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt1_inh'::regclass;
description
@@ -378,6 +381,7 @@ Check constraints:
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1,
ctlt3
+Parallel DML: default
CREATE TABLE ctlt13_like (LIKE ctlt3 INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS INCLUDING STORAGE) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -395,6 +399,7 @@ Check constraints:
"ctlt3_a_check" CHECK (length(a) < 5)
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt13_like'::regclass;
description
@@ -418,6 +423,7 @@ Check constraints:
Statistics objects:
"public"."ctlt_all_a_b_stat" ON a, b FROM ctlt_all
"public"."ctlt_all_expr_stat" ON ((a || b)) FROM ctlt_all
+Parallel DML: default
SELECT c.relname, objsubid, description FROM pg_description, pg_index i, pg_class c WHERE classoid = 'pg_class'::regclass AND objoid = i.indexrelid AND c.oid = i.indexrelid AND i.indrelid = 'ctlt_all'::regclass ORDER BY c.relname, objsubid;
relname | objsubid | description
@@ -458,6 +464,7 @@ Check constraints:
Statistics objects:
"public"."pg_attrdef_a_b_stat" ON a, b FROM public.pg_attrdef
"public"."pg_attrdef_expr_stat" ON ((a || b)) FROM public.pg_attrdef
+Parallel DML: default
DROP TABLE public.pg_attrdef;
-- Check that LIKE isn't confused when new table masks the old, either
@@ -480,6 +487,7 @@ Check constraints:
Statistics objects:
"ctl_schema"."ctlt1_a_b_stat" ON a, b FROM ctlt1
"ctl_schema"."ctlt1_expr_stat" ON ((a || b)) FROM ctlt1
+Parallel DML: default
ROLLBACK;
DROP TABLE ctlt1, ctlt2, ctlt3, ctlt4, ctlt12_storage, ctlt12_comments, ctlt1_inh, ctlt13_inh, ctlt13_like, ctlt_all, ctla, ctlb CASCADE;
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..342e9d234d 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -276,6 +276,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision
WHERE (dcomptable.d1).i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
@@ -413,6 +414,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1[1].r = dcomptable.d1[1].r - 1::double precision, d1[1].i = dcomptable.d1[1].i + 1::double precision
WHERE dcomptable.d1[1].i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index 426080ae39..330f25ea9e 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -735,6 +735,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
\det+
List of foreign tables
@@ -857,6 +858,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- can't change the column type if it's used elsewhere
CREATE TABLE use_ft1_column_type (x ft1);
@@ -1396,6 +1398,7 @@ CREATE FOREIGN TABLE ft2 () INHERITS (fd_pt1)
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1407,6 +1410,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
DROP FOREIGN TABLE ft2;
\d+ fd_pt1
@@ -1416,6 +1420,7 @@ DROP FOREIGN TABLE ft2;
c1 | integer | | not null | | plain | |
c2 | text | | | | extended | |
c3 | date | | | | plain | |
+Parallel DML: default
CREATE FOREIGN TABLE ft2 (
c1 integer NOT NULL,
@@ -1431,6 +1436,7 @@ CREATE FOREIGN TABLE ft2 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
\d+ fd_pt1
@@ -1441,6 +1447,7 @@ ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1452,6 +1459,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
CREATE TABLE ct3() INHERITS(ft2);
CREATE FOREIGN TABLE ft3 (
@@ -1475,6 +1483,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1484,6 +1493,7 @@ Child tables: ct3,
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1494,6 +1504,7 @@ Inherits: ft2
c3 | date | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- add attributes recursively
ALTER TABLE fd_pt1 ADD COLUMN c4 integer;
@@ -1514,6 +1525,7 @@ ALTER TABLE fd_pt1 ADD COLUMN c8 integer;
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1532,6 +1544,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1546,6 +1559,7 @@ Child tables: ct3,
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1561,6 +1575,7 @@ Inherits: ft2
c8 | integer | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- alter attributes recursively
ALTER TABLE fd_pt1 ALTER COLUMN c4 SET DEFAULT 0;
@@ -1588,6 +1603,7 @@ ALTER TABLE fd_pt1 ALTER COLUMN c8 SET STORAGE EXTERNAL;
c7 | integer | | | | plain | |
c8 | text | | | | external | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1606,6 +1622,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- drop attributes recursively
ALTER TABLE fd_pt1 DROP COLUMN c4;
@@ -1621,6 +1638,7 @@ ALTER TABLE fd_pt1 DROP COLUMN c8;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1634,6 +1652,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- add constraints recursively
ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk1 CHECK (c1 > 0) NO INHERIT;
@@ -1661,6 +1680,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1676,6 +1696,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
DROP FOREIGN TABLE ft2; -- ERROR
ERROR: cannot drop foreign table ft2 because other objects depend on it
@@ -1708,6 +1729,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1721,6 +1743,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- drop constraints recursively
ALTER TABLE fd_pt1 DROP CONSTRAINT fd_pt1chk1 CASCADE;
@@ -1738,6 +1761,7 @@ ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk3 CHECK (c2 <> '') NOT VALID;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text) NOT VALID
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1752,6 +1776,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- VALIDATE CONSTRAINT need do nothing on foreign tables
ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
@@ -1765,6 +1790,7 @@ ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1779,6 +1805,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- changes name of an attribute recursively
ALTER TABLE fd_pt1 RENAME COLUMN c1 TO f1;
@@ -1796,6 +1823,7 @@ ALTER TABLE fd_pt1 RENAME CONSTRAINT fd_pt1chk3 TO f2_check;
Check constraints:
"f2_check" CHECK (f2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1810,6 +1838,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- TRUNCATE doesn't work on foreign tables, either directly or recursively
TRUNCATE ft2; -- ERROR
@@ -1859,6 +1888,7 @@ CREATE FOREIGN TABLE fd_pt2_1 PARTITION OF fd_pt2 FOR VALUES IN (1)
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1871,6 +1901,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- partition cannot have additional columns
DROP FOREIGN TABLE fd_pt2_1;
@@ -1890,6 +1921,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c4 | character(1) | | | | | extended | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: table "fd_pt2_1" contains column "c4" not found in parent "fd_pt2"
@@ -1904,6 +1936,7 @@ DROP FOREIGN TABLE fd_pt2_1;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
CREATE FOREIGN TABLE fd_pt2_1 (
c1 integer NOT NULL,
@@ -1919,6 +1952,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- no attach partition validation occurs for foreign tables
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
@@ -1931,6 +1965,7 @@ ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1943,6 +1978,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot add column to a partition
ALTER TABLE fd_pt2_1 ADD c4 char;
@@ -1959,6 +1995,7 @@ ALTER TABLE fd_pt2_1 ADD CONSTRAINT p21chk CHECK (c2 <> '');
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1973,6 +2010,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot drop inherited NOT NULL constraint from a partition
ALTER TABLE fd_pt2_1 ALTER c1 DROP NOT NULL;
@@ -1989,6 +2027,7 @@ ALTER TABLE fd_pt2 ALTER c2 SET NOT NULL;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2001,6 +2040,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: column "c2" in child table must be marked NOT NULL
@@ -2019,6 +2059,7 @@ Partition key: LIST (c1)
Check constraints:
"fd_pt2chk1" CHECK (c1 > 0)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2031,6 +2072,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: child table is missing constraint "fd_pt2chk1"
diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out
index 99811570b7..6908fd141b 100644
--- a/src/test/regress/expected/identity.out
+++ b/src/test/regress/expected/identity.out
@@ -506,6 +506,7 @@ TABLE itest8;
f3 | integer | | not null | generated by default as identity | plain | |
f4 | bigint | | not null | generated always as identity | plain | |
f5 | bigint | | | | plain | |
+Parallel DML: default
\d itest8_f2_seq
Sequence "public.itest8_f2_seq"
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 06f44287bc..1c0da28d78 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1059,6 +1059,7 @@ ALTER TABLE inhts RENAME d TO dd;
dd | integer | | | | plain | |
Inherits: inht1,
inhs1
+Parallel DML: default
DROP TABLE inhts;
-- Test for renaming in diamond inheritance
@@ -1079,6 +1080,7 @@ ALTER TABLE inht1 RENAME aa TO aaa;
z | integer | | | | plain | |
Inherits: inht2,
inht3
+Parallel DML: default
CREATE TABLE inhts (d int) INHERITS (inht2, inhs1);
NOTICE: merging multiple inherited definitions of column "b"
@@ -1096,6 +1098,7 @@ ERROR: cannot rename inherited column "b"
d | integer | | | | plain | |
Inherits: inht2,
inhs1
+Parallel DML: default
WITH RECURSIVE r AS (
SELECT 'inht1'::regclass AS inhrelid
@@ -1142,6 +1145,7 @@ CREATE TABLE test_constraints_inh () INHERITS (test_constraints);
Indexes:
"test_constraints_val1_val2_key" UNIQUE CONSTRAINT, btree (val1, val2)
Child tables: test_constraints_inh
+Parallel DML: default
ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key;
\d+ test_constraints
@@ -1152,6 +1156,7 @@ ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Child tables: test_constraints_inh
+Parallel DML: default
\d+ test_constraints_inh
Table "public.test_constraints_inh"
@@ -1161,6 +1166,7 @@ Child tables: test_constraints_inh
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Inherits: test_constraints
+Parallel DML: default
DROP TABLE test_constraints_inh;
DROP TABLE test_constraints;
@@ -1177,6 +1183,7 @@ CREATE TABLE test_ex_constraints_inh () INHERITS (test_ex_constraints);
Indexes:
"test_ex_constraints_c_excl" EXCLUDE USING gist (c WITH &&)
Child tables: test_ex_constraints_inh
+Parallel DML: default
ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
\d+ test_ex_constraints
@@ -1185,6 +1192,7 @@ ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Child tables: test_ex_constraints_inh
+Parallel DML: default
\d+ test_ex_constraints_inh
Table "public.test_ex_constraints_inh"
@@ -1192,6 +1200,7 @@ Child tables: test_ex_constraints_inh
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Inherits: test_ex_constraints
+Parallel DML: default
DROP TABLE test_ex_constraints_inh;
DROP TABLE test_ex_constraints;
@@ -1208,6 +1217,7 @@ Indexes:
"test_primary_constraints_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "test_foreign_constraints" CONSTRAINT "test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
+Parallel DML: default
\d+ test_foreign_constraints
Table "public.test_foreign_constraints"
@@ -1217,6 +1227,7 @@ Referenced by:
Foreign-key constraints:
"test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
Child tables: test_foreign_constraints_inh
+Parallel DML: default
ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id1_fkey;
\d+ test_foreign_constraints
@@ -1225,6 +1236,7 @@ ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Child tables: test_foreign_constraints_inh
+Parallel DML: default
\d+ test_foreign_constraints_inh
Table "public.test_foreign_constraints_inh"
@@ -1232,6 +1244,7 @@ Child tables: test_foreign_constraints_inh
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Inherits: test_foreign_constraints
+Parallel DML: default
DROP TABLE test_foreign_constraints_inh;
DROP TABLE test_foreign_constraints;
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..9e4a1bf886 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -177,6 +177,7 @@ Rules:
irule3 AS
ON INSERT TO inserttest2 DO INSERT INTO inserttest (f4[1].if1, f4[1].if2[2]) SELECT new.f1,
new.f2
+Parallel DML: default
drop table inserttest2;
drop table inserttest;
@@ -482,6 +483,7 @@ Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'),
part_null FOR VALUES IN (NULL),
part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED,
part_default DEFAULT, PARTITIONED
+Parallel DML: default
-- cleanup
drop table range_parted, list_parted;
@@ -497,6 +499,7 @@ create table part_default partition of list_parted default;
a | integer | | | | plain | |
Partition of: list_parted DEFAULT
No partition constraint
+Parallel DML: default
insert into part_default values (null);
insert into part_default values (1);
@@ -888,6 +891,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE),
mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE),
mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
+Parallel DML: default
\d+ mcrparted1_lt_b
Table "public.mcrparted1_lt_b"
@@ -897,6 +901,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
+Parallel DML: default
\d+ mcrparted2_b
Table "public.mcrparted2_b"
@@ -906,6 +911,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text) AND (a < 'c'::text))
+Parallel DML: default
\d+ mcrparted3_c_to_common
Table "public.mcrparted3_c_to_common"
@@ -915,6 +921,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text) AND (a < 'common'::text))
+Parallel DML: default
\d+ mcrparted4_common_lt_0
Table "public.mcrparted4_common_lt_0"
@@ -924,6 +931,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MINVALUE) TO ('common', 0)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b < 0))
+Parallel DML: default
\d+ mcrparted5_common_0_to_10
Table "public.mcrparted5_common_0_to_10"
@@ -933,6 +941,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 0) TO ('common', 10)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 0) AND (b < 10))
+Parallel DML: default
\d+ mcrparted6_common_ge_10
Table "public.mcrparted6_common_ge_10"
@@ -942,6 +951,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 10))
+Parallel DML: default
\d+ mcrparted7_gt_common_lt_d
Table "public.mcrparted7_gt_common_lt_d"
@@ -951,6 +961,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::text) AND (a < 'd'::text))
+Parallel DML: default
\d+ mcrparted8_ge_d
Table "public.mcrparted8_ge_d"
@@ -960,6 +971,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text))
+Parallel DML: default
insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10),
('comm', -10), ('common', -10), ('common', 0), ('common', 10),
diff --git a/src/test/regress/expected/insert_parallel.out b/src/test/regress/expected/insert_parallel.out
new file mode 100644
index 0000000000..28eb537687
--- /dev/null
+++ b/src/test/regress/expected/insert_parallel.out
@@ -0,0 +1,713 @@
+--
+-- PARALLEL
+--
+--
+-- START: setup some tables and data needed by the tests.
+--
+-- Setup - index expressions test
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+-- Setup - column default tests
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+--
+-- END: setup some tables and data needed by the tests.
+--
+begin;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_trigger | r
+ pg_proc | r
+ pg_trigger | r
+(4 rows)
+
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_safe
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------
+ Insert on para_insert_p1
+ -> Seq Scan on tenk1
+(2 rows)
+
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------------------------
+ Insert on para_insert_with_parallel_unsafe
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------------
+ Insert on para_insert_with_parallel_restricted
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_auto
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+NOTICE: truncate cascades to table "para_insert_f1"
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+ QUERY PLAN
+----------------------------------------------
+ Insert on para_insert_p1
+ -> Gather Merge
+ Workers Planned: 4
+ -> Sort
+ Sort Key: tenk1.unique1
+ -> Parallel Seq Scan on tenk1
+(6 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_data1
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+ Filter: (a = 10)
+(5 rows)
+
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+ data
+------
+ 10
+(1 row)
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- because the FK check trigger would need to assign a new commandId,
+-- and that is not currently supported within a parallel worker)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_f1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (the ON CONFLICT DO UPDATE case should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_conflict_table
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+ QUERY PLAN
+------------------------------------------------------
+ Insert on test_conflict_table
+ Conflict Resolution: UPDATE
+ Conflict Arbiter Indexes: test_conflict_table_pkey
+ -> Seq Scan on test_data
+(4 rows)
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_index | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names2');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names2 select * from names;
+ QUERY PLAN
+-------------------------
+ Insert on names2
+ -> Seq Scan on names
+(2 rows)
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_index | r
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names4');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into names4 select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names4
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+ QUERY PLAN
+----------------------------------------
+ Insert on names5
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names6
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names6 select * from names order by last_name returning *;
+ index | first_name | last_name
+-------+------------+-------------
+ 2 | niels | bohr
+ 1 | albert | einstein
+ 4 | leonhard | euler
+ 8 | richard | feynman
+ 5 | stephen | hawking
+ 6 | isaac | newton
+ 3 | erwin | schrodinger
+ 7 | alan | turing
+(8 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names7
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ last_name_then_first_name
+---------------------------
+ bohr, niels
+ einstein, albert
+ euler, leonhard
+ feynman, richard
+ hawking, stephen
+ newton, isaac
+ schrodinger, erwin
+ turing, alan
+(8 rows)
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_class | r
+(1 row)
+
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into temp_names select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on temp_names
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into temp_names select * from names;
+--
+-- Test INSERT with column defaults
+--
+--
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on testdef
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+ a | b | c | d
+----+----+----+----
+ 1 | 2 | 10 | 8
+ 2 | 4 | 10 | 16
+ 3 | 6 | 10 | 24
+ 4 | 8 | 10 | 32
+ 5 | 10 | 10 | 40
+ 6 | 12 | 10 | 48
+ 7 | 14 | 10 | 56
+ 8 | 16 | 10 | 64
+ 9 | 18 | 10 | 72
+ 10 | 20 | 10 | 80
+(10 rows)
+
+truncate testdef;
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+alter table parttable1 parallel dml safe;
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on parttable1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+ count
+-------
+ 5000
+(1 row)
+
+select count(*) from parttable1_2;
+ count
+-------
+ 5000
+(1 row)
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on table_check_b
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+(0 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ s
+(1 row)
+
+explain (costs off) insert into names_with_safe_trigger select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names_with_safe_trigger
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into names_with_safe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_safe
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+ QUERY PLAN
+-------------------------------------
+ Insert on names_with_unsafe_trigger
+ -> Seq Scan on names
+(2 rows)
+
+insert into names_with_unsafe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_unsafe
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------------
+ Insert on part_unsafe_trigger
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+create table dom_table_u (x inotnull_u, y int);
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on dom_table_u
+ -> Seq Scan on tenk1
+(2 rows)
+
+rollback;
+--
+-- Clean up anything not created in the transaction
+--
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 1b2f6bc418..1fedebcd9b 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2818,6 +2818,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2825,6 +2826,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\set HIDE_TABLEAM off
\d+ tbl_heap_psql
@@ -2834,6 +2836,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap_psql
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2842,50 +2845,51 @@ Access method: heap_psql
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap
+Parallel DML: default
-- AM is displayed for tables, indexes and materialized views.
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | | default | 0 bytes |
(4 rows)
\dt+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+---------------+-------+----------------------+-------------+---------------+---------+-------------
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+---------------+-------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
(2 rows)
\dm+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
(1 row)
-- But not for views and sequences.
\dv+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+----------------+------+----------------------+-------------+---------+-------------
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+----------------+------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(1 row)
\set HIDE_TABLEAM on
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(4 rows)
RESET ROLE;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4a5ef0bc24..f448b80856 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -85,6 +85,7 @@ Indexes:
"testpub_tbl2_pkey" PRIMARY KEY, btree (id)
Publications:
"testpub_foralltables"
+Parallel DML: default
\dRp+ testpub_foralltables
Publication testpub_foralltables
@@ -198,6 +199,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\d+ testpub_tbl1
Table "public.testpub_tbl1"
@@ -211,6 +213,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\dRp+ testpub_default
Publication testpub_default
@@ -236,6 +239,7 @@ Indexes:
Publications:
"testpib_ins_trunct"
"testpub_fortbl"
+Parallel DML: default
-- permissions
SET ROLE regress_publication_user2;
diff --git a/src/test/regress/expected/replica_identity.out b/src/test/regress/expected/replica_identity.out
index 79002197a7..8fce774332 100644
--- a/src/test/regress/expected/replica_identity.out
+++ b/src/test/regress/expected/replica_identity.out
@@ -171,6 +171,7 @@ Indexes:
"test_replica_identity_unique_defer" UNIQUE CONSTRAINT, btree (keya, keyb) DEFERRABLE
"test_replica_identity_unique_nondefer" UNIQUE CONSTRAINT, btree (keya, keyb)
Replica Identity: FULL
+Parallel DML: default
ALTER TABLE test_replica_identity REPLICA IDENTITY NOTHING;
SELECT relreplident FROM pg_class WHERE oid = 'test_replica_identity'::regclass;
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 89397e41f0..5e6807f90a 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -958,6 +958,7 @@ Policies:
Partitions: part_document_fiction FOR VALUES FROM (11) TO (12),
part_document_nonfiction FOR VALUES FROM (99) TO (100),
part_document_satire FOR VALUES FROM (55) TO (56)
+Parallel DML: default
SELECT * FROM pg_policies WHERE schemaname = 'regress_rls_schema' AND tablename like '%part_document%' ORDER BY policyname;
schemaname | tablename | policyname | permissive | roles | cmd | qual | with_check
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..0ae35e1662 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -3155,6 +3155,7 @@ Rules:
r3 AS
ON DELETE TO rules_src DO
NOTIFY rules_src_deletion
+Parallel DML: default
--
-- Ensure an aliased target relation for insert is correctly deparsed.
@@ -3183,6 +3184,7 @@ Rules:
r5 AS
ON UPDATE TO rules_src DO INSTEAD UPDATE rules_log trgt SET tag = 'updated'::text
WHERE trgt.f1 = new.f1
+Parallel DML: default
--
-- Also check multiassignment deparsing.
@@ -3206,6 +3208,7 @@ Rules:
WHERE trgt.f1 = new.f1
RETURNING new.f1,
new.f2
+Parallel DML: default
drop table rule_t1, rule_dest;
--
diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out
index 7fb54de53d..e4fa545c8c 100644
--- a/src/test/regress/expected/stats_ext.out
+++ b/src/test/regress/expected/stats_ext.out
@@ -145,6 +145,7 @@ ALTER STATISTICS ab1_a_b_stats SET STATISTICS -1;
b | integer | | | | plain | |
Statistics objects:
"public"."ab1_a_b_stats" ON a, b FROM ab1
+Parallel DML: default
-- partial analyze doesn't build stats either
ANALYZE ab1 (a);
diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out
index 5d124cf96f..9d39fad795 100644
--- a/src/test/regress/expected/triggers.out
+++ b/src/test/regress/expected/triggers.out
@@ -3483,6 +3483,7 @@ alter trigger parenttrig on parent rename to anothertrig;
Triggers:
parenttrig AFTER INSERT ON child FOR EACH ROW EXECUTE FUNCTION f()
Inherits: parent
+Parallel DML: default
drop table parent, child;
drop function f();
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index c809f88f54..d99b133644 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -753,6 +753,7 @@ create table part_def partition of range_parted default;
e | character varying | | | | extended | |
Partition of: range_parted DEFAULT
Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint)))))
+Parallel DML: default
insert into range_parted values ('c', 9);
-- ok
diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source
index 1bbe7e0323..19c65ce435 100644
--- a/src/test/regress/output/tablespace.source
+++ b/src/test/regress/output/tablespace.source
@@ -339,6 +339,7 @@ Indexes:
"part_a_idx" btree (a), tablespace "regress_tblspace"
Partitions: testschema.part1 FOR VALUES IN (1),
testschema.part2 FOR VALUES IN (2)
+Parallel DML: default
\d testschema.part1
Table "testschema.part1"
@@ -358,6 +359,7 @@ Partition of: testschema.part FOR VALUES IN (1)
Partition constraint: ((a IS NOT NULL) AND (a = 1))
Indexes:
"part1_a_idx" btree (a), tablespace "regress_tblspace"
+Parallel DML: default
\d testschema.part_a_idx
Partitioned index "testschema.part_a_idx"
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 7be89178f0..daf0bad4d5 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
+test: insert_parallel
# no relation related tests can be put in this group
test: publication subscription
diff --git a/src/test/regress/sql/insert_parallel.sql b/src/test/regress/sql/insert_parallel.sql
new file mode 100644
index 0000000000..9bf1809ccd
--- /dev/null
+++ b/src/test/regress/sql/insert_parallel.sql
@@ -0,0 +1,381 @@
+--
+-- PARALLEL
+--
+
+--
+-- START: set up some tables and data needed by the tests.
+--
+
+-- Setup - index expressions test
+
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+
+
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+
+-- Setup - column default tests
+
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+
+--
+-- END: set up some tables and data needed by the tests.
+--
+
+begin;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId,
+-- which is not currently supported within a worker)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+select pg_get_table_max_parallel_dml_hazard('names2');
+explain (costs off) insert into names2 select * from names;
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+select pg_get_table_max_parallel_dml_hazard('names4');
+explain (costs off) insert into names4 select * from names;
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+insert into names6 select * from names order by last_name returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+explain (costs off) insert into temp_names select * from names;
+insert into temp_names select * from names;
+
+--
+-- Test INSERT with column defaults
+--
+
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+truncate testdef;
+
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+
+alter table parttable1 parallel dml safe;
+
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+select count(*) from parttable1_2;
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+explain (costs off) insert into names_with_safe_trigger select * from names;
+insert into names_with_safe_trigger select * from names;
+
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+insert into names_with_unsafe_trigger select * from names;
+
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+
+create table dom_table_u (x inotnull_u, y int);
+
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+
+rollback;
+
+--
+-- Clean up anything not created in the transaction
+--
+
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
--
2.27.0
Attachment: v17-0006-Workaround-for-query-rewriter-hasModifyingCTE-bug.patch
From 0b7733c62a4bc80aab9dd36bd593982da1586429 Mon Sep 17 00:00:00 2001
From: Greg Nancarrow <gregn4422@gmail.com>
Date: Fri, 6 Aug 2021 13:39:45 +1000
Subject: [PATCH] Workaround for query rewriter bug which results in
modifyingCTE flag not being set.
If a query uses a modifying CTE, the hasModifyingCTE flag should be set in the
query tree, and the query will be regarded as parallel-unsafe. However, in some
cases, a re-written query with a modifying CTE does not have that flag set, due
to a bug in the query rewriter. The workaround is to update
max_parallel_hazard_walker() to detect a modifying CTE in the query and, in
that case, indicate that the query is parallel-unsafe.
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
---
src/backend/optimizer/util/clauses.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..7eb305ffda 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -758,6 +758,30 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
return true;
}
+ /*
+ * ModifyingCTE expressions are treated as parallel-unsafe.
+ *
+ * XXX Normally, if the Query has a modifying CTE, the hasModifyingCTE
+ * flag is set in the Query tree, and the query will be regarded as
+ * parallel-usafe. However, in some cases, a re-written query with a
+ * modifying CTE does not have that flag set, due to a bug in the query
+ * rewriter. The following else-if is a workaround for this bug, to detect
+ * a modifying CTE in the query and regard it as parallel-unsafe. This
+ * comment, and the else-if block immediately below, may be removed once
+ * the bug in the query rewriter is fixed.
+ */
+ else if (IsA(node, CommonTableExpr))
+ {
+ CommonTableExpr *cte = (CommonTableExpr *) node;
+ Query *ctequery = castNode(Query, cte->ctequery);
+
+ if (ctequery->commandType != CMD_SELECT)
+ {
+ context->max_hazard = PROPARALLEL_UNSAFE;
+ return true;
+ }
+ }
+
/*
* As a notational convenience for callers, look through RestrictInfo.
*/
--
2.27.0
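For reference, the query shape that this workaround guards against looks roughly like the following (a hypothetical sketch; the table names are made up). With the rewriter bug, hasModifyingCTE can be left unset on the rewritten query, so the CommonTableExpr check added above is what keeps such a statement out of a parallel plan:

```sql
-- A data-modifying CTE makes the whole query parallel-unsafe,
-- regardless of whether hasModifyingCTE was set by the rewriter.
with moved as (
    delete from source_tbl
    returning *
)
insert into target_tbl
select * from moved;
```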
Attachment: v17-0004-Cache-parallel-dml-safety.patch
From dffebd8f53ffe275814f151ed1ff2dd4dac05707 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@fujitsu.com>
Date: Thu, 19 Aug 2021 13:48:50 +0800
Subject: [PATCH] Cache parallel dml safety
The planner is updated to perform additional parallel-safety checks for a
non-partitioned table if pg_class.relparalleldml is DEFAULT ('d'), and to cache
the parallel safety for the relation.
Whenever any function's parallel-safety is changed, invalidate the cached
parallel-safety for all relations in relcache for a particular database.
For a partitioned table, if pg_class.relparalleldml is DEFAULT ('d'), assume
that the table is UNSAFE to modify in parallel mode.
If pg_class.relparalleldml is SAFE/RESTRICTED/UNSAFE, respect the specified
parallel dml safety instead of checking it again.
---
src/backend/catalog/pg_proc.c | 13 +++++
src/backend/commands/functioncmds.c | 18 ++++++-
src/backend/optimizer/util/clauses.c | 78 ++++++++++++++++++++++------
src/backend/utils/cache/inval.c | 53 +++++++++++++++++++
src/backend/utils/cache/relcache.c | 19 +++++++
src/include/storage/sinval.h | 8 +++
src/include/utils/inval.h | 2 +
src/include/utils/rel.h | 1 +
src/include/utils/relcache.h | 2 +
9 files changed, 176 insertions(+), 18 deletions(-)
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1454d2fb67..9745ee8558 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -39,6 +39,7 @@
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/regproc.h"
#include "utils/rel.h"
@@ -367,6 +368,9 @@ ProcedureCreate(const char *procedureName,
Datum proargnames;
bool isnull;
const char *dropcmd;
+ char old_proparallel;
+
+ old_proparallel = oldproc->proparallel;
if (!replace)
ereport(ERROR,
@@ -559,6 +563,15 @@ ProcedureCreate(const char *procedureName,
tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
CatalogTupleUpdate(rel, &tup->t_self, tup);
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function may no longer be safe to modify in parallel
+ * mode. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (old_proparallel != parallel)
+ CacheInvalidateParallelDML();
+
ReleaseSysCache(oldtup);
is_update = true;
}
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 79d875ab10..57d9ca52e5 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -70,6 +70,7 @@
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@@ -1504,7 +1505,22 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
repl_val, repl_null, repl_repl);
}
if (parallel_item)
- procForm->proparallel = interpret_func_parallel(parallel_item);
+ {
+ char proparallel;
+
+ proparallel = interpret_func_parallel(parallel_item);
+
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function may no longer be safe to modify in parallel
+ * mode. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (proparallel != procForm->proparallel)
+ CacheInvalidateParallelDML();
+
+ procForm->proparallel = proparallel;
+ }
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 749cb0dacd..5c27fc222e 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -187,7 +187,7 @@ static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
-
+static char max_parallel_dml_hazard(Query *parse, max_parallel_hazard_context *context);
/*****************************************************************************
* Aggregate-function clause manipulation
@@ -654,7 +654,6 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
- bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
@@ -664,28 +663,73 @@ max_parallel_hazard(Query *parse)
context.objects = NIL;
context.partition_directory = NULL;
- max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+ if (!max_parallel_hazard_walker((Node *) parse, &context))
+ (void) max_parallel_dml_hazard(parse, &context);
+
+ return context.max_hazard;
+}
+
+/* Check the safety of parallel data modification */
+static char
+max_parallel_dml_hazard(Query *parse,
+ max_parallel_hazard_context *context)
+{
+ RangeTblEntry *rte;
+ Relation target_rel;
+ char hazard;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return context->max_hazard;
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+ target_rel = table_open(rte->relid, NoLock);
+
+ /*
+ * If the user set a specific parallel dml safety (safe/restricted/unsafe),
+ * we respect that setting. If not set, check the safety automatically for
+ * a non-partitioned table; for a partitioned table, consider it unsafe.
+ */
+ hazard = target_rel->rd_rel->relparalleldml;
+ if (target_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+ hazard == PROPARALLEL_DEFAULT)
+ hazard = PROPARALLEL_UNSAFE;
+
+ if (hazard != PROPARALLEL_DEFAULT)
+ (void) max_parallel_hazard_test(hazard, context);
- if (!max_hazard_found &&
- IsModifySupportedInParallelMode(parse->commandType))
+ /* Do parallel safety check for the target relation */
+ else if (!target_rel->rd_paralleldml)
{
- RangeTblEntry *rte;
- Relation target_rel;
+ bool max_hazard_found;
+ char pre_max_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
- rte = rt_fetch(parse->resultRelation, parse->rtable);
+ max_hazard_found = target_rel_parallel_hazard_recurse(target_rel,
+ context,
+ false,
+ false);
- /*
- * The target table is already locked by the caller (this is done in the
- * parse/analyze phase), and remains locked until end-of-transaction.
- */
- target_rel = table_open(rte->relid, NoLock);
+ /* Cache the parallel dml safety of this relation */
+ target_rel->rd_paralleldml = context->max_hazard;
- (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
- &context);
- table_close(target_rel, NoLock);
+ if (!max_hazard_found)
+ (void) max_parallel_hazard_test(pre_max_hazard, context);
}
- return context.max_hazard;
+ /*
+ * If we already cached the parallel dml safety of this relation, we don't
+ * need to check it again.
+ */
+ else
+ (void) max_parallel_hazard_test(target_rel->rd_paralleldml, context);
+
+ table_close(target_rel, NoLock);
+
+ return context->max_hazard;
}
/*
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 9352c68090..bacb18e10e 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -478,6 +478,27 @@ AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
AddInvalidationMessage(group, RelCacheMsgs, &msg);
}
+/*
+ * Add a parallel dml inval entry
+ */
+static void
+AddParallelDMLInvalidationMessage(InvalidationMsgsGroup *group)
+{
+ SharedInvalidationMessage msg;
+
+ /* Don't add a duplicate item. */
+ ProcessMessageSubGroup(group, RelCacheMsgs,
+ if (msg->pd.id == SHAREDINVALPARALLELDML_ID)
+ return);
+
+ /* OK, add the item */
+ msg.pd.id = SHAREDINVALPARALLELDML_ID;
+ /* check AddCatcacheInvalidationMessage() for an explanation */
+ VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
+
+ AddInvalidationMessage(group, RelCacheMsgs, &msg);
+}
+
/*
* Append one group of invalidation messages to another, resetting
* the source group to empty.
@@ -576,6 +597,21 @@ RegisterRelcacheInvalidation(Oid dbId, Oid relId)
transInvalInfo->RelcacheInitFileInval = true;
}
+/*
+ * RegisterParallelDMLInvalidation
+ *
+ * As above, but register an invalidation event for the parallel dml flag in all relcache entries.
+ */
+static void
+RegisterParallelDMLInvalidation(void)
+{
+ AddParallelDMLInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs);
+
+ (void) GetCurrentCommandId(true);
+
+ transInvalInfo->RelcacheInitFileInval = true;
+}
+
/*
* RegisterSnapshotInvalidation
*
@@ -668,6 +704,11 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
else if (msg->sn.dbId == MyDatabaseId)
InvalidateCatalogSnapshot();
}
+ else if (msg->id == SHAREDINVALPARALLELDML_ID)
+ {
+ /* Invalidate the parallel dml flag in all relcache entries */
+ ParallelDMLInvalidate();
+ }
else
elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
@@ -1370,6 +1411,18 @@ CacheInvalidateRelcacheAll(void)
RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
}
+/*
+ * CacheInvalidateParallelDML
+ * Register invalidation of the parallel dml flag in all relcache entries at the end of command.
+ */
+void
+CacheInvalidateParallelDML(void)
+{
+ PrepareInvalidationState();
+
+ RegisterParallelDMLInvalidation();
+}
+
/*
* CacheInvalidateRelcacheByTuple
* As above, but relation is identified by passing its pg_class tuple.
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 70d8ecb1dd..57fe97dcd4 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2934,6 +2934,25 @@ RelationCacheInvalidate(void)
list_free(rebuildList);
}
+/*
+ * ParallelDMLInvalidate
+ * Invalidate the parallel dml flag in all relcache entries.
+ */
+void
+ParallelDMLInvalidate(void)
+{
+ HASH_SEQ_STATUS status;
+ RelIdCacheEnt *idhentry;
+ Relation relation;
+
+ hash_seq_init(&status, RelationIdCache);
+
+ while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
+ {
+ relation = idhentry->reldesc;
+ relation->rd_paralleldml = 0;
+ }
+}
/*
* RelationCloseSmgrByOid - close a relcache entry's smgr link
*
diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h
index f03dc23b14..9859a3bea0 100644
--- a/src/include/storage/sinval.h
+++ b/src/include/storage/sinval.h
@@ -110,6 +110,13 @@ typedef struct
Oid relId; /* relation ID */
} SharedInvalSnapshotMsg;
+#define SHAREDINVALPARALLELDML_ID (-6)
+
+typedef struct
+{
+ int8 id; /* type field --- must be first */
+} SharedInvalParallelDMLMsg;
+
typedef union
{
int8 id; /* type field --- must be first */
@@ -119,6 +126,7 @@ typedef union
SharedInvalSmgrMsg sm;
SharedInvalRelmapMsg rm;
SharedInvalSnapshotMsg sn;
+ SharedInvalParallelDMLMsg pd;
} SharedInvalidationMessage;
diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h
index 770672890b..f1ce1462c1 100644
--- a/src/include/utils/inval.h
+++ b/src/include/utils/inval.h
@@ -64,4 +64,6 @@ extern void CallSyscacheCallbacks(int cacheid, uint32 hashvalue);
extern void InvalidateSystemCaches(void);
extern void LogLogicalInvalidations(void);
+
+extern void CacheInvalidateParallelDML(void);
#endif /* INVAL_H */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b4faa1c123..52574e9d40 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -63,6 +63,7 @@ typedef struct RelationData
bool rd_indexvalid; /* is rd_indexlist valid? (also rd_pkindex and
* rd_replidindex) */
bool rd_statvalid; /* is rd_statlist valid? */
+ char rd_paralleldml; /* parallel dml safety */
/*----------
* rd_createSubid is the ID of the highest subtransaction the rel has
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5ea225ac2d..5813aa50a0 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -128,6 +128,8 @@ extern void RelationCacheInvalidate(void);
extern void RelationCloseSmgrByOid(Oid relationId);
+extern void ParallelDMLInvalidate(void);
+
#ifdef USE_ASSERT_CHECKING
extern void AssertPendingSyncs_RelationCache(void);
#else
--
2.18.4
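To see how the cached safety and the new invalidation message are meant to interact, a session against a server with these patches applied might look like the following (an illustrative sketch only, not actual test output; pg_get_table_max_parallel_dml_hazard is the helper function used in the regression tests above):

```sql
-- Leave the table's parallel dml safety at the default ('d'), so the
-- planner computes it on first use and caches it in rd_paralleldml.
create function f_unsafe() returns int
language plpgsql parallel unsafe as $$ begin return 1; end $$;

create table t (a int default f_unsafe());

-- The first check walks the table's ancillary objects (here, the
-- parallel-unsafe column default) and caches the result in the
-- relcache entry.
select pg_get_table_max_parallel_dml_hazard('t');

-- Changing the function's parallel safety fires
-- CacheInvalidateParallelDML(), which clears rd_paralleldml in every
-- relcache entry, so the next check recomputes the safety rather than
-- returning the stale cached value.
alter function f_unsafe() parallel safe;
select pg_get_table_max_parallel_dml_hazard('t');
```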
Attachment: v17-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch
From 01bdde01fb66e93928cb84b6aeee7dd31ea9ad83 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Tue, 3 Aug 2021 14:13:39 +0800
Subject: [PATCH] CREATE-ALTER-TABLE-PARALLEL-DML
Enable users to declare a table's parallel data-modification safety
(DEFAULT/SAFE/RESTRICTED/UNSAFE).
Add a table property that represents parallel safety of a table for
DML statement execution.
It can be specified as follows:
CREATE TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparalleldml column as 'u',
'r', or 's' like pg_proc's proparallel and as 'd' if not set.
The default is 'd'.
If relparalleldml is specified (safe/restricted/unsafe), then
the planner assumes that all of the table, its descendant partitions,
and their ancillary objects have, at worst, the specified parallel
safety. The user is responsible for its correctness.
If relparalleldml is not set or set to DEFAULT, then for a non-partitioned
table the planner will check the parallel safety automatically (see the 0004
patch). But for a partitioned table, the planner will assume that the table
is UNSAFE to modify in parallel mode.
---
src/backend/bootstrap/bootparse.y | 3 +
src/backend/catalog/heap.c | 7 +-
src/backend/catalog/index.c | 2 +
src/backend/catalog/toasting.c | 1 +
src/backend/commands/cluster.c | 1 +
src/backend/commands/createas.c | 1 +
src/backend/commands/sequence.c | 1 +
src/backend/commands/tablecmds.c | 97 +++++++++++++++++++
src/backend/commands/typecmds.c | 1 +
src/backend/commands/view.c | 1 +
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 1 +
src/backend/parser/gram.y | 73 ++++++++++----
src/backend/utils/cache/relcache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 50 ++++++++--
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 71 ++++++++++++--
src/include/catalog/heap.h | 2 +
src/include/catalog/pg_class.h | 3 +
src/include/catalog/pg_proc.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/nodes/primnodes.h | 1 +
src/include/parser/kwlist.h | 1 +
src/include/utils/relcache.h | 3 +-
.../test_ddl_deparse/test_ddl_deparse.c | 3 +
27 files changed, 302 insertions(+), 39 deletions(-)
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 5fcd004e1b..4712536088 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -25,6 +25,7 @@
#include "catalog/pg_authid.h"
#include "catalog/pg_class.h"
#include "catalog/pg_namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/toasting.h"
#include "commands/defrem.h"
@@ -208,6 +209,7 @@ Boot_CreateStmt:
tupdesc,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
true,
@@ -231,6 +233,7 @@ Boot_CreateStmt:
NIL,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 83746d3fd9..135df961c9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -302,6 +302,7 @@ heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -404,7 +405,8 @@ heap_create(const char *relname,
shared_relation,
mapped_relation,
relpersistence,
- relkind);
+ relkind,
+ relparalleldml);
/*
* Have the storage manager create the relation's disk file, if needed.
@@ -959,6 +961,7 @@ InsertPgClassTuple(Relation pg_class_desc,
values[Anum_pg_class_relhassubclass - 1] = BoolGetDatum(rd_rel->relhassubclass);
values[Anum_pg_class_relispopulated - 1] = BoolGetDatum(rd_rel->relispopulated);
values[Anum_pg_class_relreplident - 1] = CharGetDatum(rd_rel->relreplident);
+ values[Anum_pg_class_relparalleldml - 1] = CharGetDatum(rd_rel->relparalleldml);
values[Anum_pg_class_relispartition - 1] = BoolGetDatum(rd_rel->relispartition);
values[Anum_pg_class_relrewrite - 1] = ObjectIdGetDatum(rd_rel->relrewrite);
values[Anum_pg_class_relfrozenxid - 1] = TransactionIdGetDatum(rd_rel->relfrozenxid);
@@ -1152,6 +1155,7 @@ heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
@@ -1299,6 +1303,7 @@ heap_create_with_catalog(const char *relname,
tupdesc,
relkind,
relpersistence,
+ relparalleldml,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26bfa74ce7..18f3a51686 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
@@ -935,6 +936,7 @@ index_create(Relation heapRelation,
indexTupDesc,
relkind,
relpersistence,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 147b5abc19..b32d2d4132 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -251,6 +251,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
NIL,
RELKIND_TOASTVALUE,
rel->rd_rel->relpersistence,
+ rel->rd_rel->relparalleldml,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index b3d8b6deb0..d1a7603d90 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -693,6 +693,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
NIL,
RELKIND_RELATION,
relpersistence,
+ OldHeap->rd_rel->relparalleldml,
false,
RelationIsMapped(OldHeap),
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 0982851715..7607b91ae8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -107,6 +107,7 @@ create_ctas_internal(List *attrList, IntoClause *into)
create->options = into->options;
create->oncommit = into->onCommit;
create->tablespacename = into->tableSpaceName;
+ create->paralleldmlsafety = into->paralleldmlsafety;
create->if_not_exists = false;
create->accessMethod = into->accessMethod;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..384770050a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -211,6 +211,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
stmt->options = NIL;
stmt->oncommit = ONCOMMIT_NOOP;
stmt->tablespacename = NULL;
+ stmt->paralleldmlsafety = NULL;
stmt->if_not_exists = seq->if_not_exists;
address = DefineRelation(stmt, RELKIND_SEQUENCE, seq->ownerId, NULL, NULL);
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fcd778c62a..5968252648 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -40,6 +40,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_trigger.h"
@@ -603,6 +604,7 @@ static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
static List *GetParentedForeignKeyRefs(Relation partition);
static void ATDetachCheckNoForeignKeyRefs(Relation partition);
static char GetAttributeCompression(Oid atttypid, char *compression);
+static void ATExecParallelDMLSafety(Relation rel, Node *def);
/* ----------------------------------------------------------------
@@ -648,6 +650,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
LOCKMODE parentLockmode;
const char *accessMethod = NULL;
Oid accessMethodId = InvalidOid;
+ char relparalleldml = PROPARALLEL_DEFAULT;
/*
* Truncate relname to appropriate length (probably a waste of time, as
@@ -926,6 +929,32 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
if (accessMethod != NULL)
accessMethodId = get_table_am_oid(accessMethod, false);
+ if (stmt->paralleldmlsafety != NULL)
+ {
+ if (strcmp(stmt->paralleldmlsafety, "safe") == 0)
+ {
+ if (relkind == RELKIND_FOREIGN_TABLE ||
+ stmt->relation->relpersistence == RELPERSISTENCE_TEMP)
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ relname),
+ errdetail_relkind_not_supported(relkind)));
+
+ relparalleldml = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(stmt->paralleldmlsafety, "restricted") == 0)
+ relparalleldml = PROPARALLEL_RESTRICTED;
+ else if (strcmp(stmt->paralleldmlsafety, "unsafe") == 0)
+ relparalleldml = PROPARALLEL_UNSAFE;
+ else if (strcmp(stmt->paralleldmlsafety, "default") == 0)
+ relparalleldml = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
/*
* Create the relation. Inherited defaults and constraints are passed in
* for immediate handling --- since they don't need parsing, they can be
@@ -944,6 +973,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
old_constraints),
relkind,
stmt->relation->relpersistence,
+ relparalleldml,
false,
false,
stmt->oncommit,
@@ -4187,6 +4217,7 @@ AlterTableGetLockLevel(List *cmds)
case AT_SetIdentity:
case AT_DropExpression:
case AT_SetCompression:
+ case AT_ParallelDMLSafety:
cmd_lockmode = AccessExclusiveLock;
break;
@@ -4737,6 +4768,11 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
/* No command-specific prep needed */
pass = AT_PASS_MISC;
break;
+ case AT_ParallelDMLSafety:
+ ATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_FOREIGN_TABLE);
+ /* No command-specific prep needed */
+ pass = AT_PASS_MISC;
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -5142,6 +5178,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab,
case AT_DetachPartitionFinalize:
ATExecDetachPartitionFinalize(rel, ((PartitionCmd *) cmd->def)->name);
break;
+ case AT_ParallelDMLSafety:
+ ATExecParallelDMLSafety(rel, cmd->def);
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -6113,6 +6152,8 @@ alter_table_type_to_string(AlterTableType cmdtype)
return "ALTER COLUMN ... DROP IDENTITY";
case AT_ReAddStatistics:
return NULL; /* not real grammar */
+ case AT_ParallelDMLSafety:
+ return "PARALLEL DML SAFETY";
}
return NULL;
@@ -18773,3 +18814,59 @@ GetAttributeCompression(Oid atttypid, char *compression)
return cmethod;
}
+
+static void
+ATExecParallelDMLSafety(Relation rel, Node *def)
+{
+ Relation pg_class;
+ Oid relid;
+ HeapTuple tuple;
+ char relparallel = PROPARALLEL_DEFAULT;
+ char *parallel = strVal(def);
+
+ if (parallel)
+ {
+ if (strcmp(parallel, "safe") == 0)
+ {
+ /*
+ * We can't support table modification in a parallel worker if it's
+ * a foreign table or partition (there is no FDW API for parallel
+ * access) or a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));
+
+ relparallel = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(parallel, "restricted") == 0)
+ relparallel = PROPARALLEL_RESTRICTED;
+ else if (strcmp(parallel, "unsafe") == 0)
+ relparallel = PROPARALLEL_UNSAFE;
+ else if (strcmp(parallel, "default") == 0)
+ relparallel = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE, or DEFAULT")));
+ }
+
+ relid = RelationGetRelid(rel);
+
+ pg_class = table_open(RelationRelationId, RowExclusiveLock);
+
+ tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));
+
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", relid);
+
+ ((Form_pg_class) GETSTRUCT(tuple))->relparalleldml = relparallel;
+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);
+
+ table_close(pg_class, RowExclusiveLock);
+ heap_freetuple(tuple);
+}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 93eeff950b..a2f06c3e79 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2525,6 +2525,7 @@ DefineCompositeType(RangeVar *typevar, List *coldeflist)
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 4df05a0b33..65f33a95d8 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -227,6 +227,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
createStmt->options = options;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..df41165c5f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3534,6 +3534,7 @@ CopyCreateStmtFields(const CreateStmt *from, CreateStmt *newnode)
COPY_SCALAR_FIELD(oncommit);
COPY_STRING_FIELD(tablespacename);
COPY_STRING_FIELD(accessMethod);
+ COPY_STRING_FIELD(paralleldmlsafety);
COPY_SCALAR_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..67b1966f18 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -146,6 +146,7 @@ _equalIntoClause(const IntoClause *a, const IntoClause *b)
COMPARE_NODE_FIELD(options);
COMPARE_SCALAR_FIELD(onCommit);
COMPARE_STRING_FIELD(tableSpaceName);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_NODE_FIELD(viewQuery);
COMPARE_SCALAR_FIELD(skipData);
@@ -1292,6 +1293,7 @@ _equalCreateStmt(const CreateStmt *a, const CreateStmt *b)
COMPARE_SCALAR_FIELD(oncommit);
COMPARE_STRING_FIELD(tablespacename);
COMPARE_STRING_FIELD(accessMethod);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_SCALAR_FIELD(if_not_exists);
return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 48202d2232..fdc5b63c28 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1107,6 +1107,7 @@ _outIntoClause(StringInfo str, const IntoClause *node)
WRITE_NODE_FIELD(options);
WRITE_ENUM_FIELD(onCommit, OnCommitAction);
WRITE_STRING_FIELD(tableSpaceName);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_NODE_FIELD(viewQuery);
WRITE_BOOL_FIELD(skipData);
}
@@ -2714,6 +2715,7 @@ _outCreateStmtInfo(StringInfo str, const CreateStmt *node)
WRITE_ENUM_FIELD(oncommit, OnCommitAction);
WRITE_STRING_FIELD(tablespacename);
WRITE_STRING_FIELD(accessMethod);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_BOOL_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 77d082d8b4..ba725cb290 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -563,6 +563,7 @@ _readIntoClause(void)
READ_NODE_FIELD(options);
READ_ENUM_FIELD(onCommit, OnCommitAction);
READ_STRING_FIELD(tableSpaceName);
+ READ_STRING_FIELD(paralleldmlsafety);
READ_NODE_FIELD(viewQuery);
READ_BOOL_FIELD(skipData);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..f74a7cac60 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -609,7 +609,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <partboundspec> PartitionBoundSpec
%type <list> hash_partbound
%type <defelt> hash_partbound_elem
-
+%type <str> ParallelDMLSafety
/*
* Non-keyword token types. These are hard-wired into the "flex" lexer.
@@ -654,7 +654,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
DATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS
DEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC
- DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P
+ DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DML DO DOCUMENT_P DOMAIN_P
DOUBLE_P DROP
EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
@@ -2691,6 +2691,21 @@ alter_table_cmd:
n->subtype = AT_NoForceRowSecurity;
$$ = (Node *)n;
}
+ /* ALTER TABLE <name> PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
+ | PARALLEL DML ColId
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString($3);
+ $$ = (Node *)n;
+ }
+ | PARALLEL DML DEFAULT
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString("default");
+ $$ = (Node *)n;
+ }
| alter_generic_options
{
AlterTableCmd *n = makeNode(AlterTableCmd);
@@ -3276,7 +3291,7 @@ copy_generic_opt_arg_list_item:
CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
OptInherit OptPartitionSpec table_access_method_clause OptWith
- OnCommitOption OptTableSpace
+ OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3290,12 +3305,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $11;
n->oncommit = $12;
n->tablespacename = $13;
+ n->paralleldmlsafety = $14;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name '('
OptTableElementList ')' OptInherit OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3309,12 +3325,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $14;
n->oncommit = $15;
n->tablespacename = $16;
+ n->paralleldmlsafety = $17;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3329,12 +3346,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $10;
n->oncommit = $11;
n->tablespacename = $12;
+ n->paralleldmlsafety = $13;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3349,12 +3367,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $13;
n->oncommit = $14;
n->tablespacename = $15;
+ n->paralleldmlsafety = $16;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name
OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3369,12 +3389,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $12;
n->oncommit = $13;
n->tablespacename = $14;
+ n->paralleldmlsafety = $15;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF
qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3389,6 +3411,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $15;
n->oncommit = $16;
n->tablespacename = $17;
+ n->paralleldmlsafety = $18;
n->if_not_exists = true;
$$ = (Node *)n;
}
@@ -4089,6 +4112,11 @@ OptTableSpace: TABLESPACE name { $$ = $2; }
| /*EMPTY*/ { $$ = NULL; }
;
+ParallelDMLSafety: PARALLEL DML name { $$ = $3; }
+ | PARALLEL DML DEFAULT { $$ = pstrdup("default"); }
+ | /*EMPTY*/ { $$ = NULL; }
+ ;
+
OptConsTableSpace: USING INDEX TABLESPACE name { $$ = $4; }
| /*EMPTY*/ { $$ = NULL; }
;
@@ -4236,7 +4264,7 @@ CreateAsStmt:
create_as_target:
qualified_name opt_column_list table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
$$ = makeNode(IntoClause);
$$->rel = $1;
@@ -4245,6 +4273,7 @@ create_as_target:
$$->options = $4;
$$->onCommit = $5;
$$->tableSpaceName = $6;
+ $$->paralleldmlsafety = $7;
$$->viewQuery = NULL;
$$->skipData = false; /* might get changed later */
}
@@ -5024,7 +5053,7 @@ AlterForeignServerStmt: ALTER SERVER name foreign_server_version alter_generic_o
CreateForeignTableStmt:
CREATE FOREIGN TABLE qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5036,15 +5065,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $9;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $10;
- n->options = $11;
+ n->servername = $11;
+ n->options = $12;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5056,15 +5086,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $12;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $13;
- n->options = $14;
+ n->servername = $14;
+ n->options = $15;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5077,15 +5108,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $10;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $11;
- n->options = $12;
+ n->servername = $12;
+ n->options = $13;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5098,10 +5130,11 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $13;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $14;
- n->options = $15;
+ n->servername = $15;
+ n->options = $16;
$$ = (Node *) n;
}
;
@@ -15547,6 +15580,7 @@ unreserved_keyword:
| DICTIONARY
| DISABLE_P
| DISCARD
+ | DML
| DOCUMENT_P
| DOMAIN_P
| DOUBLE_P
@@ -16087,6 +16121,7 @@ bare_label_keyword:
| DISABLE_P
| DISCARD
| DISTINCT
+ | DML
| DO
| DOCUMENT_P
| DOMAIN_P
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..70d8ecb1dd 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -1873,6 +1873,7 @@ formrdesc(const char *relationName, Oid relationReltype,
relation->rd_rel->relkind = RELKIND_RELATION;
relation->rd_rel->relnatts = (int16) natts;
relation->rd_rel->relam = HEAP_TABLE_AM_OID;
+ relation->rd_rel->relparalleldml = PROPARALLEL_DEFAULT;
/*
* initialize attribute tuple form
@@ -3359,7 +3360,8 @@ RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind)
+ char relkind,
+ char relparalleldml)
{
Relation rel;
MemoryContext oldcxt;
@@ -3509,6 +3511,8 @@ RelationBuildLocalRelation(const char *relname,
else
rel->rd_rel->relreplident = REPLICA_IDENTITY_NOTHING;
+ rel->rd_rel->relparalleldml = relparalleldml;
+
/*
* Insert relation physical and logical identifiers (OIDs) into the right
* places. For a mapped relation, we set relfilenode to zero and rely on
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 90ac445bcd..5165202e84 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6253,6 +6253,7 @@ getTables(Archive *fout, int *numTables)
int i_relpersistence;
int i_relispopulated;
int i_relreplident;
+ int i_relparalleldml;
int i_owning_tab;
int i_owning_col;
int i_reltablespace;
@@ -6358,7 +6359,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, am.amname, "
+ "c.relreplident, c.relparalleldml, c.relpages, am.amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
"ELSE 0 END AS foreignserver, "
@@ -6450,7 +6451,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6503,7 +6504,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6556,7 +6557,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6609,7 +6610,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"c.relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6660,7 +6661,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
@@ -6708,7 +6709,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6756,7 +6757,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6803,7 +6804,7 @@ getTables(Archive *fout, int *numTables)
"0 AS toid, "
"0 AS tfrozenxid, 0 AS tminmxid,"
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6872,6 +6873,7 @@ getTables(Archive *fout, int *numTables)
i_relpersistence = PQfnumber(res, "relpersistence");
i_relispopulated = PQfnumber(res, "relispopulated");
i_relreplident = PQfnumber(res, "relreplident");
+ i_relparalleldml = PQfnumber(res, "relparalleldml");
i_relpages = PQfnumber(res, "relpages");
i_foreignserver = PQfnumber(res, "foreignserver");
i_owning_tab = PQfnumber(res, "owning_tab");
@@ -6927,6 +6929,7 @@ getTables(Archive *fout, int *numTables)
tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
tblinfo[i].relispopulated = (strcmp(PQgetvalue(res, i, i_relispopulated), "t") == 0);
tblinfo[i].relreplident = *(PQgetvalue(res, i, i_relreplident));
+ tblinfo[i].relparalleldml = *(PQgetvalue(res, i, i_relparalleldml));
tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
tblinfo[i].minmxid = atooid(PQgetvalue(res, i, i_relminmxid));
@@ -16555,6 +16558,35 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
}
}
+ if (tbinfo->relkind == RELKIND_RELATION ||
+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE ||
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+ {
+ appendPQExpBuffer(q, "\nALTER %sTABLE %s PARALLEL DML ",
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE ? "FOREIGN " : "",
+ qualrelname);
+
+ switch (tbinfo->relparalleldml)
+ {
+ case 's':
+ appendPQExpBuffer(q, "SAFE;\n");
+ break;
+ case 'r':
+ appendPQExpBuffer(q, "RESTRICTED;\n");
+ break;
+ case 'u':
+ appendPQExpBuffer(q, "UNSAFE;\n");
+ break;
+ case 'd':
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ default:
+ /* should not reach here */
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ }
+ }
+
if (tbinfo->forcerowsec)
appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n",
qualrelname);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..8175a0bc82 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -270,6 +270,7 @@ typedef struct _tableInfo
char relpersistence; /* relation persistence */
bool relispopulated; /* relation is populated */
char relreplident; /* replica identifier */
+ char relparalleldml; /* parallel safety of DML on the relation */
char *reltablespace; /* relation tablespace */
char *reloptions; /* options specified by WITH (...) */
char *checkoption; /* WITH CHECK OPTION, if any */
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 8333558bda..f896fe1793 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1656,6 +1656,7 @@ describeOneTableDetails(const char *schemaname,
char *reloftype;
char relpersistence;
char relreplident;
+ char relparalleldml;
char *relam;
} tableinfo;
bool show_column_details = false;
@@ -1669,7 +1670,25 @@ describeOneTableDetails(const char *schemaname,
initPQExpBuffer(&tmpbuf);
/* Get general table info */
- if (pset.sversion >= 120000)
+ if (pset.sversion >= 150000)
+ {
+ printfPQExpBuffer(&buf,
+ "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
+ "c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, "
+ "false AS relhasoids, c.relispartition, %s, c.reltablespace, "
+ "CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, "
+ "c.relpersistence, c.relreplident, am.amname, c.relparalleldml\n"
+ "FROM pg_catalog.pg_class c\n "
+ "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)\n"
+ "LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)\n"
+ "WHERE c.oid = '%s';",
+ (verbose ?
+ "pg_catalog.array_to_string(c.reloptions || "
+ "array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')\n"
+ : "''"),
+ oid);
+ }
+ else if (pset.sversion >= 120000)
{
printfPQExpBuffer(&buf,
"SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
@@ -1853,6 +1872,8 @@ describeOneTableDetails(const char *schemaname,
(char *) NULL : pg_strdup(PQgetvalue(res, 0, 14));
else
tableinfo.relam = NULL;
+ tableinfo.relparalleldml = (pset.sversion >= 150000) ?
+ *(PQgetvalue(res, 0, 15)) : 0;
PQclear(res);
res = NULL;
@@ -3630,6 +3651,21 @@ describeOneTableDetails(const char *schemaname,
printfPQExpBuffer(&buf, _("Access method: %s"), tableinfo.relam);
printTableAddFooter(&cont, buf.data);
}
+
+ if (verbose &&
+ (tableinfo.relkind == RELKIND_RELATION ||
+ tableinfo.relkind == RELKIND_PARTITIONED_TABLE ||
+ tableinfo.relkind == RELKIND_FOREIGN_TABLE) &&
+ tableinfo.relparalleldml != 0)
+ {
+ printfPQExpBuffer(&buf, _("Parallel DML: %s"),
+ tableinfo.relparalleldml == 'd' ? "default" :
+ tableinfo.relparalleldml == 'u' ? "unsafe" :
+ tableinfo.relparalleldml == 'r' ? "restricted" :
+ tableinfo.relparalleldml == 's' ? "safe" :
+ "???");
+ printTableAddFooter(&cont, buf.data);
+ }
}
/* reloptions, if verbose */
@@ -4005,7 +4041,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
PGresult *res;
printQueryOpt myopt = pset.popt;
int cols_so_far;
- bool translate_columns[] = {false, false, true, false, false, false, false, false, false};
+ bool translate_columns[] = {false, false, true, false, false, false, false, false, false, false};
/* If tabtypes is empty, we default to \dtvmsE (but see also command.c) */
if (!(showTables || showIndexes || showViews || showMatViews || showSeq || showForeign))
@@ -4073,22 +4109,43 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
gettext_noop("unlogged"),
gettext_noop("Persistence"));
translate_columns[cols_so_far] = true;
+ cols_so_far++;
}
- /*
- * We don't bother to count cols_so_far below here, as there's no need
- * to; this might change with future additions to the output columns.
- */
-
/*
* Access methods exist for tables, materialized views and indexes.
* This has been introduced in PostgreSQL 12 for tables.
*/
if (pset.sversion >= 120000 && !pset.hide_tableam &&
(showTables || showMatViews || showIndexes))
+ {
appendPQExpBuffer(&buf,
",\n am.amname as \"%s\"",
gettext_noop("Access method"));
+ cols_so_far++;
+ }
+
+ /*
+ * Show whether DML on the relation is marked parallel default ('d'),
+ * unsafe ('u'), restricted ('r'), or safe ('s').
+ * This has been introduced in PostgreSQL 15 for tables.
+ */
+ if (pset.sversion >= 150000)
+ {
+ appendPQExpBuffer(&buf,
+ ",\n CASE c.relparalleldml WHEN 'd' THEN '%s' WHEN 'u' THEN '%s' WHEN 'r' THEN '%s' WHEN 's' THEN '%s' END as \"%s\"",
+ gettext_noop("default"),
+ gettext_noop("unsafe"),
+ gettext_noop("restricted"),
+ gettext_noop("safe"),
+ gettext_noop("Parallel DML"));
+ translate_columns[cols_so_far] = true;
+ }
+
+ /*
+ * We don't bother to count cols_so_far below here, as there's no need
+ * to; this might change with future additions to the output columns.
+ */
/*
* As of PostgreSQL 9.0, use pg_table_size() to show a more accurate
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index 6ce480b49c..b59975919b 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -55,6 +55,7 @@ extern Relation heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -73,6 +74,7 @@ extern Oid heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h
index fef9945ed8..244eac6bd8 100644
--- a/src/include/catalog/pg_class.h
+++ b/src/include/catalog/pg_class.h
@@ -116,6 +116,9 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat
/* see REPLICA_IDENTITY_xxx constants */
char relreplident BKI_DEFAULT(n);
+ /* parallel safety of DML on the relation; see PROPARALLEL_xxx constants */
+ char relparalleldml BKI_DEFAULT(d);
+
/* is relation a partition? */
bool relispartition BKI_DEFAULT(f);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index b33b8b0134..cd52c0e254 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -171,6 +171,8 @@ DECLARE_UNIQUE_INDEX(pg_proc_proname_args_nsp_index, 2691, ProcedureNameArgsNspI
#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
+#define PROPARALLEL_DEFAULT 'd' /* only used for parallel DML safety */
+
/*
* Symbolic values for proargmodes column. Note that these must agree with
* the FunctionParameterMode enum in parsenodes.h; we declare them here to
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..0352e41c6e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1934,7 +1934,8 @@ typedef enum AlterTableType
AT_AddIdentity, /* ADD IDENTITY */
AT_SetIdentity, /* SET identity column options */
AT_DropIdentity, /* DROP IDENTITY */
- AT_ReAddStatistics /* internal to commands/tablecmds.c */
+ AT_ReAddStatistics, /* internal to commands/tablecmds.c */
+ AT_ParallelDMLSafety /* PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
} AlterTableType;
typedef struct ReplicaIdentityStmt
@@ -2180,6 +2181,7 @@ typedef struct CreateStmt
OnCommitAction oncommit; /* what do we do at COMMIT? */
char *tablespacename; /* table space to use, or NULL */
char *accessMethod; /* table access method */
+ char *paralleldmlsafety; /* parallel dml safety */
bool if_not_exists; /* just do nothing if it already exists? */
} CreateStmt;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index c04282f91f..6e679d9f97 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -115,6 +115,7 @@ typedef struct IntoClause
List *options; /* options from WITH clause */
OnCommitAction onCommit; /* what do we do at COMMIT? */
char *tableSpaceName; /* table space to use, or NULL */
+ char *paralleldmlsafety; /* parallel dml safety */
Node *viewQuery; /* materialized view's SELECT query */
bool skipData; /* true for WITH NO DATA */
} IntoClause;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f836acf876..05222faccd 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -139,6 +139,7 @@ PG_KEYWORD("dictionary", DICTIONARY, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("disable", DISABLE_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("discard", DISCARD, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("distinct", DISTINCT, RESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("dml", DML, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("do", DO, RESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("document", DOCUMENT_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("domain", DOMAIN_P, UNRESERVED_KEYWORD, BARE_LABEL)
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index f772855ac6..5ea225ac2d 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -108,7 +108,8 @@ extern Relation RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind);
+ char relkind,
+ char relparalleldml);
/*
* Routines to manage assignment of new relfilenode to a relation
diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
index 1bae1e5438..e1f5678eef 100644
--- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
+++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
@@ -276,6 +276,9 @@ get_altertable_subcmdtypes(PG_FUNCTION_ARGS)
case AT_NoForceRowSecurity:
strtype = "NO FORCE ROW SECURITY";
break;
+ case AT_ParallelDMLSafety:
+ strtype = "PARALLEL DML SAFETY";
+ break;
case AT_GenericOptions:
strtype = "SET OPTIONS";
break;
--
2.27.0
Thursday, August 19, 2021 4:16 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:
On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:
Update the commit message in patches to make it easier for others to review.
CFbot reported a compile error due to recent commit 3aafc03.
Attach rebased patches which fix the error.
The patch can't apply to the HEAD branch due a recent commit.
Attach rebased patches.
Best regards,
Hou zj
Attachments:
v18-0005-Regression-test-and-doc-updates.patch
From 7ec228cd54743d92b67c80c6c362938de06e6305 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@fujitsu.com>
Date: Wed, 1 Sep 2021 15:58:39 +0800
Subject: [PATCH] Regression-test-and-doc-updates
---
contrib/test_decoding/expected/ddl.out | 4 +
doc/src/sgml/func.sgml | 61 ++
doc/src/sgml/ref/alter_foreign_table.sgml | 13 +
doc/src/sgml/ref/alter_function.sgml | 2 +-
doc/src/sgml/ref/alter_table.sgml | 12 +
doc/src/sgml/ref/create_foreign_table.sgml | 39 +
doc/src/sgml/ref/create_table.sgml | 44 ++
doc/src/sgml/ref/create_table_as.sgml | 38 +
src/test/regress/expected/alter_table.out | 2 +
src/test/regress/expected/compression_1.out | 9 +
src/test/regress/expected/copy2.out | 1 +
src/test/regress/expected/create_table.out | 14 +
.../regress/expected/create_table_like.out | 8 +
src/test/regress/expected/domain.out | 2 +
src/test/regress/expected/foreign_data.out | 42 ++
src/test/regress/expected/identity.out | 1 +
src/test/regress/expected/inherit.out | 13 +
src/test/regress/expected/insert.out | 12 +
src/test/regress/expected/insert_parallel.out | 713 ++++++++++++++++++
src/test/regress/expected/psql.out | 58 +-
src/test/regress/expected/publication.out | 4 +
.../regress/expected/replica_identity.out | 1 +
src/test/regress/expected/rowsecurity.out | 1 +
src/test/regress/expected/rules.out | 3 +
src/test/regress/expected/stats_ext.out | 1 +
src/test/regress/expected/triggers.out | 1 +
src/test/regress/expected/update.out | 1 +
src/test/regress/output/tablespace.source | 2 +
src/test/regress/parallel_schedule | 1 +
src/test/regress/sql/insert_parallel.sql | 381 ++++++++++
30 files changed, 1456 insertions(+), 28 deletions(-)
create mode 100644 src/test/regress/expected/insert_parallel.out
create mode 100644 src/test/regress/sql/insert_parallel.sql
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c78..45aa25bff8 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -446,6 +446,7 @@ WITH (user_catalog_table = true)
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -460,6 +461,7 @@ ALTER TABLE replication_metadata RESET (user_catalog_table);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
INSERT INTO replication_metadata(relation, options)
VALUES ('bar', ARRAY['a', 'b']);
@@ -473,6 +475,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = true);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -492,6 +495,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
rewritemeornot | integer | | | | plain | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 78812b2dbe..49278d9e21 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -24250,6 +24250,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
Undefined objects are identified with <literal>NULL</literal> values.
</para></entry>
</row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_parallel_dml_safety</primary>
+ </indexterm>
+ <function>pg_get_table_parallel_dml_safety</function> ( <parameter>table_name</parameter> <type>regclass</type> )
+ <returnvalue>record</returnvalue>
+ ( <parameter>objid</parameter> <type>oid</type>,
+ <parameter>classid</parameter> <type>oid</type>,
+ <parameter>proparallel</parameter> <type>char</type> )
+ </para>
+ <para>
+ Returns a row containing enough information to uniquely identify the
+ parallel unsafe/restricted table-related objects from which the
+ table's parallel DML safety is determined. This information can be
+ used during development to accurately declare a table's parallel DML
+ safety, or to identify problematic objects if parallel DML fails or
+ behaves unexpectedly. Note that when the use of a parallel
+ unsafe/restricted function attached to an object is detected, both
+ the function OID and the object OID are returned.
+ <parameter>classid</parameter> is the OID of the system catalog
+ containing the object;
+ <parameter>objid</parameter> is the OID of the object itself.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_max_parallel_dml_hazard</primary>
+ </indexterm>
+ <function>pg_get_table_max_parallel_dml_hazard</function> ( <type>regclass</type> )
+ <returnvalue>char</returnvalue>
+ </para>
+ <para>
+ Returns the worst parallel DML safety hazard that can be found in the
+ given relation:
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>s</literal> safe
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>r</literal> restricted
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>u</literal> unsafe
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ This function provides a quick check of a table's parallel DML
+ safety without identifying the specific objects involved.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
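For reviewers, here is a hedged sketch of how these two new functions might be exercised once the patch is applied; the table name and the exact output-column usage are illustrative only, taken from the signatures documented above rather than from tested behavior:

```sql
-- Hypothetical example (object names are invented for illustration).
CREATE TABLE orders (id int, note text) PARALLEL DML DEFAULT;

-- Quick check: the worst parallel DML hazard found in the table
-- ('s' safe, 'r' restricted, or 'u' unsafe).
SELECT pg_get_table_max_parallel_dml_hazard('orders'::regclass);

-- Detailed check: which catalog objects make DML on the table
-- parallel restricted or unsafe (classid/objid identify the object,
-- proparallel gives its safety marking).
SELECT classid::regclass, objid, proparallel
  FROM pg_get_table_parallel_dml_safety('orders'::regclass);
```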
diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml
index 7ca03f3ac9..ca4b1c261e 100644
--- a/doc/src/sgml/ref/alter_foreign_table.sgml
+++ b/doc/src/sgml/ref/alter_foreign_table.sgml
@@ -29,6 +29,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
RENAME TO <replaceable class="parameter">new_name</replaceable>
ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
SET SCHEMA <replaceable class="parameter">new_schema</replaceable>
+ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -299,6 +301,17 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See the similar form of <link linkend="sql-altertable"><command>ALTER TABLE</command></link>
+ for more details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml
index 0ee756a94d..a7088bc1cb 100644
--- a/doc/src/sgml/ref/alter_function.sgml
+++ b/doc/src/sgml/ref/alter_function.sgml
@@ -38,7 +38,7 @@ ALTER FUNCTION <replaceable>name</replaceable> [ ( [ [ <replaceable class="param
IMMUTABLE | STABLE | VOLATILE
[ NOT ] LEAKPROOF
[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
- PARALLEL { UNSAFE | RESTRICTED | SAFE }
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
COST <replaceable class="parameter">execution_cost</replaceable>
ROWS <replaceable class="parameter">result_rows</replaceable>
SUPPORT <replaceable class="parameter">support_function</replaceable>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 81291577f8..53bbacf9db 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -37,6 +37,8 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
ATTACH PARTITION <replaceable class="parameter">partition_name</replaceable> { FOR VALUES <replaceable class="parameter">partition_bound_spec</replaceable> | DEFAULT }
ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
DETACH PARTITION <replaceable class="parameter">partition_name</replaceable> [ CONCURRENTLY | FINALIZE ]
+ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -1030,6 +1032,16 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See <link linkend="sql-createtable"><command>CREATE TABLE</command></link> for details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index f9477efe58..32372beed0 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -27,6 +27,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
[, ... ]
] )
[ INHERITS ( <replaceable>parent_table</replaceable> [, ... ] ) ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -36,6 +37,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
| <replaceable>table_constraint</replaceable> }
[, ... ]
) ] <replaceable class="parameter">partition_bound_spec</replaceable>
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -290,6 +292,43 @@ CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ]
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">server_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 473a0a4aeb..5521f5123e 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -33,6 +33,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
OF <replaceable class="parameter">type_name</replaceable> [ (
@@ -45,6 +46,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
PARTITION OF <replaceable class="parameter">parent_table</replaceable> [ (
@@ -57,6 +59,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
<phrase>where <replaceable class="parameter">column_constraint</replaceable> is:</phrase>
@@ -1336,6 +1339,47 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="sql-createtable-paralleldmlsafety">
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the table
+ can't be modified in parallel mode, and this forces a serial execution plan
+ for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader.
+ <literal>PARALLEL DML SAFE</literal> indicates that the data in the table
+ can be modified in parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Note that for a partitioned table, <literal>PARALLEL DML DEFAULT</literal>
+ is the same as <literal>PARALLEL DML UNSAFE</literal>, which means that
+ the data in the table can't be modified in parallel mode.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>USING INDEX TABLESPACE <replaceable class="parameter">tablespace_name</replaceable></literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index 07558ab56c..ba5f80d45c 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -27,6 +27,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+ [ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
AS <replaceable>query</replaceable>
[ WITH [ NO ] DATA ]
</synopsis>
@@ -223,6 +224,43 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, or constraints).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable>query</replaceable></term>
<listitem>
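To summarize the syntax the documentation changes above describe, here is a hedged sketch of the new clause as it would be used across the three affected commands; the table names are invented for illustration, and the clause placement follows the synopses in the patch:

```sql
-- The new clause at table creation time:
CREATE TABLE measurements (ts timestamptz, val float8)
    PARALLEL DML SAFE;

-- ...and changed later, e.g. after adding a trigger that calls a
-- parallel-unsafe function:
ALTER TABLE measurements PARALLEL DML UNSAFE;

-- CREATE TABLE AS accepts the same clause, before AS:
CREATE TABLE archived PARALLEL DML RESTRICTED
    AS SELECT * FROM measurements;
```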
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 4bee0c1173..5fefbe9347 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2206,6 +2206,7 @@ alter table test_storage alter column a set storage external;
b | integer | | | 0 | plain | |
Indexes:
"test_storage_idx" btree (b, a)
+Parallel DML: default
\d+ test_storage_idx
Index "public.test_storage_idx"
@@ -4193,6 +4194,7 @@ ALTER TABLE range_parted2 DETACH PARTITION part_rp CONCURRENTLY;
a | integer | | | | plain | |
Partition key: RANGE (a)
Number of partitions: 0
+Parallel DML: default
-- constraint should be created
\d part_rp
diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out
index 1ce2962d55..ad2b1ff001 100644
--- a/src/test/regress/expected/compression_1.out
+++ b/src/test/regress/expected/compression_1.out
@@ -12,6 +12,7 @@ INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
ERROR: compression method lz4 not supported
@@ -51,6 +52,7 @@ SELECT * INTO cmmove1 FROM cmdata;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | | |
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmmove1;
pg_column_compression
@@ -138,6 +140,7 @@ CREATE TABLE cmdata2 (f1 int);
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
\d+ cmdata2
@@ -145,6 +148,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
\d+ cmdata2
@@ -152,6 +156,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
--changing column storage should not impact the compression method
--but the data should not be compressed
@@ -162,6 +167,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | pglz | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
\d+ cmdata2
@@ -169,6 +175,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | pglz | |
+Parallel DML: default
INSERT INTO cmdata2 VALUES (repeat('123456789', 800));
SELECT pg_column_compression(f1) FROM cmdata2;
@@ -249,6 +256,7 @@ INSERT INTO cmdata VALUES (repeat('123456789', 4004));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmdata;
pg_column_compression
@@ -263,6 +271,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION default;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | | |
+Parallel DML: default
-- test alter compression method for materialized views
ALTER MATERIALIZED VIEW compressmv ALTER COLUMN x SET COMPRESSION lz4;
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 5f3685e9ef..cd0d153461 100644
--- a/src/test/regress/expected/copy2.out
+++ b/src/test/regress/expected/copy2.out
@@ -519,6 +519,7 @@ alter table check_con_tbl add check (check_con_function(check_con_tbl.*));
f1 | integer | | | | plain | |
Check constraints:
"check_con_tbl_check" CHECK (check_con_function(check_con_tbl.*))
+Parallel DML: default
copy check_con_tbl from stdin;
NOTICE: input = {"f1":1}
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index a958b84979..fe10ac8bb0 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -505,6 +505,7 @@ Number of partitions: 0
b | text | | | | extended | |
Partition key: RANGE (((a + 1)), substr(b, 1, 5))
Number of partitions: 0
+Parallel DML: default
INSERT INTO partitioned2 VALUES (1, 'hello');
ERROR: no partition of relation "partitioned2" found for row
@@ -518,6 +519,7 @@ CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO
b | text | | | | extended | |
Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc')
Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text))))
+Parallel DML: default
DROP TABLE partitioned, partitioned2;
-- check reference to partitioned table's rowtype in partition descriptor
@@ -559,6 +561,7 @@ select * from partitioned where partitioned = '(1,2)'::partitioned;
b | integer | | | | plain | |
Partition of: partitioned FOR VALUES IN ('(1,2)')
Partition constraint: (((partitioned1.*)::partitioned IS DISTINCT FROM NULL) AND ((partitioned1.*)::partitioned = '(1,2)'::partitioned))
+Parallel DML: default
drop table partitioned;
-- check that dependencies of partition columns are handled correctly
@@ -618,6 +621,7 @@ Partitions: part_null FOR VALUES IN (NULL),
part_p1 FOR VALUES IN (1),
part_p2 FOR VALUES IN (2),
part_p3 FOR VALUES IN (3)
+Parallel DML: default
-- forbidden expressions for partition bound with list partitioned table
CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES IN (somename);
@@ -1064,6 +1068,7 @@ drop table test_part_coll_posix;
b | integer | | not null | 1 | plain | |
Partition of: parted FOR VALUES IN ('b')
Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))
+Parallel DML: default
-- Both partition bound and partition key in describe output
\d+ part_c
@@ -1076,6 +1081,7 @@ Partition of: parted FOR VALUES IN ('c')
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text))
Partition key: RANGE (b)
Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
+Parallel DML: default
-- a level-2 partition's constraint will include the parent's expressions
\d+ part_c_1_10
@@ -1086,6 +1092,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
b | integer | | not null | 0 | plain | |
Partition of: part_c FOR VALUES FROM (1) TO (10)
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10))
+Parallel DML: default
-- Show partition count in the parent's describe output
-- Tempted to include \d+ output listing partitions with bound info but
@@ -1120,6 +1127,7 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL))
+Parallel DML: default
DROP TABLE unbounded_range_part;
CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE);
@@ -1132,6 +1140,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1))
+Parallel DML: default
CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE);
\d+ range_parted4_2
@@ -1143,6 +1152,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7))))
+Parallel DML: default
CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE);
\d+ range_parted4_3
@@ -1154,6 +1164,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9))
+Parallel DML: default
DROP TABLE range_parted4;
-- user-defined operator class in partition key
@@ -1190,6 +1201,7 @@ SELECT obj_description('parted_col_comment'::regclass);
b | text | | | | extended | |
Partition key: LIST (a)
Number of partitions: 0
+Parallel DML: default
DROP TABLE parted_col_comment;
-- list partitioning on array type column
@@ -1202,6 +1214,7 @@ CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}');
a | integer[] | | | | extended | |
Partition of: arrlp FOR VALUES IN ('{1}', '{2}')
Partition constraint: ((a IS NOT NULL) AND ((a = '{1}'::integer[]) OR (a = '{2}'::integer[])))
+Parallel DML: default
DROP TABLE arrlp;
-- partition on boolean column
@@ -1216,6 +1229,7 @@ create table boolspart_f partition of boolspart for values in (false);
Partition key: LIST (a)
Partitions: boolspart_f FOR VALUES IN (false),
boolspart_t FOR VALUES IN (true)
+Parallel DML: default
drop table boolspart;
-- partitions mixing temporary and permanent relations
diff --git a/src/test/regress/expected/create_table_like.out b/src/test/regress/expected/create_table_like.out
index 0ed94f1d2f..3757e2f8d0 100644
--- a/src/test/regress/expected/create_table_like.out
+++ b/src/test/regress/expected/create_table_like.out
@@ -333,6 +333,7 @@ CREATE TABLE ctlt12_storage (LIKE ctlt1 INCLUDING STORAGE, LIKE ctlt2 INCLUDING
a | text | | not null | | main | |
b | text | | | | extended | |
c | text | | | | external | |
+Parallel DML: default
CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDING COMMENTS);
\d+ ctlt12_comments
@@ -342,6 +343,7 @@ CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDIN
a | text | | not null | | extended | | A
b | text | | | | extended | | B
c | text | | | | extended | | C
+Parallel DML: default
CREATE TABLE ctlt1_inh (LIKE ctlt1 INCLUDING CONSTRAINTS INCLUDING COMMENTS) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -356,6 +358,7 @@ NOTICE: merging constraint "ctlt1_a_check" with inherited definition
Check constraints:
"ctlt1_a_check" CHECK (length(a) > 2)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt1_inh'::regclass;
description
@@ -378,6 +381,7 @@ Check constraints:
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1,
ctlt3
+Parallel DML: default
CREATE TABLE ctlt13_like (LIKE ctlt3 INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS INCLUDING STORAGE) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -395,6 +399,7 @@ Check constraints:
"ctlt3_a_check" CHECK (length(a) < 5)
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt13_like'::regclass;
description
@@ -418,6 +423,7 @@ Check constraints:
Statistics objects:
"public.ctlt_all_a_b_stat" ON a, b FROM ctlt_all
"public.ctlt_all_expr_stat" ON (a || b) FROM ctlt_all
+Parallel DML: default
SELECT c.relname, objsubid, description FROM pg_description, pg_index i, pg_class c WHERE classoid = 'pg_class'::regclass AND objoid = i.indexrelid AND c.oid = i.indexrelid AND i.indrelid = 'ctlt_all'::regclass ORDER BY c.relname, objsubid;
relname | objsubid | description
@@ -458,6 +464,7 @@ Check constraints:
Statistics objects:
"public.pg_attrdef_a_b_stat" ON a, b FROM public.pg_attrdef
"public.pg_attrdef_expr_stat" ON (a || b) FROM public.pg_attrdef
+Parallel DML: default
DROP TABLE public.pg_attrdef;
-- Check that LIKE isn't confused when new table masks the old, either
@@ -480,6 +487,7 @@ Check constraints:
Statistics objects:
"ctl_schema.ctlt1_a_b_stat" ON a, b FROM ctlt1
"ctl_schema.ctlt1_expr_stat" ON (a || b) FROM ctlt1
+Parallel DML: default
ROLLBACK;
DROP TABLE ctlt1, ctlt2, ctlt3, ctlt4, ctlt12_storage, ctlt12_comments, ctlt1_inh, ctlt13_inh, ctlt13_like, ctlt_all, ctla, ctlb CASCADE;
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..cc0bbe85d1 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -276,6 +276,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision
WHERE (dcomptable.d1).i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
@@ -413,6 +414,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1[1].r = dcomptable.d1[1].r - 1::double precision, d1[1].i = dcomptable.d1[1].i + 1::double precision
WHERE dcomptable.d1[1].i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index 426080ae39..dcbcdb512a 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -735,6 +735,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
\det+
List of foreign tables
@@ -857,6 +858,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- can't change the column type if it's used elsewhere
CREATE TABLE use_ft1_column_type (x ft1);
@@ -1396,6 +1398,7 @@ CREATE FOREIGN TABLE ft2 () INHERITS (fd_pt1)
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1407,6 +1410,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
DROP FOREIGN TABLE ft2;
\d+ fd_pt1
@@ -1416,6 +1420,7 @@ DROP FOREIGN TABLE ft2;
c1 | integer | | not null | | plain | |
c2 | text | | | | extended | |
c3 | date | | | | plain | |
+Parallel DML: default
CREATE FOREIGN TABLE ft2 (
c1 integer NOT NULL,
@@ -1431,6 +1436,7 @@ CREATE FOREIGN TABLE ft2 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
\d+ fd_pt1
@@ -1441,6 +1447,7 @@ ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1452,6 +1459,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
CREATE TABLE ct3() INHERITS(ft2);
CREATE FOREIGN TABLE ft3 (
@@ -1475,6 +1483,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1484,6 +1493,7 @@ Child tables: ct3,
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1494,6 +1504,7 @@ Inherits: ft2
c3 | date | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- add attributes recursively
ALTER TABLE fd_pt1 ADD COLUMN c4 integer;
@@ -1514,6 +1525,7 @@ ALTER TABLE fd_pt1 ADD COLUMN c8 integer;
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1532,6 +1544,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1546,6 +1559,7 @@ Child tables: ct3,
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1561,6 +1575,7 @@ Inherits: ft2
c8 | integer | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- alter attributes recursively
ALTER TABLE fd_pt1 ALTER COLUMN c4 SET DEFAULT 0;
@@ -1588,6 +1603,7 @@ ALTER TABLE fd_pt1 ALTER COLUMN c8 SET STORAGE EXTERNAL;
c7 | integer | | | | plain | |
c8 | text | | | | external | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1606,6 +1622,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- drop attributes recursively
ALTER TABLE fd_pt1 DROP COLUMN c4;
@@ -1621,6 +1638,7 @@ ALTER TABLE fd_pt1 DROP COLUMN c8;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1634,6 +1652,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- add constraints recursively
ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk1 CHECK (c1 > 0) NO INHERIT;
@@ -1661,6 +1680,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1676,6 +1696,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
DROP FOREIGN TABLE ft2; -- ERROR
ERROR: cannot drop foreign table ft2 because other objects depend on it
@@ -1708,6 +1729,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1721,6 +1743,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- drop constraints recursively
ALTER TABLE fd_pt1 DROP CONSTRAINT fd_pt1chk1 CASCADE;
@@ -1738,6 +1761,7 @@ ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk3 CHECK (c2 <> '') NOT VALID;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text) NOT VALID
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1752,6 +1776,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- VALIDATE CONSTRAINT need do nothing on foreign tables
ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
@@ -1765,6 +1790,7 @@ ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1779,6 +1805,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- changes name of an attribute recursively
ALTER TABLE fd_pt1 RENAME COLUMN c1 TO f1;
@@ -1796,6 +1823,7 @@ ALTER TABLE fd_pt1 RENAME CONSTRAINT fd_pt1chk3 TO f2_check;
Check constraints:
"f2_check" CHECK (f2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1810,6 +1838,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- TRUNCATE doesn't work on foreign tables, either directly or recursively
TRUNCATE ft2; -- ERROR
@@ -1859,6 +1888,7 @@ CREATE FOREIGN TABLE fd_pt2_1 PARTITION OF fd_pt2 FOR VALUES IN (1)
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1871,6 +1901,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- partition cannot have additional columns
DROP FOREIGN TABLE fd_pt2_1;
@@ -1890,6 +1921,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c4 | character(1) | | | | | extended | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: table "fd_pt2_1" contains column "c4" not found in parent "fd_pt2"
@@ -1904,6 +1936,7 @@ DROP FOREIGN TABLE fd_pt2_1;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
CREATE FOREIGN TABLE fd_pt2_1 (
c1 integer NOT NULL,
@@ -1919,6 +1952,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- no attach partition validation occurs for foreign tables
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
@@ -1931,6 +1965,7 @@ ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1943,6 +1978,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot add column to a partition
ALTER TABLE fd_pt2_1 ADD c4 char;
@@ -1959,6 +1995,7 @@ ALTER TABLE fd_pt2_1 ADD CONSTRAINT p21chk CHECK (c2 <> '');
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1973,6 +2010,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot drop inherited NOT NULL constraint from a partition
ALTER TABLE fd_pt2_1 ALTER c1 DROP NOT NULL;
@@ -1989,6 +2027,7 @@ ALTER TABLE fd_pt2 ALTER c2 SET NOT NULL;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2001,6 +2040,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: column "c2" in child table must be marked NOT NULL
@@ -2019,6 +2059,7 @@ Partition key: LIST (c1)
Check constraints:
"fd_pt2chk1" CHECK (c1 > 0)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2031,6 +2072,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: child table is missing constraint "fd_pt2chk1"
diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out
index 99811570b7..a5b5a1b24d 100644
--- a/src/test/regress/expected/identity.out
+++ b/src/test/regress/expected/identity.out
@@ -506,6 +506,7 @@ TABLE itest8;
f3 | integer | | not null | generated by default as identity | plain | |
f4 | bigint | | not null | generated always as identity | plain | |
f5 | bigint | | | | plain | |
+Parallel DML: default
\d itest8_f2_seq
Sequence "public.itest8_f2_seq"
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 2d49e765de..0a720862eb 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1059,6 +1059,7 @@ ALTER TABLE inhts RENAME d TO dd;
dd | integer | | | | plain | |
Inherits: inht1,
inhs1
+Parallel DML: default
DROP TABLE inhts;
-- Test for renaming in diamond inheritance
@@ -1079,6 +1080,7 @@ ALTER TABLE inht1 RENAME aa TO aaa;
z | integer | | | | plain | |
Inherits: inht2,
inht3
+Parallel DML: default
CREATE TABLE inhts (d int) INHERITS (inht2, inhs1);
NOTICE: merging multiple inherited definitions of column "b"
@@ -1096,6 +1098,7 @@ ERROR: cannot rename inherited column "b"
d | integer | | | | plain | |
Inherits: inht2,
inhs1
+Parallel DML: default
WITH RECURSIVE r AS (
SELECT 'inht1'::regclass AS inhrelid
@@ -1142,6 +1145,7 @@ CREATE TABLE test_constraints_inh () INHERITS (test_constraints);
Indexes:
"test_constraints_val1_val2_key" UNIQUE CONSTRAINT, btree (val1, val2)
Child tables: test_constraints_inh
+Parallel DML: default
ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key;
\d+ test_constraints
@@ -1152,6 +1156,7 @@ ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Child tables: test_constraints_inh
+Parallel DML: default
\d+ test_constraints_inh
Table "public.test_constraints_inh"
@@ -1161,6 +1166,7 @@ Child tables: test_constraints_inh
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Inherits: test_constraints
+Parallel DML: default
DROP TABLE test_constraints_inh;
DROP TABLE test_constraints;
@@ -1177,6 +1183,7 @@ CREATE TABLE test_ex_constraints_inh () INHERITS (test_ex_constraints);
Indexes:
"test_ex_constraints_c_excl" EXCLUDE USING gist (c WITH &&)
Child tables: test_ex_constraints_inh
+Parallel DML: default
ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
\d+ test_ex_constraints
@@ -1185,6 +1192,7 @@ ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Child tables: test_ex_constraints_inh
+Parallel DML: default
\d+ test_ex_constraints_inh
Table "public.test_ex_constraints_inh"
@@ -1192,6 +1200,7 @@ Child tables: test_ex_constraints_inh
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Inherits: test_ex_constraints
+Parallel DML: default
DROP TABLE test_ex_constraints_inh;
DROP TABLE test_ex_constraints;
@@ -1208,6 +1217,7 @@ Indexes:
"test_primary_constraints_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "test_foreign_constraints" CONSTRAINT "test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
+Parallel DML: default
\d+ test_foreign_constraints
Table "public.test_foreign_constraints"
@@ -1217,6 +1227,7 @@ Referenced by:
Foreign-key constraints:
"test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
Child tables: test_foreign_constraints_inh
+Parallel DML: default
ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id1_fkey;
\d+ test_foreign_constraints
@@ -1225,6 +1236,7 @@ ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Child tables: test_foreign_constraints_inh
+Parallel DML: default
\d+ test_foreign_constraints_inh
Table "public.test_foreign_constraints_inh"
@@ -1232,6 +1244,7 @@ Child tables: test_foreign_constraints_inh
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Inherits: test_foreign_constraints
+Parallel DML: default
DROP TABLE test_foreign_constraints_inh;
DROP TABLE test_foreign_constraints;
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..c8440449c1 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -177,6 +177,7 @@ Rules:
irule3 AS
ON INSERT TO inserttest2 DO INSERT INTO inserttest (f4[1].if1, f4[1].if2[2]) SELECT new.f1,
new.f2
+Parallel DML: default
drop table inserttest2;
drop table inserttest;
@@ -482,6 +483,7 @@ Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'),
part_null FOR VALUES IN (NULL),
part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED,
part_default DEFAULT, PARTITIONED
+Parallel DML: default
-- cleanup
drop table range_parted, list_parted;
@@ -497,6 +499,7 @@ create table part_default partition of list_parted default;
a | integer | | | | plain | |
Partition of: list_parted DEFAULT
No partition constraint
+Parallel DML: default
insert into part_default values (null);
insert into part_default values (1);
@@ -888,6 +891,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE),
mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE),
mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
+Parallel DML: default
\d+ mcrparted1_lt_b
Table "public.mcrparted1_lt_b"
@@ -897,6 +901,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
+Parallel DML: default
\d+ mcrparted2_b
Table "public.mcrparted2_b"
@@ -906,6 +911,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text) AND (a < 'c'::text))
+Parallel DML: default
\d+ mcrparted3_c_to_common
Table "public.mcrparted3_c_to_common"
@@ -915,6 +921,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text) AND (a < 'common'::text))
+Parallel DML: default
\d+ mcrparted4_common_lt_0
Table "public.mcrparted4_common_lt_0"
@@ -924,6 +931,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MINVALUE) TO ('common', 0)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b < 0))
+Parallel DML: default
\d+ mcrparted5_common_0_to_10
Table "public.mcrparted5_common_0_to_10"
@@ -933,6 +941,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 0) TO ('common', 10)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 0) AND (b < 10))
+Parallel DML: default
\d+ mcrparted6_common_ge_10
Table "public.mcrparted6_common_ge_10"
@@ -942,6 +951,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 10))
+Parallel DML: default
\d+ mcrparted7_gt_common_lt_d
Table "public.mcrparted7_gt_common_lt_d"
@@ -951,6 +961,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::text) AND (a < 'd'::text))
+Parallel DML: default
\d+ mcrparted8_ge_d
Table "public.mcrparted8_ge_d"
@@ -960,6 +971,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text))
+Parallel DML: default
insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10),
('comm', -10), ('common', -10), ('common', 0), ('common', 10),
diff --git a/src/test/regress/expected/insert_parallel.out b/src/test/regress/expected/insert_parallel.out
new file mode 100644
index 0000000000..304237f619
--- /dev/null
+++ b/src/test/regress/expected/insert_parallel.out
@@ -0,0 +1,713 @@
+--
+-- PARALLEL
+--
+--
+-- START: setup some tables and data needed by the tests.
+--
+-- Setup - index expressions test
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+-- Setup - column default tests
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+--
+-- END: setup some tables and data needed by the tests.
+--
+begin;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_trigger | r
+ pg_proc | r
+ pg_trigger | r
+(4 rows)
+
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_safe
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------
+ Insert on para_insert_p1
+ -> Seq Scan on tenk1
+(2 rows)
+
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------------------------
+ Insert on para_insert_with_parallel_unsafe
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------------
+ Insert on para_insert_with_parallel_restricted
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_auto
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+NOTICE: truncate cascades to table "para_insert_f1"
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+ QUERY PLAN
+----------------------------------------------
+ Insert on para_insert_p1
+ -> Gather Merge
+ Workers Planned: 4
+ -> Sort
+ Sort Key: tenk1.unique1
+ -> Parallel Seq Scan on tenk1
+(6 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_data1
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+ Filter: (a = 10)
+(5 rows)
+
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+ data
+------
+ 10
+(1 row)
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- because enforcing the foreign key in a parallel worker would create
+-- a new commandId, and that is not currently supported within a worker)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_f1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (the ON CONFLICT ... DO UPDATE insert should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_conflict_table
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+ QUERY PLAN
+------------------------------------------------------
+ Insert on test_conflict_table
+ Conflict Resolution: UPDATE
+ Conflict Arbiter Indexes: test_conflict_table_pkey
+ -> Seq Scan on test_data
+(4 rows)
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_index | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names2');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names2 select * from names;
+ QUERY PLAN
+-------------------------
+ Insert on names2
+ -> Seq Scan on names
+(2 rows)
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_index | r
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names4');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into names4 select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names4
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+ QUERY PLAN
+----------------------------------------
+ Insert on names5
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names6
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names6 select * from names order by last_name returning *;
+ index | first_name | last_name
+-------+------------+-------------
+ 2 | niels | bohr
+ 1 | albert | einstein
+ 4 | leonhard | euler
+ 8 | richard | feynman
+ 5 | stephen | hawking
+ 6 | isaac | newton
+ 3 | erwin | schrodinger
+ 7 | alan | turing
+(8 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names7
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ last_name_then_first_name
+---------------------------
+ bohr, niels
+ einstein, albert
+ euler, leonhard
+ feynman, richard
+ hawking, stephen
+ newton, isaac
+ schrodinger, erwin
+ turing, alan
+(8 rows)
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_class | r
+(1 row)
+
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into temp_names select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on temp_names
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into temp_names select * from names;
+--
+-- Test INSERT with column defaults
+--
+--
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on testdef
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+ a | b | c | d
+----+----+----+----
+ 1 | 2 | 10 | 8
+ 2 | 4 | 10 | 16
+ 3 | 6 | 10 | 24
+ 4 | 8 | 10 | 32
+ 5 | 10 | 10 | 40
+ 6 | 12 | 10 | 48
+ 7 | 14 | 10 | 56
+ 8 | 16 | 10 | 64
+ 9 | 18 | 10 | 72
+ 10 | 20 | 10 | 80
+(10 rows)
+
+truncate testdef;
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+alter table parttable1 parallel dml safe;
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on parttable1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+ count
+-------
+ 5000
+(1 row)
+
+select count(*) from parttable1_2;
+ count
+-------
+ 5000
+(1 row)
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on table_check_b
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+(0 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ s
+(1 row)
+
+explain (costs off) insert into names_with_safe_trigger select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names_with_safe_trigger
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into names_with_safe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_safe
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+ QUERY PLAN
+-------------------------------------
+ Insert on names_with_unsafe_trigger
+ -> Seq Scan on names
+(2 rows)
+
+insert into names_with_unsafe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_unsafe
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------------
+ Insert on part_unsafe_trigger
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+create table dom_table_u (x inotnull_u, y int);
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on dom_table_u
+ -> Seq Scan on tenk1
+(2 rows)
+
+rollback;
+--
+-- Clean up anything not created in the transaction
+--
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 1b2f6bc418..760abca4e8 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2818,6 +2818,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2825,6 +2826,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\set HIDE_TABLEAM off
\d+ tbl_heap_psql
@@ -2834,6 +2836,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap_psql
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2842,50 +2845,51 @@ Access method: heap_psql
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap
+Parallel DML: default
-- AM is displayed for tables, indexes and materialized views.
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | | default | 0 bytes |
(4 rows)
\dt+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+---------------+-------+----------------------+-------------+---------------+---------+-------------
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+---------------+-------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
(2 rows)
\dm+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
(1 row)
-- But not for views and sequences.
\dv+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+----------------+------+----------------------+-------------+---------+-------------
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+----------------+------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(1 row)
\set HIDE_TABLEAM on
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(4 rows)
RESET ROLE;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4a5ef0bc24..ffb498dc88 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -85,6 +85,7 @@ Indexes:
"testpub_tbl2_pkey" PRIMARY KEY, btree (id)
Publications:
"testpub_foralltables"
+Parallel DML: default
\dRp+ testpub_foralltables
Publication testpub_foralltables
@@ -198,6 +199,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\d+ testpub_tbl1
Table "public.testpub_tbl1"
@@ -211,6 +213,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\dRp+ testpub_default
Publication testpub_default
@@ -236,6 +239,7 @@ Indexes:
Publications:
"testpib_ins_trunct"
"testpub_fortbl"
+Parallel DML: default
-- permissions
SET ROLE regress_publication_user2;
diff --git a/src/test/regress/expected/replica_identity.out b/src/test/regress/expected/replica_identity.out
index 79002197a7..482fe4d8c4 100644
--- a/src/test/regress/expected/replica_identity.out
+++ b/src/test/regress/expected/replica_identity.out
@@ -171,6 +171,7 @@ Indexes:
"test_replica_identity_unique_defer" UNIQUE CONSTRAINT, btree (keya, keyb) DEFERRABLE
"test_replica_identity_unique_nondefer" UNIQUE CONSTRAINT, btree (keya, keyb)
Replica Identity: FULL
+Parallel DML: default
ALTER TABLE test_replica_identity REPLICA IDENTITY NOTHING;
SELECT relreplident FROM pg_class WHERE oid = 'test_replica_identity'::regclass;
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 89397e41f0..26ab706515 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -958,6 +958,7 @@ Policies:
Partitions: part_document_fiction FOR VALUES FROM (11) TO (12),
part_document_nonfiction FOR VALUES FROM (99) TO (100),
part_document_satire FOR VALUES FROM (55) TO (56)
+Parallel DML: default
SELECT * FROM pg_policies WHERE schemaname = 'regress_rls_schema' AND tablename like '%part_document%' ORDER BY policyname;
schemaname | tablename | policyname | permissive | roles | cmd | qual | with_check
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2fa00a3c29..ea8a737c1c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -3180,6 +3180,7 @@ Rules:
r3 AS
ON DELETE TO rules_src DO
NOTIFY rules_src_deletion
+Parallel DML: default
--
-- Ensure an aliased target relation for insert is correctly deparsed.
@@ -3208,6 +3209,7 @@ Rules:
r5 AS
ON UPDATE TO rules_src DO INSTEAD UPDATE rules_log trgt SET tag = 'updated'::text
WHERE trgt.f1 = new.f1
+Parallel DML: default
--
-- Also check multiassignment deparsing.
@@ -3231,6 +3233,7 @@ Rules:
WHERE trgt.f1 = new.f1
RETURNING new.f1,
new.f2
+Parallel DML: default
drop table rule_t1, rule_dest;
--
diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out
index a7f12e989d..06c1d25326 100644
--- a/src/test/regress/expected/stats_ext.out
+++ b/src/test/regress/expected/stats_ext.out
@@ -156,6 +156,7 @@ ALTER STATISTICS ab1_a_b_stats SET STATISTICS -1;
b | integer | | | | plain | |
Statistics objects:
"public.ab1_a_b_stats" ON a, b FROM ab1
+Parallel DML: default
-- partial analyze doesn't build stats either
ANALYZE ab1 (a);
diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out
index 5d124cf96f..13e0547302 100644
--- a/src/test/regress/expected/triggers.out
+++ b/src/test/regress/expected/triggers.out
@@ -3483,6 +3483,7 @@ alter trigger parenttrig on parent rename to anothertrig;
Triggers:
parenttrig AFTER INSERT ON child FOR EACH ROW EXECUTE FUNCTION f()
Inherits: parent
+Parallel DML: default
drop table parent, child;
drop function f();
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index c809f88f54..3b981ae2aa 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -753,6 +753,7 @@ create table part_def partition of range_parted default;
e | character varying | | | | extended | |
Partition of: range_parted DEFAULT
Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint)))))
+Parallel DML: default
insert into range_parted values ('c', 9);
-- ok
diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source
index 1bbe7e0323..8d17677072 100644
--- a/src/test/regress/output/tablespace.source
+++ b/src/test/regress/output/tablespace.source
@@ -339,6 +339,7 @@ Indexes:
"part_a_idx" btree (a), tablespace "regress_tblspace"
Partitions: testschema.part1 FOR VALUES IN (1),
testschema.part2 FOR VALUES IN (2)
+Parallel DML: default
\d testschema.part1
Table "testschema.part1"
@@ -358,6 +359,7 @@ Partition of: testschema.part FOR VALUES IN (1)
Partition constraint: ((a IS NOT NULL) AND (a = 1))
Indexes:
"part1_a_idx" btree (a), tablespace "regress_tblspace"
+Parallel DML: default
\d testschema.part_a_idx
Partitioned index "testschema.part_a_idx"
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 7be89178f0..daf0bad4d5 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
+test: insert_parallel
# no relation related tests can be put in this group
test: publication subscription
diff --git a/src/test/regress/sql/insert_parallel.sql b/src/test/regress/sql/insert_parallel.sql
new file mode 100644
index 0000000000..65ab8b79d0
--- /dev/null
+++ b/src/test/regress/sql/insert_parallel.sql
@@ -0,0 +1,381 @@
+--
+-- PARALLEL
+--
+
+--
+-- START: setup some tables and data needed by the tests.
+--
+
+-- Setup - index expressions test
+
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+
+
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+
+-- Setup - column default tests
+
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+
+--
+-- END: setup some tables and data needed by the tests.
+--
+
+begin;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId
+-- and within a worker this is not currently supported)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+select pg_get_table_max_parallel_dml_hazard('names2');
+explain (costs off) insert into names2 select * from names;
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+select pg_get_table_max_parallel_dml_hazard('names4');
+explain (costs off) insert into names4 select * from names;
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+insert into names6 select * from names order by last_name returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+explain (costs off) insert into temp_names select * from names;
+insert into temp_names select * from names;
+
+--
+-- Test INSERT with column defaults
+--
+--
+
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+truncate testdef;
+
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+
+alter table parttable1 parallel dml safe;
+
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+select count(*) from parttable1_2;
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+explain (costs off) insert into names_with_safe_trigger select * from names;
+insert into names_with_safe_trigger select * from names;
+
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+insert into names_with_unsafe_trigger select * from names;
+
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+
+create table dom_table_u (x inotnull_u, y int);
+
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+
+rollback;
+
+--
+-- Clean up anything not created in the transaction
+--
+
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
--
2.18.4
Attachment: v18-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch (application/octet-stream)
From 01bdde01fb66e93928cb84b6aeee7dd31ea9ad83 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Tue, 3 Aug 2021 14:13:39 +0800
Subject: [PATCH] CREATE-ALTER-TABLE-PARALLEL-DML
Enable users to declare a table's parallel data-modification safety
(DEFAULT/SAFE/RESTRICTED/UNSAFE).
Add a table property that represents parallel safety of a table for
DML statement execution.
It can be specified as follows:
CREATE TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparalleldml column as 'u',
'r', or 's', mirroring pg_proc's proparallel, or as 'd' (the default)
when not set.
If relparalleldml is specified (safe/restricted/unsafe), the planner
assumes that the table, all of its descendant partitions, and their
ancillary objects have, at worst, the specified parallel safety. The
user is responsible for the correctness of this declaration.
If relparalleldml is not set, or is set to DEFAULT, the planner checks
the parallel safety of a non-partitioned table automatically (see the
0004 patch). For a partitioned table, the planner assumes the table is
UNSAFE to modify in parallel mode.
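For illustration, the syntax this patch adds would be used roughly as
follows (a sketch based on the grammar changes below; the table and
column names are made up for the example):

```sql
-- Declare at creation time that parallel DML on this table is safe.
CREATE TABLE orders (id int, note text) PARALLEL DML SAFE;

-- Change or reset the declaration later.
ALTER TABLE orders PARALLEL DML RESTRICTED;
ALTER TABLE orders PARALLEL DML DEFAULT;

-- The setting is stored in pg_class.relparalleldml ('s', 'r', 'u', 'd').
SELECT relname, relparalleldml FROM pg_class WHERE relname = 'orders';
```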
---
src/backend/bootstrap/bootparse.y | 3 +
src/backend/catalog/heap.c | 7 +-
src/backend/catalog/index.c | 2 +
src/backend/catalog/toasting.c | 1 +
src/backend/commands/cluster.c | 1 +
src/backend/commands/createas.c | 1 +
src/backend/commands/sequence.c | 1 +
src/backend/commands/tablecmds.c | 97 +++++++++++++++++++
src/backend/commands/typecmds.c | 1 +
src/backend/commands/view.c | 1 +
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 1 +
src/backend/parser/gram.y | 73 ++++++++++----
src/backend/utils/cache/relcache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 50 ++++++++--
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 71 ++++++++++++--
src/include/catalog/heap.h | 2 +
src/include/catalog/pg_class.h | 3 +
src/include/catalog/pg_proc.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/nodes/primnodes.h | 1 +
src/include/parser/kwlist.h | 1 +
src/include/utils/relcache.h | 3 +-
.../test_ddl_deparse/test_ddl_deparse.c | 3 +
27 files changed, 302 insertions(+), 39 deletions(-)
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 5fcd004e1b..4712536088 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -25,6 +25,7 @@
#include "catalog/pg_authid.h"
#include "catalog/pg_class.h"
#include "catalog/pg_namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/toasting.h"
#include "commands/defrem.h"
@@ -208,6 +209,7 @@ Boot_CreateStmt:
tupdesc,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
true,
@@ -231,6 +233,7 @@ Boot_CreateStmt:
NIL,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 83746d3fd9..135df961c9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -302,6 +302,7 @@ heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -404,7 +405,8 @@ heap_create(const char *relname,
shared_relation,
mapped_relation,
relpersistence,
- relkind);
+ relkind,
+ relparalleldml);
/*
* Have the storage manager create the relation's disk file, if needed.
@@ -959,6 +961,7 @@ InsertPgClassTuple(Relation pg_class_desc,
values[Anum_pg_class_relhassubclass - 1] = BoolGetDatum(rd_rel->relhassubclass);
values[Anum_pg_class_relispopulated - 1] = BoolGetDatum(rd_rel->relispopulated);
values[Anum_pg_class_relreplident - 1] = CharGetDatum(rd_rel->relreplident);
+ values[Anum_pg_class_relparalleldml - 1] = CharGetDatum(rd_rel->relparalleldml);
values[Anum_pg_class_relispartition - 1] = BoolGetDatum(rd_rel->relispartition);
values[Anum_pg_class_relrewrite - 1] = ObjectIdGetDatum(rd_rel->relrewrite);
values[Anum_pg_class_relfrozenxid - 1] = TransactionIdGetDatum(rd_rel->relfrozenxid);
@@ -1152,6 +1155,7 @@ heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
@@ -1299,6 +1303,7 @@ heap_create_with_catalog(const char *relname,
tupdesc,
relkind,
relpersistence,
+ relparalleldml,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26bfa74ce7..18f3a51686 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
@@ -935,6 +936,7 @@ index_create(Relation heapRelation,
indexTupDesc,
relkind,
relpersistence,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 147b5abc19..b32d2d4132 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -251,6 +251,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
NIL,
RELKIND_TOASTVALUE,
rel->rd_rel->relpersistence,
+ rel->rd_rel->relparalleldml,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index b3d8b6deb0..d1a7603d90 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -693,6 +693,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
NIL,
RELKIND_RELATION,
relpersistence,
+ OldHeap->rd_rel->relparalleldml,
false,
RelationIsMapped(OldHeap),
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 0982851715..7607b91ae8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -107,6 +107,7 @@ create_ctas_internal(List *attrList, IntoClause *into)
create->options = into->options;
create->oncommit = into->onCommit;
create->tablespacename = into->tableSpaceName;
+ create->paralleldmlsafety = into->paralleldmlsafety;
create->if_not_exists = false;
create->accessMethod = into->accessMethod;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..384770050a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -211,6 +211,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
stmt->options = NIL;
stmt->oncommit = ONCOMMIT_NOOP;
stmt->tablespacename = NULL;
+ stmt->paralleldmlsafety = NULL;
stmt->if_not_exists = seq->if_not_exists;
address = DefineRelation(stmt, RELKIND_SEQUENCE, seq->ownerId, NULL, NULL);
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fcd778c62a..5968252648 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -40,6 +40,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_trigger.h"
@@ -603,6 +604,7 @@ static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
static List *GetParentedForeignKeyRefs(Relation partition);
static void ATDetachCheckNoForeignKeyRefs(Relation partition);
static char GetAttributeCompression(Oid atttypid, char *compression);
+static void ATExecParallelDMLSafety(Relation rel, Node *def);
/* ----------------------------------------------------------------
@@ -648,6 +650,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
LOCKMODE parentLockmode;
const char *accessMethod = NULL;
Oid accessMethodId = InvalidOid;
+ char relparalleldml = PROPARALLEL_DEFAULT;
/*
* Truncate relname to appropriate length (probably a waste of time, as
@@ -926,6 +929,32 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
if (accessMethod != NULL)
accessMethodId = get_table_am_oid(accessMethod, false);
+ if (stmt->paralleldmlsafety != NULL)
+ {
+ if (strcmp(stmt->paralleldmlsafety, "safe") == 0)
+ {
+ if (relkind == RELKIND_FOREIGN_TABLE ||
+ stmt->relation->relpersistence == RELPERSISTENCE_TEMP)
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ relname),
+ errdetail_relkind_not_supported(relkind)));
+
+ relparalleldml = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(stmt->paralleldmlsafety, "restricted") == 0)
+ relparalleldml = PROPARALLEL_RESTRICTED;
+ else if (strcmp(stmt->paralleldmlsafety, "unsafe") == 0)
+ relparalleldml = PROPARALLEL_UNSAFE;
+ else if (strcmp(stmt->paralleldmlsafety, "default") == 0)
+ relparalleldml = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
/*
* Create the relation. Inherited defaults and constraints are passed in
* for immediate handling --- since they don't need parsing, they can be
@@ -944,6 +973,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
old_constraints),
relkind,
stmt->relation->relpersistence,
+ relparalleldml,
false,
false,
stmt->oncommit,
@@ -4187,6 +4217,7 @@ AlterTableGetLockLevel(List *cmds)
case AT_SetIdentity:
case AT_DropExpression:
case AT_SetCompression:
+ case AT_ParallelDMLSafety:
cmd_lockmode = AccessExclusiveLock;
break;
@@ -4737,6 +4768,11 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
/* No command-specific prep needed */
pass = AT_PASS_MISC;
break;
+ case AT_ParallelDMLSafety:
+ ATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_FOREIGN_TABLE);
+ /* No command-specific prep needed */
+ pass = AT_PASS_MISC;
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -5142,6 +5178,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab,
case AT_DetachPartitionFinalize:
ATExecDetachPartitionFinalize(rel, ((PartitionCmd *) cmd->def)->name);
break;
+ case AT_ParallelDMLSafety:
+ ATExecParallelDMLSafety(rel, cmd->def);
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -6113,6 +6152,8 @@ alter_table_type_to_string(AlterTableType cmdtype)
return "ALTER COLUMN ... DROP IDENTITY";
case AT_ReAddStatistics:
return NULL; /* not real grammar */
+ case AT_ParallelDMLSafety:
+ return "PARALLEL DML SAFETY";
}
return NULL;
@@ -18773,3 +18814,59 @@ GetAttributeCompression(Oid atttypid, char *compression)
return cmethod;
}
+
+static void
+ATExecParallelDMLSafety(Relation rel, Node *def)
+{
+ Relation pg_class;
+ Oid relid;
+ HeapTuple tuple;
+ char relparallel = PROPARALLEL_DEFAULT;
+ char *parallel = strVal(def);
+
+ if (parallel)
+ {
+ if (strcmp(parallel, "safe") == 0)
+ {
+ /*
+ * We can't support table modification in a parallel worker if it's
+ * a foreign table/partition (no FDW API for supporting parallel
+ * access) or a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));
+
+ relparallel = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(parallel, "restricted") == 0)
+ relparallel = PROPARALLEL_RESTRICTED;
+ else if (strcmp(parallel, "unsafe") == 0)
+ relparallel = PROPARALLEL_UNSAFE;
+ else if (strcmp(parallel, "default") == 0)
+ relparallel = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
+ relid = RelationGetRelid(rel);
+
+ pg_class = table_open(RelationRelationId, RowExclusiveLock);
+
+ tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));
+
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", relid);
+
+ ((Form_pg_class) GETSTRUCT(tuple))->relparalleldml = relparallel;
+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);
+
+ table_close(pg_class, RowExclusiveLock);
+ heap_freetuple(tuple);
+}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 93eeff950b..a2f06c3e79 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2525,6 +2525,7 @@ DefineCompositeType(RangeVar *typevar, List *coldeflist)
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 4df05a0b33..65f33a95d8 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -227,6 +227,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
createStmt->options = options;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..df41165c5f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3534,6 +3534,7 @@ CopyCreateStmtFields(const CreateStmt *from, CreateStmt *newnode)
COPY_SCALAR_FIELD(oncommit);
COPY_STRING_FIELD(tablespacename);
COPY_STRING_FIELD(accessMethod);
+ COPY_STRING_FIELD(paralleldmlsafety);
COPY_SCALAR_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..67b1966f18 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -146,6 +146,7 @@ _equalIntoClause(const IntoClause *a, const IntoClause *b)
COMPARE_NODE_FIELD(options);
COMPARE_SCALAR_FIELD(onCommit);
COMPARE_STRING_FIELD(tableSpaceName);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_NODE_FIELD(viewQuery);
COMPARE_SCALAR_FIELD(skipData);
@@ -1292,6 +1293,7 @@ _equalCreateStmt(const CreateStmt *a, const CreateStmt *b)
COMPARE_SCALAR_FIELD(oncommit);
COMPARE_STRING_FIELD(tablespacename);
COMPARE_STRING_FIELD(accessMethod);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_SCALAR_FIELD(if_not_exists);
return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 48202d2232..fdc5b63c28 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1107,6 +1107,7 @@ _outIntoClause(StringInfo str, const IntoClause *node)
WRITE_NODE_FIELD(options);
WRITE_ENUM_FIELD(onCommit, OnCommitAction);
WRITE_STRING_FIELD(tableSpaceName);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_NODE_FIELD(viewQuery);
WRITE_BOOL_FIELD(skipData);
}
@@ -2714,6 +2715,7 @@ _outCreateStmtInfo(StringInfo str, const CreateStmt *node)
WRITE_ENUM_FIELD(oncommit, OnCommitAction);
WRITE_STRING_FIELD(tablespacename);
WRITE_STRING_FIELD(accessMethod);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_BOOL_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 77d082d8b4..ba725cb290 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -563,6 +563,7 @@ _readIntoClause(void)
READ_NODE_FIELD(options);
READ_ENUM_FIELD(onCommit, OnCommitAction);
READ_STRING_FIELD(tableSpaceName);
+ READ_STRING_FIELD(paralleldmlsafety);
READ_NODE_FIELD(viewQuery);
READ_BOOL_FIELD(skipData);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..f74a7cac60 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -609,7 +609,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <partboundspec> PartitionBoundSpec
%type <list> hash_partbound
%type <defelt> hash_partbound_elem
-
+%type <str> ParallelDMLSafety
/*
* Non-keyword token types. These are hard-wired into the "flex" lexer.
@@ -654,7 +654,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
DATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS
DEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC
- DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P
+ DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DML DO DOCUMENT_P DOMAIN_P
DOUBLE_P DROP
EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
@@ -2691,6 +2691,21 @@ alter_table_cmd:
n->subtype = AT_NoForceRowSecurity;
$$ = (Node *)n;
}
+ /* ALTER TABLE <name> PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
+ | PARALLEL DML ColId
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString($3);
+ $$ = (Node *)n;
+ }
+ | PARALLEL DML DEFAULT
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString("default");
+ $$ = (Node *)n;
+ }
| alter_generic_options
{
AlterTableCmd *n = makeNode(AlterTableCmd);
@@ -3276,7 +3291,7 @@ copy_generic_opt_arg_list_item:
CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
OptInherit OptPartitionSpec table_access_method_clause OptWith
- OnCommitOption OptTableSpace
+ OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3290,12 +3305,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $11;
n->oncommit = $12;
n->tablespacename = $13;
+ n->paralleldmlsafety = $14;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name '('
OptTableElementList ')' OptInherit OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3309,12 +3325,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $14;
n->oncommit = $15;
n->tablespacename = $16;
+ n->paralleldmlsafety = $17;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3329,12 +3346,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $10;
n->oncommit = $11;
n->tablespacename = $12;
+ n->paralleldmlsafety = $13;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3349,12 +3367,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $13;
n->oncommit = $14;
n->tablespacename = $15;
+ n->paralleldmlsafety = $16;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name
OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3369,12 +3389,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $12;
n->oncommit = $13;
n->tablespacename = $14;
+ n->paralleldmlsafety = $15;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF
qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3389,6 +3411,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $15;
n->oncommit = $16;
n->tablespacename = $17;
+ n->paralleldmlsafety = $18;
n->if_not_exists = true;
$$ = (Node *)n;
}
@@ -4089,6 +4112,11 @@ OptTableSpace: TABLESPACE name { $$ = $2; }
| /*EMPTY*/ { $$ = NULL; }
;
+ParallelDMLSafety: PARALLEL DML name { $$ = $3; }
+ | PARALLEL DML DEFAULT { $$ = pstrdup("default"); }
+ | /*EMPTY*/ { $$ = NULL; }
+ ;
+
OptConsTableSpace: USING INDEX TABLESPACE name { $$ = $4; }
| /*EMPTY*/ { $$ = NULL; }
;
@@ -4236,7 +4264,7 @@ CreateAsStmt:
create_as_target:
qualified_name opt_column_list table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
$$ = makeNode(IntoClause);
$$->rel = $1;
@@ -4245,6 +4273,7 @@ create_as_target:
$$->options = $4;
$$->onCommit = $5;
$$->tableSpaceName = $6;
+ $$->paralleldmlsafety = $7;
$$->viewQuery = NULL;
$$->skipData = false; /* might get changed later */
}
@@ -5024,7 +5053,7 @@ AlterForeignServerStmt: ALTER SERVER name foreign_server_version alter_generic_o
CreateForeignTableStmt:
CREATE FOREIGN TABLE qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5036,15 +5065,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $9;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $10;
- n->options = $11;
+ n->servername = $11;
+ n->options = $12;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5056,15 +5086,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $12;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $13;
- n->options = $14;
+ n->servername = $14;
+ n->options = $15;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5077,15 +5108,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $10;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $11;
- n->options = $12;
+ n->servername = $12;
+ n->options = $13;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5098,10 +5130,11 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $13;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $14;
- n->options = $15;
+ n->servername = $15;
+ n->options = $16;
$$ = (Node *) n;
}
;
@@ -15547,6 +15580,7 @@ unreserved_keyword:
| DICTIONARY
| DISABLE_P
| DISCARD
+ | DML
| DOCUMENT_P
| DOMAIN_P
| DOUBLE_P
@@ -16087,6 +16121,7 @@ bare_label_keyword:
| DISABLE_P
| DISCARD
| DISTINCT
+ | DML
| DO
| DOCUMENT_P
| DOMAIN_P
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..70d8ecb1dd 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -1873,6 +1873,7 @@ formrdesc(const char *relationName, Oid relationReltype,
relation->rd_rel->relkind = RELKIND_RELATION;
relation->rd_rel->relnatts = (int16) natts;
relation->rd_rel->relam = HEAP_TABLE_AM_OID;
+ relation->rd_rel->relparalleldml = PROPARALLEL_DEFAULT;
/*
* initialize attribute tuple form
@@ -3359,7 +3360,8 @@ RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind)
+ char relkind,
+ char relparalleldml)
{
Relation rel;
MemoryContext oldcxt;
@@ -3509,6 +3511,8 @@ RelationBuildLocalRelation(const char *relname,
else
rel->rd_rel->relreplident = REPLICA_IDENTITY_NOTHING;
+ rel->rd_rel->relparalleldml = relparalleldml;
+
/*
* Insert relation physical and logical identifiers (OIDs) into the right
* places. For a mapped relation, we set relfilenode to zero and rely on
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 90ac445bcd..5165202e84 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6253,6 +6253,7 @@ getTables(Archive *fout, int *numTables)
int i_relpersistence;
int i_relispopulated;
int i_relreplident;
+ int i_relparalleldml;
int i_owning_tab;
int i_owning_col;
int i_reltablespace;
@@ -6358,7 +6359,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, am.amname, "
+ "c.relreplident, c.relparalleldml, c.relpages, am.amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
"ELSE 0 END AS foreignserver, "
@@ -6450,7 +6451,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6503,7 +6504,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6556,7 +6557,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6609,7 +6610,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"c.relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6660,7 +6661,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
@@ -6708,7 +6709,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6756,7 +6757,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6803,7 +6804,7 @@ getTables(Archive *fout, int *numTables)
"0 AS toid, "
"0 AS tfrozenxid, 0 AS tminmxid,"
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6872,6 +6873,7 @@ getTables(Archive *fout, int *numTables)
i_relpersistence = PQfnumber(res, "relpersistence");
i_relispopulated = PQfnumber(res, "relispopulated");
i_relreplident = PQfnumber(res, "relreplident");
+ i_relparalleldml = PQfnumber(res, "relparalleldml");
i_relpages = PQfnumber(res, "relpages");
i_foreignserver = PQfnumber(res, "foreignserver");
i_owning_tab = PQfnumber(res, "owning_tab");
@@ -6927,6 +6929,7 @@ getTables(Archive *fout, int *numTables)
tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
tblinfo[i].relispopulated = (strcmp(PQgetvalue(res, i, i_relispopulated), "t") == 0);
tblinfo[i].relreplident = *(PQgetvalue(res, i, i_relreplident));
+ tblinfo[i].relparalleldml = *(PQgetvalue(res, i, i_relparalleldml));
tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
tblinfo[i].minmxid = atooid(PQgetvalue(res, i, i_relminmxid));
@@ -16555,6 +16558,35 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
}
}
+ if (tbinfo->relkind == RELKIND_RELATION ||
+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE ||
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+ {
+ appendPQExpBuffer(q, "\nALTER %sTABLE %s PARALLEL DML ",
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE ? "FOREIGN " : "",
+ qualrelname);
+
+ switch (tbinfo->relparalleldml)
+ {
+ case 's':
+ appendPQExpBuffer(q, "SAFE;\n");
+ break;
+ case 'r':
+ appendPQExpBuffer(q, "RESTRICTED;\n");
+ break;
+ case 'u':
+ appendPQExpBuffer(q, "UNSAFE;\n");
+ break;
+ case 'd':
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ default:
+ /* should not reach here */
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ }
+ }
+
if (tbinfo->forcerowsec)
appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n",
qualrelname);
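For illustration, with the dumpTableSchema hunk above applied, the dump of a table whose relparalleldml is 's' would contain a statement like the following (schema and relation names are made up):

```sql
ALTER TABLE public.mytab PARALLEL DML SAFE;
ALTER FOREIGN TABLE public.myft PARALLEL DML RESTRICTED;
```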
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..8175a0bc82 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -270,6 +270,7 @@ typedef struct _tableInfo
char relpersistence; /* relation persistence */
bool relispopulated; /* relation is populated */
char relreplident; /* replica identifier */
+ char relparalleldml; /* parallel safety of dml on the relation */
char *reltablespace; /* relation tablespace */
char *reloptions; /* options specified by WITH (...) */
char *checkoption; /* WITH CHECK OPTION, if any */
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 8333558bda..f896fe1793 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1656,6 +1656,7 @@ describeOneTableDetails(const char *schemaname,
char *reloftype;
char relpersistence;
char relreplident;
+ char relparalleldml;
char *relam;
} tableinfo;
bool show_column_details = false;
@@ -1669,7 +1670,25 @@ describeOneTableDetails(const char *schemaname,
initPQExpBuffer(&tmpbuf);
/* Get general table info */
- if (pset.sversion >= 120000)
+ if (pset.sversion >= 150000)
+ {
+ printfPQExpBuffer(&buf,
+ "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
+ "c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, "
+ "false AS relhasoids, c.relispartition, %s, c.reltablespace, "
+ "CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, "
+ "c.relpersistence, c.relreplident, am.amname, c.relparalleldml\n"
+ "FROM pg_catalog.pg_class c\n "
+ "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)\n"
+ "LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)\n"
+ "WHERE c.oid = '%s';",
+ (verbose ?
+ "pg_catalog.array_to_string(c.reloptions || "
+ "array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')\n"
+ : "''"),
+ oid);
+ }
+ else if (pset.sversion >= 120000)
{
printfPQExpBuffer(&buf,
"SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
@@ -1853,6 +1872,8 @@ describeOneTableDetails(const char *schemaname,
(char *) NULL : pg_strdup(PQgetvalue(res, 0, 14));
else
tableinfo.relam = NULL;
+ tableinfo.relparalleldml = (pset.sversion >= 150000) ?
+ *(PQgetvalue(res, 0, 15)) : 0;
PQclear(res);
res = NULL;
@@ -3630,6 +3651,21 @@ describeOneTableDetails(const char *schemaname,
printfPQExpBuffer(&buf, _("Access method: %s"), tableinfo.relam);
printTableAddFooter(&cont, buf.data);
}
+
+ if (verbose &&
+ (tableinfo.relkind == RELKIND_RELATION ||
+ tableinfo.relkind == RELKIND_PARTITIONED_TABLE ||
+ tableinfo.relkind == RELKIND_FOREIGN_TABLE) &&
+ tableinfo.relparalleldml != 0)
+ {
+ printfPQExpBuffer(&buf, _("Parallel DML: %s"),
+ tableinfo.relparalleldml == 'd' ? "default" :
+ tableinfo.relparalleldml == 'u' ? "unsafe" :
+ tableinfo.relparalleldml == 'r' ? "restricted" :
+ tableinfo.relparalleldml == 's' ? "safe" :
+ "???");
+ printTableAddFooter(&cont, buf.data);
+ }
}
/* reloptions, if verbose */
@@ -4005,7 +4041,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
PGresult *res;
printQueryOpt myopt = pset.popt;
int cols_so_far;
- bool translate_columns[] = {false, false, true, false, false, false, false, false, false};
+ bool translate_columns[] = {false, false, true, false, false, false, false, false, false, false};
/* If tabtypes is empty, we default to \dtvmsE (but see also command.c) */
if (!(showTables || showIndexes || showViews || showMatViews || showSeq || showForeign))
@@ -4073,22 +4109,43 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
gettext_noop("unlogged"),
gettext_noop("Persistence"));
translate_columns[cols_so_far] = true;
+ cols_so_far++;
}
- /*
- * We don't bother to count cols_so_far below here, as there's no need
- * to; this might change with future additions to the output columns.
- */
-
/*
* Access methods exist for tables, materialized views and indexes.
* This has been introduced in PostgreSQL 12 for tables.
*/
if (pset.sversion >= 120000 && !pset.hide_tableam &&
(showTables || showMatViews || showIndexes))
+ {
appendPQExpBuffer(&buf,
",\n am.amname as \"%s\"",
gettext_noop("Access method"));
+ cols_so_far++;
+ }
+
+ /*
+ * Show whether the data in the relation can be modified in parallel mode:
+ * default('d'), unsafe('u'), restricted('r'), or safe('s').
+ * This has been introduced in PostgreSQL 15 for tables.
+ */
+ if (pset.sversion >= 150000)
+ {
+ appendPQExpBuffer(&buf,
+ ",\n CASE c.relparalleldml WHEN 'd' THEN '%s' WHEN 'u' THEN '%s' WHEN 'r' THEN '%s' WHEN 's' THEN '%s' END as \"%s\"",
+ gettext_noop("default"),
+ gettext_noop("unsafe"),
+ gettext_noop("restricted"),
+ gettext_noop("safe"),
+ gettext_noop("Parallel DML"));
+ translate_columns[cols_so_far] = true;
+ }
+
+ /*
+ * We don't bother to count cols_so_far below here, as there's no need
+ * to; this might change with future additions to the output columns.
+ */
/*
* As of PostgreSQL 9.0, use pg_table_size() to show a more accurate
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index 6ce480b49c..b59975919b 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -55,6 +55,7 @@ extern Relation heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -73,6 +74,7 @@ extern Oid heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h
index fef9945ed8..244eac6bd8 100644
--- a/src/include/catalog/pg_class.h
+++ b/src/include/catalog/pg_class.h
@@ -116,6 +116,9 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat
/* see REPLICA_IDENTITY_xxx constants */
char relreplident BKI_DEFAULT(n);
+ /* parallel safety of the dml on the relation */
+ char relparalleldml BKI_DEFAULT(d);
+
/* is relation a partition? */
bool relispartition BKI_DEFAULT(f);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index b33b8b0134..cd52c0e254 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -171,6 +171,8 @@ DECLARE_UNIQUE_INDEX(pg_proc_proname_args_nsp_index, 2691, ProcedureNameArgsNspI
#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
+#define PROPARALLEL_DEFAULT 'd' /* only used for parallel dml safety */
+
/*
* Symbolic values for proargmodes column. Note that these must agree with
* the FunctionParameterMode enum in parsenodes.h; we declare them here to
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..0352e41c6e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1934,7 +1934,8 @@ typedef enum AlterTableType
AT_AddIdentity, /* ADD IDENTITY */
AT_SetIdentity, /* SET identity column options */
AT_DropIdentity, /* DROP IDENTITY */
- AT_ReAddStatistics /* internal to commands/tablecmds.c */
+ AT_ReAddStatistics, /* internal to commands/tablecmds.c */
+ AT_ParallelDMLSafety /* PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
} AlterTableType;
typedef struct ReplicaIdentityStmt
@@ -2180,6 +2181,7 @@ typedef struct CreateStmt
OnCommitAction oncommit; /* what do we do at COMMIT? */
char *tablespacename; /* table space to use, or NULL */
char *accessMethod; /* table access method */
+ char *paralleldmlsafety; /* parallel dml safety */
bool if_not_exists; /* just do nothing if it already exists? */
} CreateStmt;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index c04282f91f..6e679d9f97 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -115,6 +115,7 @@ typedef struct IntoClause
List *options; /* options from WITH clause */
OnCommitAction onCommit; /* what do we do at COMMIT? */
char *tableSpaceName; /* table space to use, or NULL */
+ char *paralleldmlsafety; /* parallel dml safety */
Node *viewQuery; /* materialized view's SELECT query */
bool skipData; /* true for WITH NO DATA */
} IntoClause;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f836acf876..05222faccd 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -139,6 +139,7 @@ PG_KEYWORD("dictionary", DICTIONARY, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("disable", DISABLE_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("discard", DISCARD, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("distinct", DISTINCT, RESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("dml", DML, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("do", DO, RESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("document", DOCUMENT_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("domain", DOMAIN_P, UNRESERVED_KEYWORD, BARE_LABEL)
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index f772855ac6..5ea225ac2d 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -108,7 +108,8 @@ extern Relation RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind);
+ char relkind,
+ char relparalleldml);
/*
* Routines to manage assignment of new relfilenode to a relation
diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
index 1bae1e5438..e1f5678eef 100644
--- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
+++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
@@ -276,6 +276,9 @@ get_altertable_subcmdtypes(PG_FUNCTION_ARGS)
case AT_NoForceRowSecurity:
strtype = "NO FORCE ROW SECURITY";
break;
+ case AT_ParallelDMLSafety:
+ strtype = "PARALLEL DML SAFETY";
+ break;
case AT_GenericOptions:
strtype = "SET OPTIONS";
break;
--
2.27.0
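The new pg_class column added by this patch can be inspected directly; a hedged sketch mirroring the CASE mapping used in the psql hunk above (requires a server with this patch applied):

```sql
-- List the parallel DML safety marking of ordinary, partitioned,
-- and foreign tables via the new relparalleldml column:
SELECT relname,
       CASE relparalleldml
            WHEN 's' THEN 'safe'
            WHEN 'r' THEN 'restricted'
            WHEN 'u' THEN 'unsafe'
            WHEN 'd' THEN 'default'
       END AS parallel_dml
FROM pg_catalog.pg_class
WHERE relkind IN ('r', 'p', 'f');
```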
Attachment: v18-0004-Cache-parallel-dml-safety.patch (application/octet-stream)
From dffebd8f53ffe275814f151ed1ff2dd4dac05707 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@fujitsu.com>
Date: Thu, 19 Aug 2021 13:48:50 +0800
Subject: [PATCH] Cache parallel dml safety
The planner is updated to perform additional parallel-safety checks for a
non-partitioned table if pg_class.relparalleldml is DEFAULT ('d'), and to cache
the parallel safety for the relation.
Whenever any function's parallel-safety is changed, invalidate the cached
parallel-safety for all relations in relcache for a particular database.
For a partitioned table, if pg_class.relparalleldml is DEFAULT ('d'), assume
that the table is UNSAFE to modify in parallel mode.
If pg_class.relparalleldml is SAFE/RESTRICTED/UNSAFE, respect the specified
parallel dml safety instead of checking it again.
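A sketch of the invalidation behavior described above, assuming a hypothetical function name:

```sql
-- Suppose a table has a constraint, index expression, or trigger that
-- calls f(); the planner may have cached the table's parallel DML
-- safety in its relcache entry. Changing f()'s parallel safety must
-- invalidate that cached flag:
ALTER FUNCTION f(integer) PARALLEL UNSAFE;
-- The next INSERT ... SELECT planning re-derives the table's safety.
```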
---
src/backend/catalog/pg_proc.c | 13 +++++
src/backend/commands/functioncmds.c | 18 ++++++-
src/backend/optimizer/util/clauses.c | 78 ++++++++++++++++++++++------
src/backend/utils/cache/inval.c | 53 +++++++++++++++++++
src/backend/utils/cache/relcache.c | 19 +++++++
src/include/storage/sinval.h | 8 +++
src/include/utils/inval.h | 2 +
src/include/utils/rel.h | 1 +
src/include/utils/relcache.h | 2 +
9 files changed, 176 insertions(+), 18 deletions(-)
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1454d2fb67..9745ee8558 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -39,6 +39,7 @@
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/regproc.h"
#include "utils/rel.h"
@@ -367,6 +368,9 @@ ProcedureCreate(const char *procedureName,
Datum proargnames;
bool isnull;
const char *dropcmd;
+ char old_proparallel;
+
+ old_proparallel = oldproc->proparallel;
if (!replace)
ereport(ERROR,
@@ -559,6 +563,15 @@ ProcedureCreate(const char *procedureName,
tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
CatalogTupleUpdate(rel, &tup->t_self, tup);
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (old_proparallel != parallel)
+ CacheInvalidateParallelDML();
+
ReleaseSysCache(oldtup);
is_update = true;
}
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 79d875ab10..57d9ca52e5 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -70,6 +70,7 @@
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@@ -1504,7 +1505,22 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
repl_val, repl_null, repl_repl);
}
if (parallel_item)
- procForm->proparallel = interpret_func_parallel(parallel_item);
+ {
+ char proparallel;
+
+ proparallel = interpret_func_parallel(parallel_item);
+
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (proparallel != procForm->proparallel)
+ CacheInvalidateParallelDML();
+
+ procForm->proparallel = proparallel;
+ }
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 749cb0dacd..5c27fc222e 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -187,7 +187,7 @@ static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
-
+static char max_parallel_dml_hazard(Query *parse, max_parallel_hazard_context *context);
/*****************************************************************************
* Aggregate-function clause manipulation
@@ -654,7 +654,6 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
- bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
@@ -664,28 +663,73 @@ max_parallel_hazard(Query *parse)
context.objects = NIL;
context.partition_directory = NULL;
- max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+ if (!max_parallel_hazard_walker((Node *) parse, &context))
+ (void) max_parallel_dml_hazard(parse, &context);
+
+ return context.max_hazard;
+}
+
+/* Check the safety of parallel data modification */
+static char
+max_parallel_dml_hazard(Query *parse,
+ max_parallel_hazard_context *context)
+{
+ RangeTblEntry *rte;
+ Relation target_rel;
+ char hazard;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return context->max_hazard;
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+ target_rel = table_open(rte->relid, NoLock);
+
+ /*
+ * If the user has set a specific parallel dml safety (safe/restricted/unsafe),
+ * respect that setting. If not set, check the safety automatically for a
+ * non-partitioned table; for a partitioned table, consider it unsafe.
+ */
+ hazard = target_rel->rd_rel->relparalleldml;
+ if (target_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+ hazard == PROPARALLEL_DEFAULT)
+ hazard = PROPARALLEL_UNSAFE;
+
+ if (hazard != PROPARALLEL_DEFAULT)
+ (void) max_parallel_hazard_test(hazard, context);
- if (!max_hazard_found &&
- IsModifySupportedInParallelMode(parse->commandType))
+ /* Do parallel safety check for the target relation */
+ else if (!target_rel->rd_paralleldml)
{
- RangeTblEntry *rte;
- Relation target_rel;
+ bool max_hazard_found;
+ char pre_max_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
- rte = rt_fetch(parse->resultRelation, parse->rtable);
+ max_hazard_found = target_rel_parallel_hazard_recurse(target_rel,
+ context,
+ false,
+ false);
- /*
- * The target table is already locked by the caller (this is done in the
- * parse/analyze phase), and remains locked until end-of-transaction.
- */
- target_rel = table_open(rte->relid, NoLock);
+ /* Cache the parallel dml safety of this relation */
+ target_rel->rd_paralleldml = context->max_hazard;
- (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
- &context);
- table_close(target_rel, NoLock);
+ if (!max_hazard_found)
+ (void) max_parallel_hazard_test(pre_max_hazard, context);
}
- return context.max_hazard;
+ /*
+ * If we already cached the parallel dml safety of this relation, we don't
+ * need to check it again.
+ */
+ else
+ (void) max_parallel_hazard_test(target_rel->rd_paralleldml, context);
+
+ table_close(target_rel, NoLock);
+
+ return context->max_hazard;
}
/*
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 9352c68090..bacb18e10e 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -478,6 +478,27 @@ AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
AddInvalidationMessage(group, RelCacheMsgs, &msg);
}
+/*
+ * Add a parallel dml inval entry
+ */
+static void
+AddParallelDMLInvalidationMessage(InvalidationMsgsGroup *group)
+{
+ SharedInvalidationMessage msg;
+
+ /* Don't add a duplicate item. */
+ ProcessMessageSubGroup(group, RelCacheMsgs,
+ if (msg->rc.id == SHAREDINVALPARALLELDML_ID)
+ return);
+
+ /* OK, add the item */
+ msg.pd.id = SHAREDINVALPARALLELDML_ID;
+ /* check AddCatcacheInvalidationMessage() for an explanation */
+ VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
+
+ AddInvalidationMessage(group, RelCacheMsgs, &msg);
+}
+
/*
* Append one group of invalidation messages to another, resetting
* the source group to empty.
@@ -576,6 +597,21 @@ RegisterRelcacheInvalidation(Oid dbId, Oid relId)
transInvalInfo->RelcacheInitFileInval = true;
}
+/*
+ * RegisterParallelDMLInvalidation
+ *
+ * As above, but register an invalidation event for the parallel dml flag in all relcache entries.
+ */
+static void
+RegisterParallelDMLInvalidation(void)
+{
+ AddParallelDMLInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs);
+
+ (void) GetCurrentCommandId(true);
+
+ transInvalInfo->RelcacheInitFileInval = true;
+}
+
/*
* RegisterSnapshotInvalidation
*
@@ -668,6 +704,11 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
else if (msg->sn.dbId == MyDatabaseId)
InvalidateCatalogSnapshot();
}
+ else if (msg->id == SHAREDINVALPARALLELDML_ID)
+ {
+ /* Invalidate the parallel dml flag in all relcache entries */
+ ParallelDMLInvalidate();
+ }
else
elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
@@ -1370,6 +1411,18 @@ CacheInvalidateRelcacheAll(void)
RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
}
+/*
+ * CacheInvalidateParallelDML
+ * Register invalidation of the whole relcache at the end of command.
+ */
+void
+CacheInvalidateParallelDML(void)
+{
+ PrepareInvalidationState();
+
+ RegisterParallelDMLInvalidation();
+}
+
/*
* CacheInvalidateRelcacheByTuple
* As above, but relation is identified by passing its pg_class tuple.
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 70d8ecb1dd..57fe97dcd4 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2934,6 +2934,25 @@ RelationCacheInvalidate(void)
list_free(rebuildList);
}
+/*
+ * ParallelDMLInvalidate
+ * Invalidate the parallel dml flag in all relcache entries.
+ */
+void
+ParallelDMLInvalidate(void)
+{
+ HASH_SEQ_STATUS status;
+ RelIdCacheEnt *idhentry;
+ Relation relation;
+
+ hash_seq_init(&status, RelationIdCache);
+
+ while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
+ {
+ relation = idhentry->reldesc;
+ relation->rd_paralleldml = 0;
+ }
+}
/*
* RelationCloseSmgrByOid - close a relcache entry's smgr link
*
diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h
index f03dc23b14..9859a3bea0 100644
--- a/src/include/storage/sinval.h
+++ b/src/include/storage/sinval.h
@@ -110,6 +110,13 @@ typedef struct
Oid relId; /* relation ID */
} SharedInvalSnapshotMsg;
+#define SHAREDINVALPARALLELDML_ID (-6)
+
+typedef struct
+{
+ int8 id; /* type field --- must be first */
+} SharedInvalParallelDMLMsg;
+
typedef union
{
int8 id; /* type field --- must be first */
@@ -119,6 +126,7 @@ typedef union
SharedInvalSmgrMsg sm;
SharedInvalRelmapMsg rm;
SharedInvalSnapshotMsg sn;
+ SharedInvalParallelDMLMsg pd;
} SharedInvalidationMessage;
diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h
index 770672890b..f1ce1462c1 100644
--- a/src/include/utils/inval.h
+++ b/src/include/utils/inval.h
@@ -64,4 +64,6 @@ extern void CallSyscacheCallbacks(int cacheid, uint32 hashvalue);
extern void InvalidateSystemCaches(void);
extern void LogLogicalInvalidations(void);
+
+extern void CacheInvalidateParallelDML(void);
#endif /* INVAL_H */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b4faa1c123..52574e9d40 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -63,6 +63,7 @@ typedef struct RelationData
bool rd_indexvalid; /* is rd_indexlist valid? (also rd_pkindex and
* rd_replidindex) */
bool rd_statvalid; /* is rd_statlist valid? */
+ char rd_paralleldml; /* parallel dml safety */
/*----------
* rd_createSubid is the ID of the highest subtransaction the rel has
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5ea225ac2d..5813aa50a0 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -128,6 +128,8 @@ extern void RelationCacheInvalidate(void);
extern void RelationCloseSmgrByOid(Oid relationId);
+extern void ParallelDMLInvalidate(void);
+
#ifdef USE_ASSERT_CHECKING
extern void AssertPendingSyncs_RelationCache(void);
#else
--
2.18.4
Attachment: v18-0002-Parallel-SELECT-for-INSERT.patch (application/octet-stream)
From 7cad3cf052856ec9f5e087f1edec1c24b920dc74 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@fujitsu.com>
Date: Mon, 31 May 2021 09:32:54 +0800
Subject: [PATCH v14 2/4] parallel-SELECT-for-INSERT
Enable parallel select for insert.
Prepare for entering parallel mode by assigning a TransactionId.
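The effect described above can be sketched as follows (table names are illustrative):

```sql
-- With this patch, the SELECT part of an INSERT ... SELECT can use a
-- parallel plan; only the leader performs the actual inserts:
EXPLAIN (COSTS OFF)
INSERT INTO target_tbl SELECT * FROM source_tbl WHERE a % 2 = 0;
-- A Gather node may now appear below the Insert node in the plan.
```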
---
src/backend/access/transam/xact.c | 26 +++++++++
src/backend/executor/execMain.c | 3 +
src/backend/optimizer/plan/planner.c | 21 +++----
src/backend/optimizer/util/clauses.c | 87 +++++++++++++++++++++++++++-
src/include/access/xact.h | 15 +++++
src/include/optimizer/clauses.h | 2 +
6 files changed, 143 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 441445927e..2d68e4633a 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -1014,6 +1014,32 @@ IsInParallelMode(void)
return CurrentTransactionState->parallelModeLevel != 0;
}
+/*
+ * PrepareParallelModePlanExec
+ *
+ * Prepare for entering parallel mode plan execution, based on command-type.
+ */
+void
+PrepareParallelModePlanExec(CmdType commandType)
+{
+ if (IsModifySupportedInParallelMode(commandType))
+ {
+ Assert(!IsInParallelMode());
+
+ /*
+ * Prepare for entering parallel mode by assigning a TransactionId.
+ * Failure to do this now would result in heap_insert() subsequently
+ * attempting to assign a TransactionId whilst in parallel-mode, which
+ * is not allowed.
+ *
+ * This approach has a disadvantage in that if the underlying SELECT
+ * does not return any rows, then the TransactionId is not used,
+ * however, that wasted assignment should be rare in practice.
+ */
+ (void) GetCurrentTransactionId();
+ }
+}
+
/*
* CommandCounterIncrement
*/
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index b3ce4bae53..ea685f0846 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1535,7 +1535,10 @@ ExecutePlan(EState *estate,
estate->es_use_parallel_mode = use_parallel_mode;
if (use_parallel_mode)
+ {
+ PrepareParallelModePlanExec(estate->es_plannedstmt->commandType);
EnterParallelMode();
+ }
/*
* Loop until we've processed the proper number of tuples from the plan.
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1868c4eff4..7736813230 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -314,16 +314,16 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
/*
* Assess whether it's feasible to use parallel mode for this query. We
* can't do this in a standalone backend, or if the command will try to
- * modify any data, or if this is a cursor operation, or if GUCs are set
- * to values that don't permit parallelism, or if parallel-unsafe
- * functions are present in the query tree.
+ * modify any data (except for Insert), or if this is a cursor operation,
+ * or if GUCs are set to values that don't permit parallelism, or if
+ * parallel-unsafe functions are present in the query tree.
*
- * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
- * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
- * backend writes into a completely new table. In the future, we can
- * extend it to allow workers to write into the table. However, to allow
- * parallel updates and deletes, we have to solve other problems,
- * especially around combo CIDs.)
+ * (Note that we do allow CREATE TABLE AS, INSERT INTO...SELECT, SELECT
+ * INTO, and CREATE MATERIALIZED VIEW to use parallel plans. However, as
+ * of now, only the leader backend writes into a completely new table. In
+ * the future, we can extend it to allow workers to write into the table.
+ * However, to allow parallel updates and deletes, we have to solve other
+ * problems, especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
@@ -332,7 +332,8 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
*/
if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
IsUnderPostmaster &&
- parse->commandType == CMD_SELECT &&
+ (parse->commandType == CMD_SELECT ||
+ is_parallel_allowed_for_modify(parse)) &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker())
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..ac0f243bf1 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -20,6 +20,8 @@
#include "postgres.h"
#include "access/htup_details.h"
+#include "access/table.h"
+#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
#include "catalog/pg_language.h"
@@ -43,6 +45,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
+#include "parser/parsetree.h"
#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
@@ -51,6 +54,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -151,6 +155,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
int nargs, List *args);
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
+static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
/*****************************************************************************
@@ -618,12 +623,34 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
+ bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
- (void) max_parallel_hazard_walker((Node *) parse, &context);
+
+ max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+
+ if (!max_hazard_found &&
+ IsModifySupportedInParallelMode(parse->commandType))
+ {
+ RangeTblEntry *rte;
+ Relation target_rel;
+
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ target_rel = table_open(rte->relid, NoLock);
+
+ (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
+ &context);
+ table_close(target_rel, NoLock);
+ }
+
return context.max_hazard;
}
@@ -857,6 +884,64 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
context);
}
+/*
+ * is_parallel_allowed_for_modify
+ *
+ * Check at a high-level if parallel mode is able to be used for the specified
+ * table-modification statement. Currently, we support only Inserts.
+ *
+ * It's not possible in the following cases:
+ *
+ * 1) INSERT...ON CONFLICT...DO UPDATE
+ * 2) INSERT without SELECT
+ *
+ * (Note: we don't do in-depth parallel-safety checks here; we do only the
+ * cheaper tests that can quickly exclude obvious cases for which
+ * parallelism isn't supported, to avoid having to do further
+ * parallel-safety checks for these.)
+ */
+bool
+is_parallel_allowed_for_modify(Query *parse)
+{
+ bool hasSubQuery;
+ RangeTblEntry *rte;
+ ListCell *lc;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return false;
+
+ /*
+ * UPDATE is not currently supported in parallel-mode, so prohibit
+ * INSERT...ON CONFLICT...DO UPDATE...
+ *
+ * In order to support update, even if only in the leader, some further
+ * work would need to be done. A mechanism would be needed for sharing
+ * combo-cids between leader and workers during parallel-mode, since for
+ * example, the leader might generate a combo-cid and it needs to be
+ * propagated to the workers.
+ */
+ if (parse->commandType == CMD_INSERT &&
+ parse->onConflict != NULL &&
+ parse->onConflict->action == ONCONFLICT_UPDATE)
+ return false;
+
+ /*
+ * If there is no underlying SELECT, a parallel insert operation is not
+ * desirable.
+ */
+ hasSubQuery = false;
+ foreach(lc, parse->rtable)
+ {
+ rte = lfirst_node(RangeTblEntry, lc);
+ if (rte->rtekind == RTE_SUBQUERY)
+ {
+ hasSubQuery = true;
+ break;
+ }
+ }
+
+ return hasSubQuery;
+}
/*****************************************************************************
* Check clauses for nonstrict functions
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 134f6862da..fd3f86bf7c 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -466,5 +466,20 @@ extern void ParsePrepareRecord(uint8 info, xl_xact_prepare *xlrec, xl_xact_parse
extern void EnterParallelMode(void);
extern void ExitParallelMode(void);
extern bool IsInParallelMode(void);
+extern void PrepareParallelModePlanExec(CmdType commandType);
+
+/*
+ * IsModifySupportedInParallelMode
+ *
+ * Indicates whether execution of the specified table-modification command
+ * (INSERT/UPDATE/DELETE) in parallel-mode is supported, subject to certain
+ * parallel-safety conditions.
+ */
+static inline bool
+IsModifySupportedInParallelMode(CmdType commandType)
+{
+ /* Currently only INSERT is supported */
+ return (commandType == CMD_INSERT);
+}
#endif /* XACT_H */
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 0673887a85..32b56565e5 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -53,4 +53,6 @@ extern void CommuteOpExpr(OpExpr *clause);
extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
+extern bool is_parallel_allowed_for_modify(Query *parse);
+
#endif /* CLAUSES_H */
--
2.27.0
Attachment: v18-0006-Workaround-for-query-rewriter-hasModifyingCTE-bug.patch (application/octet-stream)
From 0b7733c62a4bc80aab9dd36bd593982da1586429 Mon Sep 17 00:00:00 2001
From: Greg Nancarrow <gregn4422@gmail.com>
Date: Fri, 6 Aug 2021 13:39:45 +1000
Subject: [PATCH] Workaround for query rewriter bug which results in
hasModifyingCTE flag not being set.
If a query uses a modifying CTE, the hasModifyingCTE flag should be set in the
query tree, and the query will be regarded as parallel-unsafe. However, in some
cases, a re-written query with a modifying CTE does not have that flag set, due
to a bug in the query rewriter. The workaround is to update
max_parallel_hazard_walker() to detect a modifying CTE in the query and, in
that case, indicate that the query is parallel-unsafe.
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
---
src/backend/optimizer/util/clauses.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..7eb305ffda 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -758,6 +758,30 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
return true;
}
+ /*
+ * ModifyingCTE expressions are treated as parallel-unsafe.
+ *
+ * XXX Normally, if the Query has a modifying CTE, the hasModifyingCTE
+ * flag is set in the Query tree, and the query will be regarded as
+ * parallel-unsafe. However, in some cases, a re-written query with a
+ * modifying CTE does not have that flag set, due to a bug in the query
+ * rewriter. The following else-if is a workaround for this bug, to detect
+ * a modifying CTE in the query and regard it as parallel-unsafe. This
+ * comment, and the else-if block immediately below, may be removed once
+ * the bug in the query rewriter is fixed.
+ */
+ else if (IsA(node, CommonTableExpr))
+ {
+ CommonTableExpr *cte = (CommonTableExpr *) node;
+ Query *ctequery = castNode(Query, cte->ctequery);
+
+ if (ctequery->commandType != CMD_SELECT)
+ {
+ context->max_hazard = PROPARALLEL_UNSAFE;
+ return true;
+ }
+ }
+
/*
* As a notational convenience for callers, look through RestrictInfo.
*/
--
2.27.0
Attachment: v18-0003-Get-parallel-safety-functions.patch (application/octet-stream)
From d93281fdbeef47af1b16bf6803d80c18e592fc13 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Fri, 30 Jul 2021 11:50:55 +0800
Subject: [PATCH] get-parallel-safety-functions
Parallel SELECT can't be utilized for INSERT when the target table has a
parallel-unsafe trigger, index expression or predicate, column default
expression, partition key expression, or check constraint.
Provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that
returns records of (objid, classid, parallel_safety) for all
parallel unsafe/restricted table-related objects from which the
table's parallel DML safety is determined. The user can use this
information during development in order to accurately declare a
table's parallel DML safety. Or to identify any problematic objects
if a parallel DML fails or behaves unexpectedly.
When the use of an index-related parallel unsafe/restricted function
is detected, both the function oid and the index oid are returned.
Provide a utility function "pg_get_table_max_parallel_dml_hazard(regclass)" that
returns the worst parallel DML safety hazard that can be found in the
given relation. Users can use this function to do a quick check without
caring about specific parallel-related objects.
---
src/backend/optimizer/util/clauses.c | 658 ++++++++++++++++++++++++++++++++++-
src/backend/utils/adt/misc.c | 94 +++++
src/backend/utils/cache/typcache.c | 17 +
src/include/catalog/pg_proc.dat | 22 +-
src/include/optimizer/clauses.h | 14 +
src/include/utils/typcache.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
7 files changed, 803 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index ac0f243..749cb0d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -19,15 +19,20 @@
#include "postgres.h"
+#include "access/amapi.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_language.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_proc.h"
+#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
+#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
@@ -46,6 +51,7 @@
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
#include "parser/parsetree.h"
+#include "partitioning/partdesc.h"
#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
@@ -54,6 +61,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -92,6 +100,9 @@ typedef struct
char max_hazard; /* worst proparallel hazard found so far */
char max_interesting; /* worst proparallel hazard of interest */
List *safe_param_ids; /* PARAM_EXEC Param IDs to treat as safe */
+ bool check_all; /* whether to collect all the unsafe/restricted objects */
+ List *objects; /* parallel unsafe/restricted objects */
+ PartitionDirectory partition_directory; /* partition descriptors */
} max_parallel_hazard_context;
static bool contain_agg_clause_walker(Node *node, void *context);
@@ -102,6 +113,25 @@ static bool contain_volatile_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
static bool max_parallel_hazard_walker(Node *node,
max_parallel_hazard_context *context);
+static bool target_rel_parallel_hazard_recurse(Relation relation,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default);
+static bool target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context);
+static bool target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context);
+static bool target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition);
+static bool target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
static bool contain_nonstrict_functions_walker(Node *node, void *context);
static bool contain_exec_param_walker(Node *node, List *param_ids);
static bool contain_context_dependent_node(Node *clause);
@@ -156,6 +186,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
+static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
/*****************************************************************************
@@ -629,6 +660,9 @@ max_parallel_hazard(Query *parse)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
@@ -681,6 +715,9 @@ is_parallel_safe(PlannerInfo *root, Node *node)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_RESTRICTED;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
/*
* The params that refer to the same or parent query level are considered
@@ -712,7 +749,7 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
break;
case PROPARALLEL_RESTRICTED:
/* increase max_hazard to RESTRICTED */
- Assert(context->max_hazard != PROPARALLEL_UNSAFE);
+ Assert(context->check_all || context->max_hazard != PROPARALLEL_UNSAFE);
context->max_hazard = proparallel;
/* done if we are not expecting any unsafe functions */
if (context->max_interesting == proparallel)
@@ -729,6 +766,82 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
return false;
}
+/*
+ * make_safety_object
+ *
+ * Creates a safety_object, given object id, class id and parallel safety.
+ */
+static safety_object *
+make_safety_object(Oid objid, Oid classid, char proparallel)
+{
+ safety_object *object = (safety_object *) palloc(sizeof(safety_object));
+
+ object->objid = objid;
+ object->classid = classid;
+ object->proparallel = proparallel;
+
+ return object;
+}
+
+/* check_functions_in_node callback */
+static bool
+parallel_hazard_checker(Oid func_id, void *context)
+{
+ char proparallel;
+ max_parallel_hazard_context *cont = (max_parallel_hazard_context *) context;
+
+ proparallel = func_parallel(func_id);
+
+ if (max_parallel_hazard_test(proparallel, cont) && !cont->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object = make_safety_object(func_id,
+ ProcedureRelationId,
+ proparallel);
+ cont->objects = lappend(cont->objects, object);
+ }
+
+ return false;
+}
+
+/*
+ * parallel_hazard_walker
+ *
+ * Recursively search an expression tree (a partition key, index, constraint,
+ * or column default expression) for PARALLEL UNSAFE/RESTRICTED table-related
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
+{
+ if (node == NULL)
+ return false;
+
+ /* Check for hazardous functions in node itself */
+ if (check_functions_in_node(node, parallel_hazard_checker,
+ context))
+ return true;
+
+ if (IsA(node, CoerceToDomain))
+ {
+ CoerceToDomain *domain = (CoerceToDomain *) node;
+
+ if (target_rel_domain_parallel_hazard(domain->resulttype, context))
+ return true;
+ }
+
+ /* Recurse to check arguments */
+ return expression_tree_walker(node,
+ parallel_hazard_walker,
+ context);
+}
+
/* check_functions_in_node callback */
static bool
max_parallel_hazard_checker(Oid func_id, void *context)
@@ -885,6 +998,549 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * target_rel_parallel_hazard
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+List*
+target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting, char *max_hazard)
+{
+ max_parallel_hazard_context context;
+ Relation targetRel;
+
+ context.check_all = findall;
+ context.objects = NIL;
+ context.max_hazard = PROPARALLEL_SAFE;
+ context.max_interesting = max_interesting;
+ context.safe_param_ids = NIL;
+ context.partition_directory = NULL;
+
+ targetRel = table_open(relOid, AccessShareLock);
+
+ (void) target_rel_parallel_hazard_recurse(targetRel, &context, false, true);
+ if (context.partition_directory)
+ DestroyPartitionDirectory(context.partition_directory);
+
+ table_close(targetRel, AccessShareLock);
+
+ *max_hazard = context.max_hazard;
+
+ return context.objects;
+}
+
+/*
+ * target_rel_parallel_hazard_recurse
+ *
+ * Recursively search all table-related objects for PARALLEL UNSAFE/RESTRICTED
+ * objects.
+ *
+ * If context->check_all is true, then detect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_parallel_hazard_recurse(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default)
+{
+ TupleDesc tupdesc;
+ int attnum;
+
+ /*
+ * We can't support table modification in a parallel worker if it's a
+ * foreign table/partition (no FDW API for supporting parallel access) or
+ * a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ {
+ if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context) &&
+ !context->check_all)
+ return true;
+ else
+ {
+ safety_object *object = make_safety_object(rel->rd_rel->oid,
+ RelationRelationId,
+ PROPARALLEL_RESTRICTED);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /*
+ * If a partitioned table, check that each partition is safe for
+ * modification in parallel-mode.
+ */
+ if (target_rel_partitions_parallel_hazard(rel, context, is_partition))
+ return true;
+
+ /*
+ * If there are any index expressions or index predicate, check that they
+ * are parallel-mode safe.
+ */
+ if (target_rel_index_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * If any triggers exist, check that they are parallel-safe.
+ */
+ if (target_rel_trigger_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * Column default expressions are only applicable to INSERT and UPDATE.
+ * Note that even though column defaults may be specified separately for
+ * each partition in a partitioned table, a partition's default value is
+ * not applied when inserting a tuple through a partitioned table.
+ */
+
+ tupdesc = RelationGetDescr(rel);
+ for (attnum = 0; attnum < tupdesc->natts; attnum++)
+ {
+ Form_pg_attribute att = TupleDescAttr(tupdesc, attnum);
+
+ /* We don't need info for dropped or generated attributes */
+ if (att->attisdropped || att->attgenerated)
+ continue;
+
+ if (att->atthasdef && check_column_default)
+ {
+ Node *defaultexpr;
+
+ defaultexpr = build_column_default(rel, attnum + 1);
+ if (parallel_hazard_walker((Node *) defaultexpr, context))
+ return true;
+ }
+
+ /*
+ * If the column is of a DOMAIN type, determine whether that
+ * domain has any CHECK expressions that are not parallel-mode
+ * safe.
+ */
+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)
+ {
+ if (target_rel_domain_parallel_hazard(att->atttypid, context))
+ return true;
+ }
+ }
+
+ /*
+ * CHECK constraints are only applicable to INSERT and UPDATE. If any
+ * CHECK constraints exist, determine if they are parallel-safe.
+ */
+ if (target_rel_chk_constr_parallel_hazard(rel, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_trigger_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified relation's trigger data.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ char proparallel;
+
+ if (rel->trigdesc == NULL)
+ return false;
+
+ /*
+ * Care is needed here to avoid using the same relcache TriggerDesc field
+ * across other cache accesses, because relcache doesn't guarantee that it
+ * won't move.
+ */
+ for (i = 0; i < rel->trigdesc->numtriggers; i++)
+ {
+ Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;
+ Oid tgoid = rel->trigdesc->triggers[i].tgoid;
+
+ proparallel = func_parallel(tgfoid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object,
+ *parent_object;
+
+ object = make_safety_object(tgfoid, ProcedureRelationId,
+ proparallel);
+ parent_object = make_safety_object(tgoid, TriggerRelationId,
+ proparallel);
+
+ context->objects = lappend(context->objects, object);
+ context->objects = lappend(context->objects, parent_object);
+ }
+ }
+
+ return false;
+}
+
+/*
+ * index_expr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the input index expression and index predicate.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ Form_pg_index indexStruct;
+ ListCell *index_expr_item;
+
+ indexStruct = index_rel->rd_index;
+ index_expr_item = list_head(ii_Expressions);
+
+ /* Check parallel-safety of index expression */
+ for (i = 0; i < indexStruct->indnatts; i++)
+ {
+ int keycol = indexStruct->indkey.values[i];
+
+ if (keycol == 0)
+ {
+ /* Found an index expression */
+ Node *index_expr;
+
+ Assert(index_expr_item != NULL);
+ if (index_expr_item == NULL) /* shouldn't happen */
+ elog(ERROR, "too few entries in indexprs list");
+
+ index_expr = (Node *) lfirst(index_expr_item);
+
+ if (parallel_hazard_walker(index_expr, context))
+ return true;
+
+ index_expr_item = lnext(ii_Expressions, index_expr_item);
+ }
+ }
+
+ /* Check parallel-safety of index predicate */
+ if (parallel_hazard_walker((Node *) ii_Predicate, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_index_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any existing index expressions or index predicate of a specified
+ * relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ List *index_oid_list;
+ ListCell *lc;
+ LOCKMODE lockmode = AccessShareLock;
+ bool max_hazard_found;
+
+ index_oid_list = RelationGetIndexList(rel);
+ foreach(lc, index_oid_list)
+ {
+ Relation index_rel;
+ List *ii_Expressions;
+ List *ii_Predicate;
+ List *temp_objects;
+ char temp_hazard;
+ Oid index_oid = lfirst_oid(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ index_rel = index_open(index_oid, lockmode);
+
+ /* Check index expression */
+ ii_Expressions = RelationGetIndexExpressions(index_rel);
+ ii_Predicate = RelationGetIndexPredicate(index_rel);
+
+ max_hazard_found = index_expr_parallel_hazard(index_rel,
+ ii_Expressions,
+ ii_Predicate,
+ context);
+
+ index_close(index_rel, lockmode);
+
+ if (max_hazard_found)
+ return true;
+
+ /* Add the index itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+
+ object = make_safety_object(index_oid, IndexRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ list_free(index_oid_list);
+
+ return false;
+}
+
+/*
+ * target_rel_domain_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified DOMAIN type. Only CHECK expressions are
+ * examined for parallel-safety.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context)
+{
+ ListCell *lc;
+ List *domain_list;
+ List *temp_objects;
+ char temp_hazard;
+
+ domain_list = GetDomainConstraints(typid);
+
+ foreach(lc, domain_list)
+ {
+ DomainConstraintState *r = (DomainConstraintState *) lfirst(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) r->check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+ Oid constr_oid = get_domain_constraint_oid(typid,
+ r->name,
+ false);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+
+}
+
+/*
+ * target_rel_partitions_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any partitions of a specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition)
+{
+ int i;
+ PartitionDesc pdesc;
+ PartitionKey pkey;
+ ListCell *partexprs_item;
+ int partnatts;
+ List *partexprs,
+ *qual;
+
+ /*
+ * The partition check expression is composed from the parent table's
+ * partition key expression, so we do not need to check it again for a
+ * partition: the parallel safety of the parent table's partition key
+ * expression has already been checked.
+ */
+ if (!is_partition)
+ {
+ qual = RelationGetPartitionQual(rel);
+ if (parallel_hazard_walker((Node *) qual, context))
+ return true;
+ }
+
+ if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ return false;
+
+ pkey = RelationGetPartitionKey(rel);
+
+ partnatts = get_partition_natts(pkey);
+ partexprs = get_partition_exprs(pkey);
+
+ partexprs_item = list_head(partexprs);
+ for (i = 0; i < partnatts; i++)
+ {
+ Oid funcOid = pkey->partsupfunc[i].fn_oid;
+
+ if (OidIsValid(funcOid))
+ {
+ char proparallel = func_parallel(funcOid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object;
+
+ object = make_safety_object(funcOid, ProcedureRelationId,
+ proparallel);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /* Check parallel-safety of any expressions in the partition key */
+ if (get_partition_col_attnum(pkey, i) == 0)
+ {
+ Node *check_expr = (Node *) lfirst(partexprs_item);
+
+ if (parallel_hazard_walker(check_expr, context))
+ return true;
+
+ partexprs_item = lnext(partexprs, partexprs_item);
+ }
+ }
+
+ /* Recursively check each partition ... */
+
+ /* Create the PartitionDirectory infrastructure if we didn't already */
+ if (context->partition_directory == NULL)
+ context->partition_directory =
+ CreatePartitionDirectory(CurrentMemoryContext, false);
+
+ pdesc = PartitionDirectoryLookup(context->partition_directory, rel);
+
+ for (i = 0; i < pdesc->nparts; i++)
+ {
+ Relation part_rel;
+ bool max_hazard_found;
+
+ part_rel = table_open(pdesc->oids[i], AccessShareLock);
+ max_hazard_found = target_rel_parallel_hazard_recurse(part_rel,
+ context,
+ true,
+ false);
+ table_close(part_rel, AccessShareLock);
+
+ if (max_hazard_found)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_chk_constr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any CHECK expressions or CHECK constraints related to the
+ * specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ char temp_hazard;
+ int i;
+ TupleDesc tupdesc;
+ List *temp_objects;
+ ConstrCheck *check;
+
+ tupdesc = RelationGetDescr(rel);
+
+ if (tupdesc->constr == NULL)
+ return false;
+
+ check = tupdesc->constr->check;
+
+ /*
+ * Determine if there are any CHECK constraints which are not
+ * parallel-safe.
+ */
+ for (i = 0; i < tupdesc->constr->num_check; i++)
+ {
+ Expr *check_expr = stringToNode(check[i].ccbin);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ if (context->objects != NIL)
+ {
+ Oid constr_oid;
+ safety_object *object;
+
+ constr_oid = get_relation_constraint_oid(RelationGetRelid(rel),
+ check[i].ccname,
+ true);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
* is_parallel_allowed_for_modify
*
* Check at a high-level if parallel mode is able to be used for the specified
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 88faf4d..06d859c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -23,6 +23,8 @@
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_type.h"
#include "catalog/system_fk_info.h"
@@ -31,6 +33,7 @@
#include "common/keywords.h"
#include "funcapi.h"
#include "miscadmin.h"
+#include "optimizer/clauses.h"
#include "parser/scansup.h"
#include "pgstat.h"
#include "postmaster/syslogger.h"
@@ -43,6 +46,7 @@
#include "utils/lsyscache.h"
#include "utils/ruleutils.h"
#include "utils/timestamp.h"
+#include "utils/varlena.h"
/*
* Common subroutine for num_nulls() and num_nonnulls().
@@ -605,6 +609,96 @@ pg_collation_for(PG_FUNCTION_ARGS)
PG_RETURN_TEXT_P(cstring_to_text(generate_collation_name(collid)));
}
+/*
+ * Find the worst parallel-hazard level in the given relation
+ *
+ * Returns the worst parallel hazard level (the earliest in this list:
+ * PROPARALLEL_UNSAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_SAFE) that can
+ * be found in the given relation.
+ */
+Datum
+pg_get_table_max_parallel_dml_hazard(PG_FUNCTION_ARGS)
+{
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ (void) target_rel_parallel_hazard(relOid, false,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+
+ PG_RETURN_CHAR(max_parallel_hazard);
+}
+
+/*
+ * Determine whether parallel data modification of the target relation is safe.
+ *
+ * Returns all the PARALLEL RESTRICTED/UNSAFE objects involved in that
+ * determination.
+ */
+Datum
+pg_get_table_parallel_dml_safety(PG_FUNCTION_ARGS)
+{
+#define PG_GET_PARALLEL_SAFETY_COLS 3
+ List *objects;
+ ListCell *object;
+ TupleDesc tupdesc;
+ Tuplestorestate *tupstore;
+ MemoryContext per_query_ctx;
+ MemoryContext oldcontext;
+ ReturnSetInfo *rsinfo;
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+
+ /* check to see if caller supports us returning a tuplestore */
+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ if (!(rsinfo->allowedModes & SFRM_Materialize))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("materialize mode required, but it is not allowed in this context")));
+
+ /* Build a tuple descriptor for our result type */
+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ elog(ERROR, "return type must be a row type");
+
+ per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+ tupstore = tuplestore_begin_heap(true, false, work_mem);
+ rsinfo->returnMode = SFRM_Materialize;
+ rsinfo->setResult = tupstore;
+ rsinfo->setDesc = tupdesc;
+
+ MemoryContextSwitchTo(oldcontext);
+
+ objects = target_rel_parallel_hazard(relOid, true,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+ foreach(object, objects)
+ {
+ Datum values[PG_GET_PARALLEL_SAFETY_COLS];
+ bool nulls[PG_GET_PARALLEL_SAFETY_COLS];
+ safety_object *sobject = (safety_object *) lfirst(object);
+
+ memset(nulls, 0, sizeof(nulls));
+
+ values[0] = ObjectIdGetDatum(sobject->objid);
+ values[1] = ObjectIdGetDatum(sobject->classid);
+ values[2] = CharGetDatum(sobject->proparallel);
+
+ tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ }
+
+ /* clean up and return the tuplestore */
+ tuplestore_donestoring(tupstore);
+
+ return (Datum) 0;
+}
+
/*
* pg_relation_is_updatable - determine which update events the specified
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 326fae6..02a8f70 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -2535,6 +2535,23 @@ compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2)
}
/*
+ * GetDomainConstraints --- get DomainConstraintState list of specified domain type
+ */
+List *
+GetDomainConstraints(Oid type_id)
+{
+ TypeCacheEntry *typentry;
+ List *constraints = NIL;
+
+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
+
+ if (typentry->domainData != NULL)
+ constraints = typentry->domainData->constraints;
+
+ return constraints;
+}
+
+/*
* Load (or re-load) the enumData member of the typcache entry.
*/
static void
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252..4483cd1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3770,6 +3770,20 @@
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
+{ oid => '6122',
+ descr => 'parallel unsafe/restricted objects in the target relation',
+ proname => 'pg_get_table_parallel_dml_safety', prorows => '100',
+ proretset => 't', provolatile => 'v', proparallel => 'u',
+ prorettype => 'record', proargtypes => 'regclass',
+ proallargtypes => '{regclass,oid,oid,char}',
+ proargmodes => '{i,o,o,o}',
+ proargnames => '{table_name,objid,classid,proparallel}',
+ prosrc => 'pg_get_table_parallel_dml_safety' },
+
+{ oid => '6123',
+  descr => 'worst parallel-hazard level in the given relation for DML',
+  proname => 'pg_get_table_max_parallel_dml_hazard', provolatile => 'v',
+  proparallel => 'u', prorettype => 'char', proargtypes => 'regclass',
+  prosrc => 'pg_get_table_max_parallel_dml_hazard' },
+
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
@@ -3777,11 +3791,11 @@
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_ins' },
+ proname => 'RI_FKey_check_ins', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_upd' },
+ proname => 'RI_FKey_check_upd', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 32b5656..f8b2a72 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -23,6 +23,17 @@ typedef struct
List **windowFuncs; /* lists of WindowFuncs for each winref */
} WindowFuncLists;
+/*
+ * Information about a table-related object which could affect the safety of
+ * parallel data modification on table.
+ */
+typedef struct safety_object
+{
+ Oid objid; /* OID of object itself */
+ Oid classid; /* OID of its catalog */
+ char proparallel; /* parallel safety of the object */
+} safety_object;
+
extern bool contain_agg_clause(Node *clause);
extern bool contain_window_function(Node *clause);
@@ -54,5 +65,8 @@ extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
extern bool is_parallel_allowed_for_modify(Query *parse);
+extern List *target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting,
+ char *max_hazard);
#endif /* CLAUSES_H */
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index 1d68a9a..28ca7d8 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -199,6 +199,8 @@ extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod);
extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2);
+extern List *GetDomainConstraints(Oid type_id);
+
extern size_t SharedRecordTypmodRegistryEstimate(void);
extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37cf4b2..307bb97 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3491,6 +3491,7 @@ rm_detail_t
role_auth_extra
row_security_policy_hook_type
rsv_callback
+safety_object
saophash_hash
save_buffer
scram_state
--
2.7.2.windows.1
From: Hou Zhijie <houzj.fnst@fujitsu.com>, Wednesday, September 1, 2021 5:24 PM
On Thursday, August 19, 2021 4:16 PM Hou Zhijie <houzj.fnst@fujitsu.com> wrote:
On Fri, Aug 6, 2021 4:23 PM Hou zhijie <houzj.fnst@fujitsu.com> wrote:
Update the commit message in patches to make it easier for others to
review.
CFbot reported a compile error due to recent commit 3aafc03.
Attached rebased patches which fix the error.

The patch no longer applies to the HEAD branch due to a recent commit.
Attached rebased patches.
In the past, the rewriter could generate a rewritten query with a modifying
CTE that did not have the hasModifyingCTE flag set. This bug caused a
regression test (force_parallel_mode=regress) failure when parallel SELECT
for INSERT was enabled, so we had a workaround for it in 0006.patch. Now
that the bug has been fixed in commit 362e2d, we no longer need the
workaround patch.
Attached a new version of the patch set with the workaround patch removed.
Best regards,
Hou zj
Attachments:
v19-0002-Parallel-SELECT-for-INSERT.patch
From 7cad3cf052856ec9f5e087f1edec1c24b920dc74 Mon Sep 17 00:00:00 2001
From: houzj <houzj.fnst@fujitsu.com>
Date: Mon, 31 May 2021 09:32:54 +0800
Subject: [PATCH v14 2/4] parallel-SELECT-for-INSERT
Enable parallel select for insert.
Prepare for entering parallel mode by assigning a TransactionId.
---
src/backend/access/transam/xact.c | 26 +++++++++
src/backend/executor/execMain.c | 3 +
src/backend/optimizer/plan/planner.c | 21 +++----
src/backend/optimizer/util/clauses.c | 87 +++++++++++++++++++++++++++-
src/include/access/xact.h | 15 +++++
src/include/optimizer/clauses.h | 2 +
6 files changed, 143 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 441445927e..2d68e4633a 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -1014,6 +1014,32 @@ IsInParallelMode(void)
return CurrentTransactionState->parallelModeLevel != 0;
}
+/*
+ * PrepareParallelModePlanExec
+ *
+ * Prepare for entering parallel mode plan execution, based on command-type.
+ */
+void
+PrepareParallelModePlanExec(CmdType commandType)
+{
+ if (IsModifySupportedInParallelMode(commandType))
+ {
+ Assert(!IsInParallelMode());
+
+ /*
+ * Prepare for entering parallel mode by assigning a TransactionId.
+ * Failure to do this now would result in heap_insert() subsequently
+ * attempting to assign a TransactionId whilst in parallel-mode, which
+ * is not allowed.
+ *
+ * This approach has a disadvantage in that if the underlying SELECT
+ * does not return any rows, then the assigned TransactionId is not
+ * used; however, that is unlikely to matter in practice.
+ */
+ (void) GetCurrentTransactionId();
+ }
+}
+
/*
* CommandCounterIncrement
*/
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index b3ce4bae53..ea685f0846 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1535,7 +1535,10 @@ ExecutePlan(EState *estate,
estate->es_use_parallel_mode = use_parallel_mode;
if (use_parallel_mode)
+ {
+ PrepareParallelModePlanExec(estate->es_plannedstmt->commandType);
EnterParallelMode();
+ }
/*
* Loop until we've processed the proper number of tuples from the plan.
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 1868c4eff4..7736813230 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -314,16 +314,16 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
/*
* Assess whether it's feasible to use parallel mode for this query. We
* can't do this in a standalone backend, or if the command will try to
- * modify any data, or if this is a cursor operation, or if GUCs are set
- * to values that don't permit parallelism, or if parallel-unsafe
- * functions are present in the query tree.
+ * modify any data (except for INSERT), or if this is a cursor operation,
+ * or if GUCs are set to values that don't permit parallelism, or if
+ * parallel-unsafe functions are present in the query tree.
*
- * (Note that we do allow CREATE TABLE AS, SELECT INTO, and CREATE
- * MATERIALIZED VIEW to use parallel plans, but as of now, only the leader
- * backend writes into a completely new table. In the future, we can
- * extend it to allow workers to write into the table. However, to allow
- * parallel updates and deletes, we have to solve other problems,
- * especially around combo CIDs.)
+ * (Note that we do allow CREATE TABLE AS, INSERT INTO...SELECT, SELECT
+ * INTO, and CREATE MATERIALIZED VIEW to use parallel plans. However, as
+ * of now, only the leader backend writes into a completely new table. In
+ * the future, we can extend it to allow workers to write into the table.
+ * However, to allow parallel updates and deletes, we have to solve other
+ * problems, especially around combo CIDs.)
*
* For now, we don't try to use parallel mode if we're running inside a
* parallel worker. We might eventually be able to relax this
@@ -332,7 +332,8 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
*/
if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
IsUnderPostmaster &&
- parse->commandType == CMD_SELECT &&
+ (parse->commandType == CMD_SELECT ||
+ is_parallel_allowed_for_modify(parse)) &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker())
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7187f17da5..ac0f243bf1 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -20,6 +20,8 @@
#include "postgres.h"
#include "access/htup_details.h"
+#include "access/table.h"
+#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
#include "catalog/pg_language.h"
@@ -43,6 +45,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
+#include "parser/parsetree.h"
#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
@@ -51,6 +54,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -151,6 +155,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
int nargs, List *args);
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
+static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
/*****************************************************************************
@@ -618,12 +623,34 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
+ bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
- (void) max_parallel_hazard_walker((Node *) parse, &context);
+
+ max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+
+ if (!max_hazard_found &&
+ IsModifySupportedInParallelMode(parse->commandType))
+ {
+ RangeTblEntry *rte;
+ Relation target_rel;
+
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ target_rel = table_open(rte->relid, NoLock);
+
+ (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
+ &context);
+ table_close(target_rel, NoLock);
+ }
+
return context.max_hazard;
}
@@ -857,6 +884,64 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
context);
}
+/*
+ * is_parallel_allowed_for_modify
+ *
+ * Check at a high-level if parallel mode is able to be used for the specified
+ * table-modification statement. Currently, we support only Inserts.
+ *
+ * It's not possible in the following cases:
+ *
+ * 1) INSERT...ON CONFLICT...DO UPDATE
+ * 2) INSERT without SELECT
+ *
+ * (Note: we don't do in-depth parallel-safety checks here; we only do the
+ * cheaper tests that can quickly exclude obvious cases for which
+ * parallelism isn't supported, to avoid having to do further parallel-safety
+ * checks for these)
+ */
+bool
+is_parallel_allowed_for_modify(Query *parse)
+{
+ bool hasSubQuery;
+ RangeTblEntry *rte;
+ ListCell *lc;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return false;
+
+ /*
+ * UPDATE is not currently supported in parallel-mode, so prohibit
+ * INSERT...ON CONFLICT...DO UPDATE...
+ *
+ * In order to support update, even if only in the leader, some further
+ * work would need to be done. A mechanism would be needed for sharing
+ * combo-cids between leader and workers during parallel-mode, since for
+ * example, the leader might generate a combo-cid and it needs to be
+ * propagated to the workers.
+ */
+ if (parse->commandType == CMD_INSERT &&
+ parse->onConflict != NULL &&
+ parse->onConflict->action == ONCONFLICT_UPDATE)
+ return false;
+
+ /*
+ * If there is no underlying SELECT, a parallel insert operation is not
+ * desirable.
+ */
+ hasSubQuery = false;
+ foreach(lc, parse->rtable)
+ {
+ rte = lfirst_node(RangeTblEntry, lc);
+ if (rte->rtekind == RTE_SUBQUERY)
+ {
+ hasSubQuery = true;
+ break;
+ }
+ }
+
+ return hasSubQuery;
+}
/*****************************************************************************
* Check clauses for nonstrict functions
diff --git a/src/include/access/xact.h b/src/include/access/xact.h
index 134f6862da..fd3f86bf7c 100644
--- a/src/include/access/xact.h
+++ b/src/include/access/xact.h
@@ -466,5 +466,20 @@ extern void ParsePrepareRecord(uint8 info, xl_xact_prepare *xlrec, xl_xact_parse
extern void EnterParallelMode(void);
extern void ExitParallelMode(void);
extern bool IsInParallelMode(void);
+extern void PrepareParallelModePlanExec(CmdType commandType);
+
+/*
+ * IsModifySupportedInParallelMode
+ *
+ * Indicates whether execution of the specified table-modification command
+ * (INSERT/UPDATE/DELETE) in parallel-mode is supported, subject to certain
+ * parallel-safety conditions.
+ */
+static inline bool
+IsModifySupportedInParallelMode(CmdType commandType)
+{
+ /* Currently only INSERT is supported */
+ return (commandType == CMD_INSERT);
+}
#endif /* XACT_H */
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 0673887a85..32b56565e5 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -53,4 +53,6 @@ extern void CommuteOpExpr(OpExpr *clause);
extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
+extern bool is_parallel_allowed_for_modify(Query *parse);
+
#endif /* CLAUSES_H */
--
2.27.0
v19-0003-Get-parallel-safety-functions.patch
From d93281fdbeef47af1b16bf6803d80c18e592fc13 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@cn.fujitsu.com>
Date: Fri, 30 Jul 2021 11:50:55 +0800
Subject: [PATCH] get-parallel-safety-functions
Parallel SELECT can't be utilized for INSERT when the target table has a
parallel-unsafe trigger, index expression or predicate, column default
expression, partition key expression, or check constraint.
Provide a utility function "pg_get_table_parallel_dml_safety(regclass)" that
returns records of (objid, classid, parallel_safety) for all parallel
unsafe/restricted table-related objects from which the table's parallel DML
safety is determined. The user can use this information during development to
accurately declare a table's parallel DML safety, or to identify any
problematic objects if a parallel DML operation fails or behaves unexpectedly.
When the use of an index-related parallel unsafe/restricted function
is detected, both the function oid and the index oid are returned.
Provide a utility function "pg_get_table_max_parallel_dml_hazard(regclass)" that
returns the worst parallel DML safety hazard that can be found in the
given relation. Users can use this function to do a quick check without
caring about specific parallel-related objects.
---
src/backend/optimizer/util/clauses.c | 658 ++++++++++++++++++++++++++++++++++-
src/backend/utils/adt/misc.c | 94 +++++
src/backend/utils/cache/typcache.c | 17 +
src/include/catalog/pg_proc.dat | 22 +-
src/include/optimizer/clauses.h | 14 +
src/include/utils/typcache.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
7 files changed, 803 insertions(+), 5 deletions(-)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index ac0f243..749cb0d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -19,15 +19,20 @@
#include "postgres.h"
+#include "access/amapi.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/pg_aggregate.h"
#include "catalog/pg_class.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_language.h"
#include "catalog/pg_operator.h"
#include "catalog/pg_proc.h"
+#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
+#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/functions.h"
#include "funcapi.h"
@@ -46,6 +51,8 @@
#include "parser/parse_coerce.h"
#include "parser/parse_func.h"
#include "parser/parsetree.h"
+#include "partitioning/partdesc.h"
#include "rewrite/rewriteHandler.h"
#include "rewrite/rewriteManip.h"
#include "tcop/tcopprot.h"
@@ -54,6 +61,7 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
+#include "utils/partcache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
#include "utils/typcache.h"
@@ -92,6 +100,9 @@ typedef struct
char max_hazard; /* worst proparallel hazard found so far */
char max_interesting; /* worst proparallel hazard of interest */
List *safe_param_ids; /* PARAM_EXEC Param IDs to treat as safe */
+ bool check_all; /* whether to collect all the unsafe/restricted objects */
+ List *objects; /* parallel unsafe/restricted objects */
+ PartitionDirectory partition_directory; /* partition descriptors */
} max_parallel_hazard_context;
static bool contain_agg_clause_walker(Node *node, void *context);
@@ -102,6 +113,25 @@ static bool contain_volatile_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_not_nextval_walker(Node *node, void *context);
static bool max_parallel_hazard_walker(Node *node,
max_parallel_hazard_context *context);
+static bool target_rel_parallel_hazard_recurse(Relation relation,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default);
+static bool target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context);
+static bool target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
+static bool target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context);
+static bool target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition);
+static bool target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context);
static bool contain_nonstrict_functions_walker(Node *node, void *context);
static bool contain_exec_param_walker(Node *node, List *param_ids);
static bool contain_context_dependent_node(Node *clause);
@@ -156,6 +186,7 @@ static Query *substitute_actual_srf_parameters(Query *expr,
static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
+static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
/*****************************************************************************
@@ -629,6 +660,9 @@ max_parallel_hazard(Query *parse)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_UNSAFE;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
@@ -681,6 +715,9 @@ is_parallel_safe(PlannerInfo *root, Node *node)
context.max_hazard = PROPARALLEL_SAFE;
context.max_interesting = PROPARALLEL_RESTRICTED;
context.safe_param_ids = NIL;
+ context.check_all = false;
+ context.objects = NIL;
+ context.partition_directory = NULL;
/*
* The params that refer to the same or parent query level are considered
@@ -712,7 +749,7 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
break;
case PROPARALLEL_RESTRICTED:
/* increase max_hazard to RESTRICTED */
- Assert(context->max_hazard != PROPARALLEL_UNSAFE);
+ Assert(context->check_all || context->max_hazard != PROPARALLEL_UNSAFE);
context->max_hazard = proparallel;
/* done if we are not expecting any unsafe functions */
if (context->max_interesting == proparallel)
@@ -729,6 +766,82 @@ max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context)
return false;
}
+/*
+ * make_safety_object
+ *
+ * Creates a safety_object, given object id, class id and parallel safety.
+ */
+static safety_object *
+make_safety_object(Oid objid, Oid classid, char proparallel)
+{
+ safety_object *object = (safety_object *) palloc(sizeof(safety_object));
+
+ object->objid = objid;
+ object->classid = classid;
+ object->proparallel = proparallel;
+
+ return object;
+}
+
+/* check_functions_in_node callback */
+static bool
+parallel_hazard_checker(Oid func_id, void *context)
+{
+ char proparallel;
+ max_parallel_hazard_context *cont = (max_parallel_hazard_context *) context;
+
+ proparallel = func_parallel(func_id);
+
+ if (max_parallel_hazard_test(proparallel, cont) && !cont->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object = make_safety_object(func_id,
+ ProcedureRelationId,
+ proparallel);
+ cont->objects = lappend(cont->objects, object);
+ }
+
+ return false;
+}
+
+/*
+ * parallel_hazard_walker
+ *
+ * Recursively search an expression tree (a partition-key, index, constraint,
+ * or column-default expression) for PARALLEL UNSAFE/RESTRICTED table-related
+ * objects.
+ *
+ * If context->check_all is true, then collect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
+{
+ if (node == NULL)
+ return false;
+
+ /* Check for hazardous functions in node itself */
+ if (check_functions_in_node(node, parallel_hazard_checker,
+ context))
+ return true;
+
+ if (IsA(node, CoerceToDomain))
+ {
+ CoerceToDomain *domain = (CoerceToDomain *) node;
+
+ if (target_rel_domain_parallel_hazard(domain->resulttype, context))
+ return true;
+ }
+
+ /* Recurse to check arguments */
+ return expression_tree_walker(node,
+ parallel_hazard_walker,
+ context);
+}
+
/* check_functions_in_node callback */
static bool
max_parallel_hazard_checker(Oid func_id, void *context)
@@ -885,6 +998,549 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
}
/*
+ * target_rel_parallel_hazard
+ *
+ * If context->check_all is true, then collect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+List*
+target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting, char *max_hazard)
+{
+ max_parallel_hazard_context context;
+ Relation targetRel;
+
+ context.check_all = findall;
+ context.objects = NIL;
+ context.max_hazard = PROPARALLEL_SAFE;
+ context.max_interesting = max_interesting;
+ context.safe_param_ids = NIL;
+ context.partition_directory = NULL;
+
+ targetRel = table_open(relOid, AccessShareLock);
+
+ (void) target_rel_parallel_hazard_recurse(targetRel, &context, false, true);
+ if (context.partition_directory)
+ DestroyPartitionDirectory(context.partition_directory);
+
+ table_close(targetRel, AccessShareLock);
+
+ *max_hazard = context.max_hazard;
+
+ return context.objects;
+}
+
+/*
+ * target_rel_parallel_hazard_recurse
+ *
+ * Recursively search all table-related objects for PARALLEL UNSAFE/RESTRICTED
+ * objects.
+ *
+ * If context->check_all is true, then collect all PARALLEL UNSAFE/RESTRICTED
+ * table-related objects.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_parallel_hazard_recurse(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition,
+ bool check_column_default)
+{
+ TupleDesc tupdesc;
+ int attnum;
+
+ /*
+ * We can't support table modification in a parallel worker if it's a
+ * foreign table/partition (no FDW API for supporting parallel access) or
+ * a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ {
+ if (max_parallel_hazard_test(PROPARALLEL_RESTRICTED, context) &&
+ !context->check_all)
+ return true;
+ else
+ {
+ safety_object *object = make_safety_object(rel->rd_rel->oid,
+ RelationRelationId,
+ PROPARALLEL_RESTRICTED);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /*
+ * If a partitioned table, check that each partition is safe for
+ * modification in parallel-mode.
+ */
+ if (target_rel_partitions_parallel_hazard(rel, context, is_partition))
+ return true;
+
+ /*
+ * If there are any index expressions or index predicate, check that they
+ * are parallel-mode safe.
+ */
+ if (target_rel_index_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * If any triggers exist, check that they are parallel-safe.
+ */
+ if (target_rel_trigger_parallel_hazard(rel, context))
+ return true;
+
+ /*
+ * Column default expressions are only applicable to INSERT and UPDATE.
+ * Note that even though column defaults may be specified separately for
+ * each partition in a partitioned table, a partition's default value is
+ * not applied when inserting a tuple through a partitioned table.
+ */
+
+ tupdesc = RelationGetDescr(rel);
+ for (attnum = 0; attnum < tupdesc->natts; attnum++)
+ {
+ Form_pg_attribute att = TupleDescAttr(tupdesc, attnum);
+
+ /* We don't need info for dropped or generated attributes */
+ if (att->attisdropped || att->attgenerated)
+ continue;
+
+ if (att->atthasdef && check_column_default)
+ {
+ Node *defaultexpr;
+
+ defaultexpr = build_column_default(rel, attnum + 1);
+ if (parallel_hazard_walker((Node *) defaultexpr, context))
+ return true;
+ }
+
+ /*
+ * If the column is of a DOMAIN type, determine whether that
+ * domain has any CHECK expressions that are not parallel-mode
+ * safe.
+ */
+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)
+ {
+ if (target_rel_domain_parallel_hazard(att->atttypid, context))
+ return true;
+ }
+ }
+
+ /*
+ * CHECK constraints are only applicable to INSERT and UPDATE. If any
+ * CHECK constraints exist, determine if they are parallel-safe.
+ */
+ if (target_rel_chk_constr_parallel_hazard(rel, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_trigger_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified relation's trigger data.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_trigger_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ char proparallel;
+
+ if (rel->trigdesc == NULL)
+ return false;
+
+ /*
+ * Care is needed here to avoid using the same relcache TriggerDesc field
+ * across other cache accesses, because relcache doesn't guarantee that it
+ * won't move.
+ */
+ for (i = 0; i < rel->trigdesc->numtriggers; i++)
+ {
+ Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;
+ Oid tgoid = rel->trigdesc->triggers[i].tgoid;
+
+ proparallel = func_parallel(tgfoid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object,
+ *parent_object;
+
+ object = make_safety_object(tgfoid, ProcedureRelationId,
+ proparallel);
+ parent_object = make_safety_object(tgoid, TriggerRelationId,
+ proparallel);
+
+ context->objects = lappend(context->objects, object);
+ context->objects = lappend(context->objects, parent_object);
+ }
+ }
+
+ return false;
+}
+
+/*
+ * index_expr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the input index expression and index predicate.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+index_expr_parallel_hazard(Relation index_rel,
+ List *ii_Expressions,
+ List *ii_Predicate,
+ max_parallel_hazard_context *context)
+{
+ int i;
+ Form_pg_index indexStruct;
+ ListCell *index_expr_item;
+
+ indexStruct = index_rel->rd_index;
+ index_expr_item = list_head(ii_Expressions);
+
+ /* Check parallel-safety of index expression */
+ for (i = 0; i < indexStruct->indnatts; i++)
+ {
+ int keycol = indexStruct->indkey.values[i];
+
+ if (keycol == 0)
+ {
+ /* Found an index expression */
+ Node *index_expr;
+
+ if (index_expr_item == NULL) /* shouldn't happen */
+ elog(ERROR, "too few entries in indexprs list");
+
+ index_expr = (Node *) lfirst(index_expr_item);
+
+ if (parallel_hazard_walker(index_expr, context))
+ return true;
+
+ index_expr_item = lnext(ii_Expressions, index_expr_item);
+ }
+ }
+
+ /* Check parallel-safety of index predicate */
+ if (parallel_hazard_walker((Node *) ii_Predicate, context))
+ return true;
+
+ return false;
+}
+
+/*
+ * target_rel_index_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any existing index expressions or index predicate of a specified
+ * relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_index_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ List *index_oid_list;
+ ListCell *lc;
+ LOCKMODE lockmode = AccessShareLock;
+ bool max_hazard_found;
+
+ index_oid_list = RelationGetIndexList(rel);
+ foreach(lc, index_oid_list)
+ {
+ Relation index_rel;
+ List *ii_Expressions;
+ List *ii_Predicate;
+ List *temp_objects;
+ char temp_hazard;
+ Oid index_oid = lfirst_oid(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ index_rel = index_open(index_oid, lockmode);
+
+ /* Check index expression */
+ ii_Expressions = RelationGetIndexExpressions(index_rel);
+ ii_Predicate = RelationGetIndexPredicate(index_rel);
+
+ max_hazard_found = index_expr_parallel_hazard(index_rel,
+ ii_Expressions,
+ ii_Predicate,
+ context);
+
+ index_close(index_rel, lockmode);
+
+ if (max_hazard_found)
+ return true;
+
+ /* Add the index itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+
+ object = make_safety_object(index_oid, IndexRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ list_free(index_oid_list);
+
+ return false;
+}
+
+/*
+ * target_rel_domain_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for the specified DOMAIN type. Only CHECK expressions are examined
+ * for parallel-safety.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_domain_parallel_hazard(Oid typid,
+ max_parallel_hazard_context *context)
+{
+ ListCell *lc;
+ List *domain_list;
+ List *temp_objects;
+ char temp_hazard;
+
+ domain_list = GetDomainConstraints(typid);
+
+ foreach(lc, domain_list)
+ {
+ DomainConstraintState *r = (DomainConstraintState *) lfirst(lc);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) r->check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ else if (context->objects != NIL)
+ {
+ safety_object *object;
+ Oid constr_oid = get_domain_constraint_oid(typid,
+ r->name,
+ false);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_partitions_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any partitions of a specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_partitions_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context,
+ bool is_partition)
+{
+ int i;
+ PartitionDesc pdesc;
+ PartitionKey pkey;
+ ListCell *partexprs_item;
+ int partnatts;
+ List *partexprs,
+ *qual;
+
+ /*
+ * Since a partition's check expression is composed from its parent
+ * table's partition key expression, we do not need to check it again
+ * for a partition; the parallel safety of the parent table's partition
+ * key expression has already been checked.
+ */
+ if (!is_partition)
+ {
+ qual = RelationGetPartitionQual(rel);
+ if (parallel_hazard_walker((Node *) qual, context))
+ return true;
+ }
+
+ if (rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+ return false;
+
+ pkey = RelationGetPartitionKey(rel);
+
+ partnatts = get_partition_natts(pkey);
+ partexprs = get_partition_exprs(pkey);
+
+ partexprs_item = list_head(partexprs);
+ for (i = 0; i < partnatts; i++)
+ {
+ Oid funcOid = pkey->partsupfunc[i].fn_oid;
+
+ if (OidIsValid(funcOid))
+ {
+ char proparallel = func_parallel(funcOid);
+
+ if (max_parallel_hazard_test(proparallel, context) &&
+ !context->check_all)
+ return true;
+
+ else if (proparallel != PROPARALLEL_SAFE)
+ {
+ safety_object *object;
+
+ object = make_safety_object(funcOid, ProcedureRelationId,
+ proparallel);
+ context->objects = lappend(context->objects, object);
+ }
+ }
+
+ /* Check parallel-safety of any expressions in the partition key */
+ if (get_partition_col_attnum(pkey, i) == 0)
+ {
+ Node *check_expr = (Node *) lfirst(partexprs_item);
+
+ if (parallel_hazard_walker(check_expr, context))
+ return true;
+
+ partexprs_item = lnext(partexprs, partexprs_item);
+ }
+ }
+
+ /* Recursively check each partition ... */
+
+ /* Create the PartitionDirectory infrastructure if we didn't already */
+ if (context->partition_directory == NULL)
+ context->partition_directory =
+ CreatePartitionDirectory(CurrentMemoryContext, false);
+
+ pdesc = PartitionDirectoryLookup(context->partition_directory, rel);
+
+ for (i = 0; i < pdesc->nparts; i++)
+ {
+ Relation part_rel;
+ bool max_hazard_found;
+
+ part_rel = table_open(pdesc->oids[i], AccessShareLock);
+ max_hazard_found = target_rel_parallel_hazard_recurse(part_rel,
+ context,
+ true,
+ false);
+ table_close(part_rel, AccessShareLock);
+
+ if (max_hazard_found)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * target_rel_chk_constr_parallel_hazard
+ *
+ * If context->check_all is true, then find all the PARALLEL UNSAFE/RESTRICTED
+ * objects for any CHECK expressions or CHECK constraints related to the
+ * specified relation.
+ *
+ * If context->check_all is false, then find the worst parallel-hazard level.
+ */
+static bool
+target_rel_chk_constr_parallel_hazard(Relation rel,
+ max_parallel_hazard_context *context)
+{
+ char temp_hazard;
+ int i;
+ TupleDesc tupdesc;
+ List *temp_objects;
+ ConstrCheck *check;
+
+ tupdesc = RelationGetDescr(rel);
+
+ if (tupdesc->constr == NULL)
+ return false;
+
+ check = tupdesc->constr->check;
+
+ /*
+ * Determine if there are any CHECK constraints which are not
+ * parallel-safe.
+ */
+ for (i = 0; i < tupdesc->constr->num_check; i++)
+ {
+ Expr *check_expr = stringToNode(check[i].ccbin);
+
+ temp_objects = context->objects;
+ context->objects = NIL;
+ temp_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
+
+ if (parallel_hazard_walker((Node *) check_expr, context))
+ return true;
+
+ /* Add the constraint itself to the objects list */
+ if (context->objects != NIL)
+ {
+ Oid constr_oid;
+ safety_object *object;
+
+ constr_oid = get_relation_constraint_oid(rel->rd_rel->oid,
+ check[i].ccname,
+ true);
+
+ object = make_safety_object(constr_oid,
+ ConstraintRelationId,
+ context->max_hazard);
+
+ context->objects = lappend(context->objects, object);
+ }
+
+ (void) max_parallel_hazard_test(temp_hazard, context);
+
+ context->objects = list_concat(context->objects, temp_objects);
+ list_free(temp_objects);
+ }
+
+ return false;
+}
+
+/*
* is_parallel_allowed_for_modify
*
* Check at a high-level if parallel mode is able to be used for the specified
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 88faf4d..06d859c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -23,6 +23,8 @@
#include "access/sysattr.h"
#include "access/table.h"
#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_type.h"
#include "catalog/system_fk_info.h"
@@ -31,6 +33,7 @@
#include "common/keywords.h"
#include "funcapi.h"
#include "miscadmin.h"
+#include "optimizer/clauses.h"
#include "parser/scansup.h"
#include "pgstat.h"
#include "postmaster/syslogger.h"
@@ -43,6 +46,7 @@
#include "utils/lsyscache.h"
#include "utils/ruleutils.h"
#include "utils/timestamp.h"
+#include "utils/varlena.h"
/*
* Common subroutine for num_nulls() and num_nonnulls().
@@ -605,6 +609,96 @@ pg_collation_for(PG_FUNCTION_ARGS)
PG_RETURN_TEXT_P(cstring_to_text(generate_collation_name(collid)));
}
+/*
+ * Find the worst parallel-hazard level in the given relation
+ *
+ * Returns the worst parallel hazard level (the earliest in this list:
+ * PROPARALLEL_UNSAFE, PROPARALLEL_RESTRICTED, PROPARALLEL_SAFE) that can
+ * be found in the given relation.
+ */
+Datum
+pg_get_table_max_parallel_dml_hazard(PG_FUNCTION_ARGS)
+{
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ (void) target_rel_parallel_hazard(relOid, false,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+
+ PG_RETURN_CHAR(max_parallel_hazard);
+}
+
+/*
+ * Determine whether the target relation is safe for parallel modification.
+ *
+ * Return all the PARALLEL RESTRICTED/UNSAFE objects.
+ */
+Datum
+pg_get_table_parallel_dml_safety(PG_FUNCTION_ARGS)
+{
+#define PG_GET_PARALLEL_SAFETY_COLS 3
+ List *objects;
+ ListCell *object;
+ TupleDesc tupdesc;
+ Tuplestorestate *tupstore;
+ MemoryContext per_query_ctx;
+ MemoryContext oldcontext;
+ ReturnSetInfo *rsinfo;
+ char max_parallel_hazard;
+ Oid relOid = PG_GETARG_OID(0);
+
+ rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+
+ /* check to see if caller supports us returning a tuplestore */
+ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ if (!(rsinfo->allowedModes & SFRM_Materialize))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("materialize mode required, but it is not allowed in this context")));
+
+ /* Build a tuple descriptor for our result type */
+ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ elog(ERROR, "return type must be a row type");
+
+ per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+ tupstore = tuplestore_begin_heap(true, false, work_mem);
+ rsinfo->returnMode = SFRM_Materialize;
+ rsinfo->setResult = tupstore;
+ rsinfo->setDesc = tupdesc;
+
+ MemoryContextSwitchTo(oldcontext);
+
+ objects = target_rel_parallel_hazard(relOid, true,
+ PROPARALLEL_UNSAFE,
+ &max_parallel_hazard);
+ foreach(object, objects)
+ {
+ Datum values[PG_GET_PARALLEL_SAFETY_COLS];
+ bool nulls[PG_GET_PARALLEL_SAFETY_COLS];
+ safety_object *sobject = (safety_object *) lfirst(object);
+
+ memset(nulls, 0, sizeof(nulls));
+
+ values[0] = sobject->objid;
+ values[1] = sobject->classid;
+ values[2] = sobject->proparallel;
+
+ tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ }
+
+ /* clean up and return the tuplestore */
+ tuplestore_donestoring(tupstore);
+
+ return (Datum) 0;
+}
+
/*
* pg_relation_is_updatable - determine which update events the specified
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 326fae6..02a8f70 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -2535,6 +2535,23 @@ compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2)
}
/*
+ * GetDomainConstraints --- get DomainConstraintState list of specified domain type
+ */
+List *
+GetDomainConstraints(Oid type_id)
+{
+ TypeCacheEntry *typentry;
+ List *constraints = NIL;
+
+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
+
+ if (typentry->domainData != NULL)
+ constraints = typentry->domainData->constraints;
+
+ return constraints;
+}
+
+/*
* Load (or re-load) the enumData member of the typcache entry.
*/
static void
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252..4483cd1 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -3770,6 +3770,20 @@
provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass',
prosrc => 'pg_get_replica_identity_index' },
+{ oid => '6122',
+ descr => 'parallel unsafe/restricted objects in the target relation',
+ proname => 'pg_get_table_parallel_dml_safety', prorows => '100',
+ proretset => 't', provolatile => 'v', proparallel => 'u',
+ prorettype => 'record', proargtypes => 'regclass',
+ proallargtypes => '{regclass,oid,oid,char}',
+ proargmodes => '{i,o,o,o}',
+ proargnames => '{table_name, objid, classid, proparallel}',
+ prosrc => 'pg_get_table_parallel_dml_safety' },
+
+{ oid => '6123', descr => 'worst parallel-hazard level in the given relation for DML',
+ proname => 'pg_get_table_max_parallel_dml_hazard', prorettype => 'char', proargtypes => 'regclass',
+ prosrc => 'pg_get_table_max_parallel_dml_hazard', provolatile => 'v', proparallel => 'u' },
+
# Deferrable unique constraint trigger
{ oid => '1250', descr => 'deferred UNIQUE constraint check',
proname => 'unique_key_recheck', provolatile => 'v', prorettype => 'trigger',
@@ -3777,11 +3791,11 @@
# Generic referential integrity constraint triggers
{ oid => '1644', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_ins', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_ins' },
+ proname => 'RI_FKey_check_ins', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_ins' },
{ oid => '1645', descr => 'referential integrity FOREIGN KEY ... REFERENCES',
- proname => 'RI_FKey_check_upd', provolatile => 'v', prorettype => 'trigger',
- proargtypes => '', prosrc => 'RI_FKey_check_upd' },
+ proname => 'RI_FKey_check_upd', provolatile => 'v', proparallel => 'r',
+ prorettype => 'trigger', proargtypes => '', prosrc => 'RI_FKey_check_upd' },
{ oid => '1646', descr => 'referential integrity ON DELETE CASCADE',
proname => 'RI_FKey_cascade_del', provolatile => 'v', prorettype => 'trigger',
proargtypes => '', prosrc => 'RI_FKey_cascade_del' },
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 32b5656..f8b2a72 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -23,6 +23,17 @@ typedef struct
List **windowFuncs; /* lists of WindowFuncs for each winref */
} WindowFuncLists;
+/*
+ * Information about a table-related object which could affect the safety of
+ * parallel data modification on table.
+ */
+typedef struct safety_object
+{
+ Oid objid; /* OID of object itself */
+ Oid classid; /* OID of its catalog */
+ char proparallel; /* parallel safety of the object */
+} safety_object;
+
extern bool contain_agg_clause(Node *clause);
extern bool contain_window_function(Node *clause);
@@ -54,5 +65,8 @@ extern Query *inline_set_returning_function(PlannerInfo *root,
RangeTblEntry *rte);
extern bool is_parallel_allowed_for_modify(Query *parse);
+extern List *target_rel_parallel_hazard(Oid relOid, bool findall,
+ char max_interesting,
+ char *max_hazard);
#endif /* CLAUSES_H */
diff --git a/src/include/utils/typcache.h b/src/include/utils/typcache.h
index 1d68a9a..28ca7d8 100644
--- a/src/include/utils/typcache.h
+++ b/src/include/utils/typcache.h
@@ -199,6 +199,8 @@ extern uint64 assign_record_type_identifier(Oid type_id, int32 typmod);
extern int compare_values_of_enum(TypeCacheEntry *tcache, Oid arg1, Oid arg2);
+extern List *GetDomainConstraints(Oid type_id);
+
extern size_t SharedRecordTypmodRegistryEstimate(void);
extern void SharedRecordTypmodRegistryInit(SharedRecordTypmodRegistry *,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 37cf4b2..307bb97 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3491,6 +3491,7 @@ rm_detail_t
role_auth_extra
row_security_policy_hook_type
rsv_callback
+safety_object
saophash_hash
save_buffer
scram_state
--
2.7.2.windows.1
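For reviewers, here is a rough sketch of how the two functions added above could be exercised on a server with this patch applied. The table and constraint names are purely illustrative, and the output is omitted since the object OIDs depend on the catalog state:

```sql
-- A simple target table with a CHECK constraint and a column default,
-- both of which the new checks walk through.
CREATE TABLE dml_test (a int CHECK (a > 0), b text DEFAULT 'x');

-- Worst parallel-hazard level for DML on the table:
-- 's' (safe), 'r' (restricted), or 'u' (unsafe).
SELECT pg_get_table_max_parallel_dml_hazard('dml_test'::regclass);

-- All PARALLEL RESTRICTED/UNSAFE objects that affect DML on the table,
-- as (objid, classid, proparallel) rows.
SELECT * FROM pg_get_table_parallel_dml_safety('dml_test'::regclass);
```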
Attachment: v19-0004-Cache-parallel-dml-safety.patch
From dffebd8f53ffe275814f151ed1ff2dd4dac05707 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@fujitsu.com>
Date: Thu, 19 Aug 2021 13:48:50 +0800
Subject: [PATCH] Cache parallel dml safety
The planner is updated to perform additional parallel-safety checks for a
non-partitioned table if pg_class.relparalleldml is DEFAULT ('d'), and to cache
the parallel safety for the relation.
Whenever any function's parallel-safety is changed, invalidate the cached
parallel-safety for all relations in relcache for a particular database.
For a partitioned table, if pg_class.relparalleldml is DEFAULT ('d'), assume
that the table is UNSAFE to be modified in parallel mode.
If pg_class.relparalleldml is SAFE/RESTRICTED/UNSAFE, respect the specified
parallel dml safety instead of checking it again.
---
src/backend/catalog/pg_proc.c | 13 +++++
src/backend/commands/functioncmds.c | 18 ++++++-
src/backend/optimizer/util/clauses.c | 78 ++++++++++++++++++++++------
src/backend/utils/cache/inval.c | 53 +++++++++++++++++++
src/backend/utils/cache/relcache.c | 19 +++++++
src/include/storage/sinval.h | 8 +++
src/include/utils/inval.h | 2 +
src/include/utils/rel.h | 1 +
src/include/utils/relcache.h | 2 +
9 files changed, 176 insertions(+), 18 deletions(-)
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1454d2fb67..9745ee8558 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -39,6 +39,7 @@
#include "tcop/tcopprot.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/regproc.h"
#include "utils/rel.h"
@@ -367,6 +368,9 @@ ProcedureCreate(const char *procedureName,
Datum proargnames;
bool isnull;
const char *dropcmd;
+ char old_proparallel;
+
+ old_proparallel = oldproc->proparallel;
if (!replace)
ereport(ERROR,
@@ -559,6 +563,15 @@ ProcedureCreate(const char *procedureName,
tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
CatalogTupleUpdate(rel, &tup->t_self, tup);
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (old_proparallel != parallel)
+ CacheInvalidateParallelDML();
+
ReleaseSysCache(oldtup);
is_update = true;
}
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 79d875ab10..57d9ca52e5 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -70,6 +70,7 @@
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/guc.h"
+#include "utils/inval.h"
#include "utils/lsyscache.h"
#include "utils/memutils.h"
#include "utils/rel.h"
@@ -1504,7 +1505,22 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
repl_val, repl_null, repl_repl);
}
if (parallel_item)
- procForm->proparallel = interpret_func_parallel(parallel_item);
+ {
+ char proparallel;
+
+ proparallel = interpret_func_parallel(parallel_item);
+
+ /*
+ * If the function's parallel safety changed, the tables that depend
+ * on this function won't be safe to be modified in parallel mode
+ * anymore. So, we need to invalidate the parallel dml flag in
+ * relcache.
+ */
+ if (proparallel != procForm->proparallel)
+ CacheInvalidateParallelDML();
+
+ procForm->proparallel = proparallel;
+ }
/* Do the update */
CatalogTupleUpdate(rel, &tup->t_self, tup);
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 749cb0dacd..5c27fc222e 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -187,7 +187,7 @@ static Node *substitute_actual_srf_parameters_mutator(Node *node,
substitute_actual_srf_parameters_context *context);
static bool max_parallel_hazard_test(char proparallel, max_parallel_hazard_context *context);
static safety_object *make_safety_object(Oid objid, Oid classid, char proparallel);
-
+static char max_parallel_dml_hazard(Query *parse, max_parallel_hazard_context *context);
/*****************************************************************************
* Aggregate-function clause manipulation
@@ -654,7 +654,6 @@ contain_volatile_functions_not_nextval_walker(Node *node, void *context)
char
max_parallel_hazard(Query *parse)
{
- bool max_hazard_found;
max_parallel_hazard_context context;
context.max_hazard = PROPARALLEL_SAFE;
@@ -664,28 +663,73 @@ max_parallel_hazard(Query *parse)
context.objects = NIL;
context.partition_directory = NULL;
- max_hazard_found = max_parallel_hazard_walker((Node *) parse, &context);
+ if (!max_parallel_hazard_walker((Node *) parse, &context))
+ (void) max_parallel_dml_hazard(parse, &context);
+
+ return context.max_hazard;
+}
+
+/* Check the safety of parallel data modification */
+static char
+max_parallel_dml_hazard(Query *parse,
+ max_parallel_hazard_context *context)
+{
+ RangeTblEntry *rte;
+ Relation target_rel;
+ char hazard;
+
+ if (!IsModifySupportedInParallelMode(parse->commandType))
+ return context->max_hazard;
+
+ /*
+ * The target table is already locked by the caller (this is done in the
+ * parse/analyze phase), and remains locked until end-of-transaction.
+ */
+ rte = rt_fetch(parse->resultRelation, parse->rtable);
+ target_rel = table_open(rte->relid, NoLock);
+
+ /*
+ * If the user has set a specific parallel dml safety (safe, restricted,
+ * or unsafe), respect it. If not set, check the safety automatically for
+ * a non-partitioned table; for a partitioned table, consider it unsafe.
+ */
+ hazard = target_rel->rd_rel->relparalleldml;
+ if (target_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&
+ hazard == PROPARALLEL_DEFAULT)
+ hazard = PROPARALLEL_UNSAFE;
+
+ if (hazard != PROPARALLEL_DEFAULT)
+ (void) max_parallel_hazard_test(hazard, context);
- if (!max_hazard_found &&
- IsModifySupportedInParallelMode(parse->commandType))
+ /* Do parallel safety check for the target relation */
+ else if (!target_rel->rd_paralleldml)
{
- RangeTblEntry *rte;
- Relation target_rel;
+ bool max_hazard_found;
+ char pre_max_hazard = context->max_hazard;
+ context->max_hazard = PROPARALLEL_SAFE;
- rte = rt_fetch(parse->resultRelation, parse->rtable);
+ max_hazard_found = target_rel_parallel_hazard_recurse(target_rel,
+ context,
+ false,
+ false);
- /*
- * The target table is already locked by the caller (this is done in the
- * parse/analyze phase), and remains locked until end-of-transaction.
- */
- target_rel = table_open(rte->relid, NoLock);
+ /* Cache the parallel dml safety of this relation */
+ target_rel->rd_paralleldml = context->max_hazard;
- (void) max_parallel_hazard_test(target_rel->rd_rel->relparalleldml,
- &context);
- table_close(target_rel, NoLock);
+ if (!max_hazard_found)
+ (void) max_parallel_hazard_test(pre_max_hazard, context);
}
- return context.max_hazard;
+ /*
+ * If we already cached the parallel dml safety of this relation, we don't
+ * need to check it again.
+ */
+ else
+ (void) max_parallel_hazard_test(target_rel->rd_paralleldml, context);
+
+ table_close(target_rel, NoLock);
+
+ return context->max_hazard;
}
/*
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 9352c68090..bacb18e10e 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -478,6 +478,27 @@ AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
AddInvalidationMessage(group, RelCacheMsgs, &msg);
}
+/*
+ * Add a parallel dml inval entry
+ */
+static void
+AddParallelDMLInvalidationMessage(InvalidationMsgsGroup *group)
+{
+ SharedInvalidationMessage msg;
+
+ /* Don't add a duplicate item. */
+ ProcessMessageSubGroup(group, RelCacheMsgs,
+ if (msg->rc.id == SHAREDINVALPARALLELDML_ID)
+ return);
+
+ /* OK, add the item */
+ msg.pd.id = SHAREDINVALPARALLELDML_ID;
+ /* check AddCatcacheInvalidationMessage() for an explanation */
+ VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
+
+ AddInvalidationMessage(group, RelCacheMsgs, &msg);
+}
+
/*
* Append one group of invalidation messages to another, resetting
* the source group to empty.
@@ -576,6 +597,21 @@ RegisterRelcacheInvalidation(Oid dbId, Oid relId)
transInvalInfo->RelcacheInitFileInval = true;
}
+/*
+ * RegisterParallelDMLInvalidation
+ *
+ * Register an invalidation event for the parallel dml flag in all relcache entries.
+ */
+static void
+RegisterParallelDMLInvalidation(void)
+{
+ AddParallelDMLInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs);
+
+ (void) GetCurrentCommandId(true);
+
+ transInvalInfo->RelcacheInitFileInval = true;
+}
+
/*
* RegisterSnapshotInvalidation
*
@@ -668,6 +704,11 @@ LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
else if (msg->sn.dbId == MyDatabaseId)
InvalidateCatalogSnapshot();
}
+ else if (msg->id == SHAREDINVALPARALLELDML_ID)
+ {
+ /* Invalidate the parallel dml flag in all relcache entries */
+ ParallelDMLInvalidate();
+ }
else
elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
@@ -1370,6 +1411,18 @@ CacheInvalidateRelcacheAll(void)
RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
}
+/*
+ * CacheInvalidateParallelDML
+ * Register invalidation of the whole relcache at the end of command.
+ */
+void
+CacheInvalidateParallelDML(void)
+{
+ PrepareInvalidationState();
+
+ RegisterParallelDMLInvalidation();
+}
+
/*
* CacheInvalidateRelcacheByTuple
* As above, but relation is identified by passing its pg_class tuple.
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 70d8ecb1dd..57fe97dcd4 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2934,6 +2934,25 @@ RelationCacheInvalidate(void)
list_free(rebuildList);
}
+/*
+ * ParallelDMLInvalidate
+ * Invalidate the parallel dml flag in all relcache entries.
+ */
+void
+ParallelDMLInvalidate(void)
+{
+ HASH_SEQ_STATUS status;
+ RelIdCacheEnt *idhentry;
+ Relation relation;
+
+ hash_seq_init(&status, RelationIdCache);
+
+ while ((idhentry = (RelIdCacheEnt *) hash_seq_search(&status)) != NULL)
+ {
+ relation = idhentry->reldesc;
+ relation->rd_paralleldml = 0;
+ }
+}
/*
* RelationCloseSmgrByOid - close a relcache entry's smgr link
*
diff --git a/src/include/storage/sinval.h b/src/include/storage/sinval.h
index f03dc23b14..9859a3bea0 100644
--- a/src/include/storage/sinval.h
+++ b/src/include/storage/sinval.h
@@ -110,6 +110,13 @@ typedef struct
Oid relId; /* relation ID */
} SharedInvalSnapshotMsg;
+#define SHAREDINVALPARALLELDML_ID (-6)
+
+typedef struct
+{
+ int8 id; /* type field --- must be first */
+} SharedInvalParallelDMLMsg;
+
typedef union
{
int8 id; /* type field --- must be first */
@@ -119,6 +126,7 @@ typedef union
SharedInvalSmgrMsg sm;
SharedInvalRelmapMsg rm;
SharedInvalSnapshotMsg sn;
+ SharedInvalParallelDMLMsg pd;
} SharedInvalidationMessage;
diff --git a/src/include/utils/inval.h b/src/include/utils/inval.h
index 770672890b..f1ce1462c1 100644
--- a/src/include/utils/inval.h
+++ b/src/include/utils/inval.h
@@ -64,4 +64,6 @@ extern void CallSyscacheCallbacks(int cacheid, uint32 hashvalue);
extern void InvalidateSystemCaches(void);
extern void LogLogicalInvalidations(void);
+
+extern void CacheInvalidateParallelDML(void);
#endif /* INVAL_H */
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b4faa1c123..52574e9d40 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -63,6 +63,7 @@ typedef struct RelationData
bool rd_indexvalid; /* is rd_indexlist valid? (also rd_pkindex and
* rd_replidindex) */
bool rd_statvalid; /* is rd_statlist valid? */
+ char rd_paralleldml; /* parallel dml safety */
/*----------
* rd_createSubid is the ID of the highest subtransaction the rel has
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 5ea225ac2d..5813aa50a0 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -128,6 +128,8 @@ extern void RelationCacheInvalidate(void);
extern void RelationCloseSmgrByOid(Oid relationId);
+extern void ParallelDMLInvalidate(void);
+
#ifdef USE_ASSERT_CHECKING
extern void AssertPendingSyncs_RelationCache(void);
#else
--
2.18.4
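To make the invalidation behavior of the 0004 patch concrete, here is a sketch of the scenario it guards against (assumes the patched server; the function and trigger names are made up for illustration):

```sql
-- A trigger function that is initially parallel safe.
CREATE FUNCTION trig_fn() RETURNS trigger
  LANGUAGE plpgsql PARALLEL SAFE
  AS $$ BEGIN RETURN NEW; END $$;

CREATE TABLE t (a int);
CREATE TRIGGER t_trig BEFORE INSERT ON t
  FOR EACH ROW EXECUTE FUNCTION trig_fn();

-- Planning a data-modification statement on t computes and caches the
-- relation's parallel dml safety in its relcache entry (rd_paralleldml).

-- Changing the function's parallel safety must invalidate that cached
-- flag via CacheInvalidateParallelDML(); otherwise later statements
-- could still treat DML on t as parallel safe.
ALTER FUNCTION trig_fn() PARALLEL UNSAFE;
```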
Attachment: v19-0005-Regression-test-and-doc-updates.patch
From 7ec228cd54743d92b67c80c6c362938de06e6305 Mon Sep 17 00:00:00 2001
From: "houzj.fnst" <houzj.fnst@fujitsu.com>
Date: Wed, 1 Sep 2021 15:58:39 +0800
Subject: [PATCH] Regression-test-and-doc-updates
---
contrib/test_decoding/expected/ddl.out | 4 +
doc/src/sgml/func.sgml | 61 ++
doc/src/sgml/ref/alter_foreign_table.sgml | 13 +
doc/src/sgml/ref/alter_function.sgml | 2 +-
doc/src/sgml/ref/alter_table.sgml | 12 +
doc/src/sgml/ref/create_foreign_table.sgml | 39 +
doc/src/sgml/ref/create_table.sgml | 44 ++
doc/src/sgml/ref/create_table_as.sgml | 38 +
src/test/regress/expected/alter_table.out | 2 +
src/test/regress/expected/compression_1.out | 9 +
src/test/regress/expected/copy2.out | 1 +
src/test/regress/expected/create_table.out | 14 +
.../regress/expected/create_table_like.out | 8 +
src/test/regress/expected/domain.out | 2 +
src/test/regress/expected/foreign_data.out | 42 ++
src/test/regress/expected/identity.out | 1 +
src/test/regress/expected/inherit.out | 13 +
src/test/regress/expected/insert.out | 12 +
src/test/regress/expected/insert_parallel.out | 713 ++++++++++++++++++
src/test/regress/expected/psql.out | 58 +-
src/test/regress/expected/publication.out | 4 +
.../regress/expected/replica_identity.out | 1 +
src/test/regress/expected/rowsecurity.out | 1 +
src/test/regress/expected/rules.out | 3 +
src/test/regress/expected/stats_ext.out | 1 +
src/test/regress/expected/triggers.out | 1 +
src/test/regress/expected/update.out | 1 +
src/test/regress/output/tablespace.source | 2 +
src/test/regress/parallel_schedule | 1 +
src/test/regress/sql/insert_parallel.sql | 381 ++++++++++
30 files changed, 1456 insertions(+), 28 deletions(-)
create mode 100644 src/test/regress/expected/insert_parallel.out
create mode 100644 src/test/regress/sql/insert_parallel.sql
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c78..45aa25bff8 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -446,6 +446,7 @@ WITH (user_catalog_table = true)
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -460,6 +461,7 @@ ALTER TABLE replication_metadata RESET (user_catalog_table);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
INSERT INTO replication_metadata(relation, options)
VALUES ('bar', ARRAY['a', 'b']);
@@ -473,6 +475,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = true);
options | text[] | | | | extended | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=true
INSERT INTO replication_metadata(relation, options)
@@ -492,6 +495,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
rewritemeornot | integer | | | | plain | |
Indexes:
"replication_metadata_pkey" PRIMARY KEY, btree (id)
+Parallel DML: default
Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 78812b2dbe..49278d9e21 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -24250,6 +24250,67 @@ SELECT collation for ('foo' COLLATE "de_DE");
Undefined objects are identified with <literal>NULL</literal> values.
</para></entry>
</row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_parallel_dml_safety</primary>
+ </indexterm>
+ <function>pg_get_table_parallel_dml_safety</function> ( <parameter>table_name</parameter> <type>regclass</type> )
+ <returnvalue>record</returnvalue>
+ ( <parameter>objid</parameter> <type>oid</type>,
+ <parameter>classid</parameter> <type>oid</type>,
+ <parameter>proparallel</parameter> <type>char</type> )
+ </para>
+ <para>
+ Returns a row containing enough information to uniquely identify the
+ parallel unsafe/restricted table-related objects from which the
+ table's parallel DML safety is determined. This information can be
+ used during development to accurately declare a table's parallel
+ DML safety, or to identify any problematic objects
+ if parallel DML fails or behaves unexpectedly. Note that when the
+ use of an object-related parallel unsafe/restricted function is
+ detected, both the function OID and the object OID are returned.
+ <parameter>classid</parameter> is the OID of the system catalog
+ containing the object;
+ <parameter>objid</parameter> is the OID of the object itself.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="func_table_entry"><para role="func_signature">
+ <indexterm>
+ <primary>pg_get_table_max_parallel_dml_hazard</primary>
+ </indexterm>
+ <function>pg_get_table_max_parallel_dml_hazard</function> ( <type>regclass</type> )
+ <returnvalue>char</returnvalue>
+ </para>
+ <para>
+ Returns the worst parallel DML safety hazard that can be found in the
+ given relation:
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>s</literal> safe
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>r</literal> restricted
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>u</literal> unsafe
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ This function allows a quick overall check without identifying the
+ specific parallel-related objects involved.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/doc/src/sgml/ref/alter_foreign_table.sgml b/doc/src/sgml/ref/alter_foreign_table.sgml
index 7ca03f3ac9..ca4b1c261e 100644
--- a/doc/src/sgml/ref/alter_foreign_table.sgml
+++ b/doc/src/sgml/ref/alter_foreign_table.sgml
@@ -29,6 +29,8 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
RENAME TO <replaceable class="parameter">new_name</replaceable>
ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
SET SCHEMA <replaceable class="parameter">new_schema</replaceable>
+ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -299,6 +301,17 @@ ALTER FOREIGN TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceab
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See the similar form of <link linkend="sql-altertable"><command>ALTER TABLE</command></link>
+ for more details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/alter_function.sgml b/doc/src/sgml/ref/alter_function.sgml
index 0ee756a94d..a7088bc1cb 100644
--- a/doc/src/sgml/ref/alter_function.sgml
+++ b/doc/src/sgml/ref/alter_function.sgml
@@ -38,7 +38,7 @@ ALTER FUNCTION <replaceable>name</replaceable> [ ( [ [ <replaceable class="param
IMMUTABLE | STABLE | VOLATILE
[ NOT ] LEAKPROOF
[ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
- PARALLEL { UNSAFE | RESTRICTED | SAFE }
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
COST <replaceable class="parameter">execution_cost</replaceable>
ROWS <replaceable class="parameter">result_rows</replaceable>
SUPPORT <replaceable class="parameter">support_function</replaceable>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 81291577f8..53bbacf9db 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -37,6 +37,8 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
ATTACH PARTITION <replaceable class="parameter">partition_name</replaceable> { FOR VALUES <replaceable class="parameter">partition_bound_spec</replaceable> | DEFAULT }
ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
DETACH PARTITION <replaceable class="parameter">partition_name</replaceable> [ CONCURRENTLY | FINALIZE ]
+ALTER TABLE [ IF EXISTS ] <replaceable class="parameter">name</replaceable>
+ PARALLEL { DEFAULT | UNSAFE | RESTRICTED | SAFE }
<phrase>where <replaceable class="parameter">action</replaceable> is one of:</phrase>
@@ -1030,6 +1032,16 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML</literal></term>
+ <listitem>
+ <para>
+ Change whether the data in the table can be modified in parallel mode.
+ See <link linkend="sql-createtable"><command>CREATE TABLE</command></link> for details.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</para>
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index f9477efe58..32372beed0 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -27,6 +27,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
[, ... ]
] )
[ INHERITS ( <replaceable>parent_table</replaceable> [, ... ] ) ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -36,6 +37,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name
| <replaceable>table_constraint</replaceable> }
[, ... ]
) ] <replaceable class="parameter">partition_bound_spec</replaceable>
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
SERVER <replaceable class="parameter">server_name</replaceable>
[ OPTIONS ( <replaceable class="parameter">option</replaceable> '<replaceable class="parameter">value</replaceable>' [, ... ] ) ]
@@ -290,6 +292,43 @@ CHECK ( <replaceable class="parameter">expression</replaceable> ) [ NO INHERIT ]
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, constraints, etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">server_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 473a0a4aeb..5521f5123e 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -33,6 +33,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
OF <replaceable class="parameter">type_name</replaceable> [ (
@@ -45,6 +46,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] <replaceable class="parameter">table_name</replaceable>
PARTITION OF <replaceable class="parameter">parent_table</replaceable> [ (
@@ -57,6 +59,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+[ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
<phrase>where <replaceable class="parameter">column_constraint</replaceable> is:</phrase>
@@ -1336,6 +1339,47 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="sql-createtable-paralleldmlsafety">
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of parallel modification will be checked automatically; this is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the table can't be modified
+ in parallel mode, and this forces a serial execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader.
+ <literal>PARALLEL DML SAFE</literal> indicates that the data in the table
+ can be modified in parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Note that for a partitioned table, <literal>PARALLEL DML DEFAULT</literal>
+ is treated the same as <literal>PARALLEL DML UNSAFE</literal>; that is,
+ the data in the table can't be modified in parallel mode.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, constraints,
+ etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>USING INDEX TABLESPACE <replaceable class="parameter">tablespace_name</replaceable></literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index 07558ab56c..ba5f80d45c 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -27,6 +27,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
[ WITH ( <replaceable class="parameter">storage_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
+ [ PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } ]
AS <replaceable>query</replaceable>
[ WITH [ NO ] DATA ]
</synopsis>
@@ -223,6 +224,43 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE } </literal></term>
+ <listitem>
+ <para>
+ <literal>PARALLEL DML DEFAULT</literal> indicates that the safety of
+ parallel modification will be checked automatically. This is the default.
+ <literal>PARALLEL DML UNSAFE</literal> indicates that the data in the
+ table can't be modified in parallel mode, and this forces a serial
+ execution plan for DML statements operating on the table.
+ <literal>PARALLEL DML RESTRICTED</literal> indicates that the data in the
+ table can be modified in parallel mode, but the modification is
+ restricted to the parallel group leader. <literal>PARALLEL DML
+ SAFE</literal> indicates that the data in the table can be modified in
+ parallel mode without restriction. Note that
+ <productname>PostgreSQL</productname> currently does not support data
+ modification by parallel workers.
+ </para>
+
+ <para>
+ Tables should be labeled parallel DML unsafe/restricted if any parallel
+ unsafe/restricted function could be executed when modifying the data in
+ the table (e.g., functions in triggers, index expressions, constraints, etc.).
+ </para>
+
+ <para>
+ To assist in correctly labeling the parallel DML safety level of a table,
+ PostgreSQL provides some utility functions that may be used during
+ application development. Refer to
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_parallel_dml_safety()</function></link> and
+ <link linkend="functions-info-object-table">
+ <function>pg_get_table_max_parallel_dml_hazard()</function></link> for more information.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable>query</replaceable></term>
<listitem>
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 4bee0c1173..5fefbe9347 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -2206,6 +2206,7 @@ alter table test_storage alter column a set storage external;
b | integer | | | 0 | plain | |
Indexes:
"test_storage_idx" btree (b, a)
+Parallel DML: default
\d+ test_storage_idx
Index "public.test_storage_idx"
@@ -4193,6 +4194,7 @@ ALTER TABLE range_parted2 DETACH PARTITION part_rp CONCURRENTLY;
a | integer | | | | plain | |
Partition key: RANGE (a)
Number of partitions: 0
+Parallel DML: default
-- constraint should be created
\d part_rp
diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out
index 1ce2962d55..ad2b1ff001 100644
--- a/src/test/regress/expected/compression_1.out
+++ b/src/test/regress/expected/compression_1.out
@@ -12,6 +12,7 @@ INSERT INTO cmdata VALUES(repeat('1234567890', 1000));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
CREATE TABLE cmdata1(f1 TEXT COMPRESSION lz4);
ERROR: compression method lz4 not supported
@@ -51,6 +52,7 @@ SELECT * INTO cmmove1 FROM cmdata;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | text | | | | extended | | |
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmmove1;
pg_column_compression
@@ -138,6 +140,7 @@ CREATE TABLE cmdata2 (f1 int);
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
\d+ cmdata2
@@ -145,6 +148,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
\d+ cmdata2
@@ -152,6 +156,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE int USING f1::integer;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | integer | | | | plain | | |
+Parallel DML: default
--changing column storage should not impact the compression method
--but the data should not be compressed
@@ -162,6 +167,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+----------+-------------+--------------+-------------
f1 | character varying | | | | extended | pglz | |
+Parallel DML: default
ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
\d+ cmdata2
@@ -169,6 +175,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET STORAGE plain;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | pglz | |
+Parallel DML: default
INSERT INTO cmdata2 VALUES (repeat('123456789', 800));
SELECT pg_column_compression(f1) FROM cmdata2;
@@ -249,6 +256,7 @@ INSERT INTO cmdata VALUES (repeat('123456789', 4004));
f1 | text | | | | extended | pglz | |
Indexes:
"idx" btree (f1)
+Parallel DML: default
SELECT pg_column_compression(f1) FROM cmdata;
pg_column_compression
@@ -263,6 +271,7 @@ ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION default;
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+-------------------+-----------+----------+---------+---------+-------------+--------------+-------------
f1 | character varying | | | | plain | | |
+Parallel DML: default
-- test alter compression method for materialized views
ALTER MATERIALIZED VIEW compressmv ALTER COLUMN x SET COMPRESSION lz4;
diff --git a/src/test/regress/expected/copy2.out b/src/test/regress/expected/copy2.out
index 5f3685e9ef..cd0d153461 100644
--- a/src/test/regress/expected/copy2.out
+++ b/src/test/regress/expected/copy2.out
@@ -519,6 +519,7 @@ alter table check_con_tbl add check (check_con_function(check_con_tbl.*));
f1 | integer | | | | plain | |
Check constraints:
"check_con_tbl_check" CHECK (check_con_function(check_con_tbl.*))
+Parallel DML: default
copy check_con_tbl from stdin;
NOTICE: input = {"f1":1}
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index a958b84979..fe10ac8bb0 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -505,6 +505,7 @@ Number of partitions: 0
b | text | | | | extended | |
Partition key: RANGE (((a + 1)), substr(b, 1, 5))
Number of partitions: 0
+Parallel DML: default
INSERT INTO partitioned2 VALUES (1, 'hello');
ERROR: no partition of relation "partitioned2" found for row
@@ -518,6 +519,7 @@ CREATE TABLE part2_1 PARTITION OF partitioned2 FOR VALUES FROM (-1, 'aaaaa') TO
b | text | | | | extended | |
Partition of: partitioned2 FOR VALUES FROM ('-1', 'aaaaa') TO (100, 'ccccc')
Partition constraint: (((a + 1) IS NOT NULL) AND (substr(b, 1, 5) IS NOT NULL) AND (((a + 1) > '-1'::integer) OR (((a + 1) = '-1'::integer) AND (substr(b, 1, 5) >= 'aaaaa'::text))) AND (((a + 1) < 100) OR (((a + 1) = 100) AND (substr(b, 1, 5) < 'ccccc'::text))))
+Parallel DML: default
DROP TABLE partitioned, partitioned2;
-- check reference to partitioned table's rowtype in partition descriptor
@@ -559,6 +561,7 @@ select * from partitioned where partitioned = '(1,2)'::partitioned;
b | integer | | | | plain | |
Partition of: partitioned FOR VALUES IN ('(1,2)')
Partition constraint: (((partitioned1.*)::partitioned IS DISTINCT FROM NULL) AND ((partitioned1.*)::partitioned = '(1,2)'::partitioned))
+Parallel DML: default
drop table partitioned;
-- check that dependencies of partition columns are handled correctly
@@ -618,6 +621,7 @@ Partitions: part_null FOR VALUES IN (NULL),
part_p1 FOR VALUES IN (1),
part_p2 FOR VALUES IN (2),
part_p3 FOR VALUES IN (3)
+Parallel DML: default
-- forbidden expressions for partition bound with list partitioned table
CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES IN (somename);
@@ -1064,6 +1068,7 @@ drop table test_part_coll_posix;
b | integer | | not null | 1 | plain | |
Partition of: parted FOR VALUES IN ('b')
Partition constraint: ((a IS NOT NULL) AND (a = 'b'::text))
+Parallel DML: default
-- Both partition bound and partition key in describe output
\d+ part_c
@@ -1076,6 +1081,7 @@ Partition of: parted FOR VALUES IN ('c')
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text))
Partition key: RANGE (b)
Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
+Parallel DML: default
-- a level-2 partition's constraint will include the parent's expressions
\d+ part_c_1_10
@@ -1086,6 +1092,7 @@ Partitions: part_c_1_10 FOR VALUES FROM (1) TO (10)
b | integer | | not null | 0 | plain | |
Partition of: part_c FOR VALUES FROM (1) TO (10)
Partition constraint: ((a IS NOT NULL) AND (a = 'c'::text) AND (b IS NOT NULL) AND (b >= 1) AND (b < 10))
+Parallel DML: default
-- Show partition count in the parent's describe output
-- Tempted to include \d+ output listing partitions with bound info but
@@ -1120,6 +1127,7 @@ CREATE TABLE unbounded_range_part PARTITION OF range_parted4 FOR VALUES FROM (MI
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL))
+Parallel DML: default
DROP TABLE unbounded_range_part;
CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE);
@@ -1132,6 +1140,7 @@ CREATE TABLE range_parted4_1 PARTITION OF range_parted4 FOR VALUES FROM (MINVALU
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (MINVALUE, MINVALUE, MINVALUE) TO (1, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND (abs(a) <= 1))
+Parallel DML: default
CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE);
\d+ range_parted4_2
@@ -1143,6 +1152,7 @@ CREATE TABLE range_parted4_2 PARTITION OF range_parted4 FOR VALUES FROM (3, 4, 5
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (3, 4, 5) TO (6, 7, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 3) OR ((abs(a) = 3) AND (abs(b) > 4)) OR ((abs(a) = 3) AND (abs(b) = 4) AND (c >= 5))) AND ((abs(a) < 6) OR ((abs(a) = 6) AND (abs(b) <= 7))))
+Parallel DML: default
CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE);
\d+ range_parted4_3
@@ -1154,6 +1164,7 @@ CREATE TABLE range_parted4_3 PARTITION OF range_parted4 FOR VALUES FROM (6, 8, M
c | integer | | | | plain | |
Partition of: range_parted4 FOR VALUES FROM (6, 8, MINVALUE) TO (9, MAXVALUE, MAXVALUE)
Partition constraint: ((abs(a) IS NOT NULL) AND (abs(b) IS NOT NULL) AND (c IS NOT NULL) AND ((abs(a) > 6) OR ((abs(a) = 6) AND (abs(b) >= 8))) AND (abs(a) <= 9))
+Parallel DML: default
DROP TABLE range_parted4;
-- user-defined operator class in partition key
@@ -1190,6 +1201,7 @@ SELECT obj_description('parted_col_comment'::regclass);
b | text | | | | extended | |
Partition key: LIST (a)
Number of partitions: 0
+Parallel DML: default
DROP TABLE parted_col_comment;
-- list partitioning on array type column
@@ -1202,6 +1214,7 @@ CREATE TABLE arrlp12 PARTITION OF arrlp FOR VALUES IN ('{1}', '{2}');
a | integer[] | | | | extended | |
Partition of: arrlp FOR VALUES IN ('{1}', '{2}')
Partition constraint: ((a IS NOT NULL) AND ((a = '{1}'::integer[]) OR (a = '{2}'::integer[])))
+Parallel DML: default
DROP TABLE arrlp;
-- partition on boolean column
@@ -1216,6 +1229,7 @@ create table boolspart_f partition of boolspart for values in (false);
Partition key: LIST (a)
Partitions: boolspart_f FOR VALUES IN (false),
boolspart_t FOR VALUES IN (true)
+Parallel DML: default
drop table boolspart;
-- partitions mixing temporary and permanent relations
diff --git a/src/test/regress/expected/create_table_like.out b/src/test/regress/expected/create_table_like.out
index 0ed94f1d2f..3757e2f8d0 100644
--- a/src/test/regress/expected/create_table_like.out
+++ b/src/test/regress/expected/create_table_like.out
@@ -333,6 +333,7 @@ CREATE TABLE ctlt12_storage (LIKE ctlt1 INCLUDING STORAGE, LIKE ctlt2 INCLUDING
a | text | | not null | | main | |
b | text | | | | extended | |
c | text | | | | external | |
+Parallel DML: default
CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDING COMMENTS);
\d+ ctlt12_comments
@@ -342,6 +343,7 @@ CREATE TABLE ctlt12_comments (LIKE ctlt1 INCLUDING COMMENTS, LIKE ctlt2 INCLUDIN
a | text | | not null | | extended | | A
b | text | | | | extended | | B
c | text | | | | extended | | C
+Parallel DML: default
CREATE TABLE ctlt1_inh (LIKE ctlt1 INCLUDING CONSTRAINTS INCLUDING COMMENTS) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -356,6 +358,7 @@ NOTICE: merging constraint "ctlt1_a_check" with inherited definition
Check constraints:
"ctlt1_a_check" CHECK (length(a) > 2)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt1_inh'::regclass;
description
@@ -378,6 +381,7 @@ Check constraints:
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1,
ctlt3
+Parallel DML: default
CREATE TABLE ctlt13_like (LIKE ctlt3 INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS INCLUDING STORAGE) INHERITS (ctlt1);
NOTICE: merging column "a" with inherited definition
@@ -395,6 +399,7 @@ Check constraints:
"ctlt3_a_check" CHECK (length(a) < 5)
"ctlt3_c_check" CHECK (length(c) < 7)
Inherits: ctlt1
+Parallel DML: default
SELECT description FROM pg_description, pg_constraint c WHERE classoid = 'pg_constraint'::regclass AND objoid = c.oid AND c.conrelid = 'ctlt13_like'::regclass;
description
@@ -418,6 +423,7 @@ Check constraints:
Statistics objects:
"public.ctlt_all_a_b_stat" ON a, b FROM ctlt_all
"public.ctlt_all_expr_stat" ON (a || b) FROM ctlt_all
+Parallel DML: default
SELECT c.relname, objsubid, description FROM pg_description, pg_index i, pg_class c WHERE classoid = 'pg_class'::regclass AND objoid = i.indexrelid AND c.oid = i.indexrelid AND i.indrelid = 'ctlt_all'::regclass ORDER BY c.relname, objsubid;
relname | objsubid | description
@@ -458,6 +464,7 @@ Check constraints:
Statistics objects:
"public.pg_attrdef_a_b_stat" ON a, b FROM public.pg_attrdef
"public.pg_attrdef_expr_stat" ON (a || b) FROM public.pg_attrdef
+Parallel DML: default
DROP TABLE public.pg_attrdef;
-- Check that LIKE isn't confused when new table masks the old, either
@@ -480,6 +487,7 @@ Check constraints:
Statistics objects:
"ctl_schema.ctlt1_a_b_stat" ON a, b FROM ctlt1
"ctl_schema.ctlt1_expr_stat" ON (a || b) FROM ctlt1
+Parallel DML: default
ROLLBACK;
DROP TABLE ctlt1, ctlt2, ctlt3, ctlt4, ctlt12_storage, ctlt12_comments, ctlt1_inh, ctlt13_inh, ctlt13_like, ctlt_all, ctla, ctlb CASCADE;
diff --git a/src/test/regress/expected/domain.out b/src/test/regress/expected/domain.out
index 411d5c003e..cc0bbe85d1 100644
--- a/src/test/regress/expected/domain.out
+++ b/src/test/regress/expected/domain.out
@@ -276,6 +276,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1.r = (dcomptable.d1).r - 1::double precision, d1.i = (dcomptable.d1).i + 1::double precision
WHERE (dcomptable.d1).i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
@@ -413,6 +414,7 @@ Rules:
silly AS
ON DELETE TO dcomptable DO INSTEAD UPDATE dcomptable SET d1[1].r = dcomptable.d1[1].r - 1::double precision, d1[1].i = dcomptable.d1[1].i + 1::double precision
WHERE dcomptable.d1[1].i > 0::double precision
+Parallel DML: default
drop table dcomptable;
drop type comptype cascade;
diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index 426080ae39..dcbcdb512a 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -735,6 +735,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
\det+
List of foreign tables
@@ -857,6 +858,7 @@ Check constraints:
"ft1_c3_check" CHECK (c3 >= '01-01-1994'::date AND c3 <= '01-31-1994'::date)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- can't change the column type if it's used elsewhere
CREATE TABLE use_ft1_column_type (x ft1);
@@ -1396,6 +1398,7 @@ CREATE FOREIGN TABLE ft2 () INHERITS (fd_pt1)
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1407,6 +1410,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
DROP FOREIGN TABLE ft2;
\d+ fd_pt1
@@ -1416,6 +1420,7 @@ DROP FOREIGN TABLE ft2;
c1 | integer | | not null | | plain | |
c2 | text | | | | extended | |
c3 | date | | | | plain | |
+Parallel DML: default
CREATE FOREIGN TABLE ft2 (
c1 integer NOT NULL,
@@ -1431,6 +1436,7 @@ CREATE FOREIGN TABLE ft2 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
\d+ fd_pt1
@@ -1441,6 +1447,7 @@ ALTER FOREIGN TABLE ft2 INHERIT fd_pt1;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1452,6 +1459,7 @@ Child tables: ft2
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
CREATE TABLE ct3() INHERITS(ft2);
CREATE FOREIGN TABLE ft3 (
@@ -1475,6 +1483,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1484,6 +1493,7 @@ Child tables: ct3,
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1494,6 +1504,7 @@ Inherits: ft2
c3 | date | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- add attributes recursively
ALTER TABLE fd_pt1 ADD COLUMN c4 integer;
@@ -1514,6 +1525,7 @@ ALTER TABLE fd_pt1 ADD COLUMN c8 integer;
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1532,6 +1544,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
\d+ ct3
Table "public.ct3"
@@ -1546,6 +1559,7 @@ Child tables: ct3,
c7 | integer | | not null | | plain | |
c8 | integer | | | | plain | |
Inherits: ft2
+Parallel DML: default
\d+ ft3
Foreign table "public.ft3"
@@ -1561,6 +1575,7 @@ Inherits: ft2
c8 | integer | | | | | plain | |
Server: s0
Inherits: ft2
+Parallel DML: default
-- alter attributes recursively
ALTER TABLE fd_pt1 ALTER COLUMN c4 SET DEFAULT 0;
@@ -1588,6 +1603,7 @@ ALTER TABLE fd_pt1 ALTER COLUMN c8 SET STORAGE EXTERNAL;
c7 | integer | | | | plain | |
c8 | text | | | | external | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1606,6 +1622,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- drop attributes recursively
ALTER TABLE fd_pt1 DROP COLUMN c4;
@@ -1621,6 +1638,7 @@ ALTER TABLE fd_pt1 DROP COLUMN c8;
c2 | text | | | | extended | |
c3 | date | | | | plain | |
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1634,6 +1652,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
-- add constraints recursively
ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk1 CHECK (c1 > 0) NO INHERIT;
@@ -1661,6 +1680,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1676,6 +1696,7 @@ FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
Child tables: ct3,
ft3
+Parallel DML: default
DROP FOREIGN TABLE ft2; -- ERROR
ERROR: cannot drop foreign table ft2 because other objects depend on it
@@ -1708,6 +1729,7 @@ Check constraints:
"fd_pt1chk1" CHECK (c1 > 0) NO INHERIT
"fd_pt1chk2" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1721,6 +1743,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- drop constraints recursively
ALTER TABLE fd_pt1 DROP CONSTRAINT fd_pt1chk1 CASCADE;
@@ -1738,6 +1761,7 @@ ALTER TABLE fd_pt1 ADD CONSTRAINT fd_pt1chk3 CHECK (c2 <> '') NOT VALID;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text) NOT VALID
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1752,6 +1776,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- VALIDATE CONSTRAINT need do nothing on foreign tables
ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
@@ -1765,6 +1790,7 @@ ALTER TABLE fd_pt1 VALIDATE CONSTRAINT fd_pt1chk3;
Check constraints:
"fd_pt1chk3" CHECK (c2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1779,6 +1805,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- changes name of an attribute recursively
ALTER TABLE fd_pt1 RENAME COLUMN c1 TO f1;
@@ -1796,6 +1823,7 @@ ALTER TABLE fd_pt1 RENAME CONSTRAINT fd_pt1chk3 TO f2_check;
Check constraints:
"f2_check" CHECK (f2 <> ''::text)
Child tables: ft2
+Parallel DML: default
\d+ ft2
Foreign table "public.ft2"
@@ -1810,6 +1838,7 @@ Check constraints:
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
Inherits: fd_pt1
+Parallel DML: default
-- TRUNCATE doesn't work on foreign tables, either directly or recursively
TRUNCATE ft2; -- ERROR
@@ -1859,6 +1888,7 @@ CREATE FOREIGN TABLE fd_pt2_1 PARTITION OF fd_pt2 FOR VALUES IN (1)
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1871,6 +1901,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- partition cannot have additional columns
DROP FOREIGN TABLE fd_pt2_1;
@@ -1890,6 +1921,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c4 | character(1) | | | | | extended | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: table "fd_pt2_1" contains column "c4" not found in parent "fd_pt2"
@@ -1904,6 +1936,7 @@ DROP FOREIGN TABLE fd_pt2_1;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
CREATE FOREIGN TABLE fd_pt2_1 (
c1 integer NOT NULL,
@@ -1919,6 +1952,7 @@ CREATE FOREIGN TABLE fd_pt2_1 (
c3 | date | | | | | plain | |
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- no attach partition validation occurs for foreign tables
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
@@ -1931,6 +1965,7 @@ ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1);
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1943,6 +1978,7 @@ Partition of: fd_pt2 FOR VALUES IN (1)
Partition constraint: ((c1 IS NOT NULL) AND (c1 = 1))
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot add column to a partition
ALTER TABLE fd_pt2_1 ADD c4 char;
@@ -1959,6 +1995,7 @@ ALTER TABLE fd_pt2_1 ADD CONSTRAINT p21chk CHECK (c2 <> '');
c3 | date | | | | plain | |
Partition key: LIST (c1)
Partitions: fd_pt2_1 FOR VALUES IN (1)
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -1973,6 +2010,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
-- cannot drop inherited NOT NULL constraint from a partition
ALTER TABLE fd_pt2_1 ALTER c1 DROP NOT NULL;
@@ -1989,6 +2027,7 @@ ALTER TABLE fd_pt2 ALTER c2 SET NOT NULL;
c3 | date | | | | plain | |
Partition key: LIST (c1)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2001,6 +2040,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: column "c2" in child table must be marked NOT NULL
@@ -2019,6 +2059,7 @@ Partition key: LIST (c1)
Check constraints:
"fd_pt2chk1" CHECK (c1 > 0)
Number of partitions: 0
+Parallel DML: default
\d+ fd_pt2_1
Foreign table "public.fd_pt2_1"
@@ -2031,6 +2072,7 @@ Check constraints:
"p21chk" CHECK (c2 <> ''::text)
Server: s0
FDW options: (delimiter ',', quote '"', "be quoted" 'value')
+Parallel DML: default
ALTER TABLE fd_pt2 ATTACH PARTITION fd_pt2_1 FOR VALUES IN (1); -- ERROR
ERROR: child table is missing constraint "fd_pt2chk1"
diff --git a/src/test/regress/expected/identity.out b/src/test/regress/expected/identity.out
index 99811570b7..a5b5a1b24d 100644
--- a/src/test/regress/expected/identity.out
+++ b/src/test/regress/expected/identity.out
@@ -506,6 +506,7 @@ TABLE itest8;
f3 | integer | | not null | generated by default as identity | plain | |
f4 | bigint | | not null | generated always as identity | plain | |
f5 | bigint | | | | plain | |
+Parallel DML: default
\d itest8_f2_seq
Sequence "public.itest8_f2_seq"
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index 2d49e765de..0a720862eb 100644
--- a/src/test/regress/expected/inherit.out
+++ b/src/test/regress/expected/inherit.out
@@ -1059,6 +1059,7 @@ ALTER TABLE inhts RENAME d TO dd;
dd | integer | | | | plain | |
Inherits: inht1,
inhs1
+Parallel DML: default
DROP TABLE inhts;
-- Test for renaming in diamond inheritance
@@ -1079,6 +1080,7 @@ ALTER TABLE inht1 RENAME aa TO aaa;
z | integer | | | | plain | |
Inherits: inht2,
inht3
+Parallel DML: default
CREATE TABLE inhts (d int) INHERITS (inht2, inhs1);
NOTICE: merging multiple inherited definitions of column "b"
@@ -1096,6 +1098,7 @@ ERROR: cannot rename inherited column "b"
d | integer | | | | plain | |
Inherits: inht2,
inhs1
+Parallel DML: default
WITH RECURSIVE r AS (
SELECT 'inht1'::regclass AS inhrelid
@@ -1142,6 +1145,7 @@ CREATE TABLE test_constraints_inh () INHERITS (test_constraints);
Indexes:
"test_constraints_val1_val2_key" UNIQUE CONSTRAINT, btree (val1, val2)
Child tables: test_constraints_inh
+Parallel DML: default
ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key;
\d+ test_constraints
@@ -1152,6 +1156,7 @@ ALTER TABLE ONLY test_constraints DROP CONSTRAINT test_constraints_val1_val2_key
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Child tables: test_constraints_inh
+Parallel DML: default
\d+ test_constraints_inh
Table "public.test_constraints_inh"
@@ -1161,6 +1166,7 @@ Child tables: test_constraints_inh
val1 | character varying | | | | extended | |
val2 | integer | | | | plain | |
Inherits: test_constraints
+Parallel DML: default
DROP TABLE test_constraints_inh;
DROP TABLE test_constraints;
@@ -1177,6 +1183,7 @@ CREATE TABLE test_ex_constraints_inh () INHERITS (test_ex_constraints);
Indexes:
"test_ex_constraints_c_excl" EXCLUDE USING gist (c WITH &&)
Child tables: test_ex_constraints_inh
+Parallel DML: default
ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
\d+ test_ex_constraints
@@ -1185,6 +1192,7 @@ ALTER TABLE test_ex_constraints DROP CONSTRAINT test_ex_constraints_c_excl;
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Child tables: test_ex_constraints_inh
+Parallel DML: default
\d+ test_ex_constraints_inh
Table "public.test_ex_constraints_inh"
@@ -1192,6 +1200,7 @@ Child tables: test_ex_constraints_inh
--------+--------+-----------+----------+---------+---------+--------------+-------------
c | circle | | | | plain | |
Inherits: test_ex_constraints
+Parallel DML: default
DROP TABLE test_ex_constraints_inh;
DROP TABLE test_ex_constraints;
@@ -1208,6 +1217,7 @@ Indexes:
"test_primary_constraints_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "test_foreign_constraints" CONSTRAINT "test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
+Parallel DML: default
\d+ test_foreign_constraints
Table "public.test_foreign_constraints"
@@ -1217,6 +1227,7 @@ Referenced by:
Foreign-key constraints:
"test_foreign_constraints_id1_fkey" FOREIGN KEY (id1) REFERENCES test_primary_constraints(id)
Child tables: test_foreign_constraints_inh
+Parallel DML: default
ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id1_fkey;
\d+ test_foreign_constraints
@@ -1225,6 +1236,7 @@ ALTER TABLE test_foreign_constraints DROP CONSTRAINT test_foreign_constraints_id
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Child tables: test_foreign_constraints_inh
+Parallel DML: default
\d+ test_foreign_constraints_inh
Table "public.test_foreign_constraints_inh"
@@ -1232,6 +1244,7 @@ Child tables: test_foreign_constraints_inh
--------+---------+-----------+----------+---------+---------+--------------+-------------
id1 | integer | | | | plain | |
Inherits: test_foreign_constraints
+Parallel DML: default
DROP TABLE test_foreign_constraints_inh;
DROP TABLE test_foreign_constraints;
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index 5063a3dc22..c8440449c1 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -177,6 +177,7 @@ Rules:
irule3 AS
ON INSERT TO inserttest2 DO INSERT INTO inserttest (f4[1].if1, f4[1].if2[2]) SELECT new.f1,
new.f2
+Parallel DML: default
drop table inserttest2;
drop table inserttest;
@@ -482,6 +483,7 @@ Partitions: part_aa_bb FOR VALUES IN ('aa', 'bb'),
part_null FOR VALUES IN (NULL),
part_xx_yy FOR VALUES IN ('xx', 'yy'), PARTITIONED,
part_default DEFAULT, PARTITIONED
+Parallel DML: default
-- cleanup
drop table range_parted, list_parted;
@@ -497,6 +499,7 @@ create table part_default partition of list_parted default;
a | integer | | | | plain | |
Partition of: list_parted DEFAULT
No partition constraint
+Parallel DML: default
insert into part_default values (null);
insert into part_default values (1);
@@ -888,6 +891,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
mcrparted6_common_ge_10 FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE),
mcrparted7_gt_common_lt_d FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE),
mcrparted8_ge_d FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
+Parallel DML: default
\d+ mcrparted1_lt_b
Table "public.mcrparted1_lt_b"
@@ -897,6 +901,7 @@ Partitions: mcrparted1_lt_b FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVAL
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM (MINVALUE, MINVALUE) TO ('b', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
+Parallel DML: default
\d+ mcrparted2_b
Table "public.mcrparted2_b"
@@ -906,6 +911,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a < 'b'::text))
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('b', MINVALUE) TO ('c', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text) AND (a < 'c'::text))
+Parallel DML: default
\d+ mcrparted3_c_to_common
Table "public.mcrparted3_c_to_common"
@@ -915,6 +921,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'b'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('c', MINVALUE) TO ('common', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text) AND (a < 'common'::text))
+Parallel DML: default
\d+ mcrparted4_common_lt_0
Table "public.mcrparted4_common_lt_0"
@@ -924,6 +931,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'c'::text)
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MINVALUE) TO ('common', 0)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b < 0))
+Parallel DML: default
\d+ mcrparted5_common_0_to_10
Table "public.mcrparted5_common_0_to_10"
@@ -933,6 +941,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 0) TO ('common', 10)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 0) AND (b < 10))
+Parallel DML: default
\d+ mcrparted6_common_ge_10
Table "public.mcrparted6_common_ge_10"
@@ -942,6 +951,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', 10) TO ('common', MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::text) AND (b >= 10))
+Parallel DML: default
\d+ mcrparted7_gt_common_lt_d
Table "public.mcrparted7_gt_common_lt_d"
@@ -951,6 +961,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('common', MAXVALUE) TO ('d', MINVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::text) AND (a < 'd'::text))
+Parallel DML: default
\d+ mcrparted8_ge_d
Table "public.mcrparted8_ge_d"
@@ -960,6 +971,7 @@ Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a > 'common'::te
b | integer | | | | plain | |
Partition of: mcrparted FOR VALUES FROM ('d', MINVALUE) TO (MAXVALUE, MAXVALUE)
Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a >= 'd'::text))
+Parallel DML: default
insert into mcrparted values ('aaa', 0), ('b', 0), ('bz', 10), ('c', -10),
('comm', -10), ('common', -10), ('common', 0), ('common', 10),
diff --git a/src/test/regress/expected/insert_parallel.out b/src/test/regress/expected/insert_parallel.out
new file mode 100644
index 0000000000..304237f619
--- /dev/null
+++ b/src/test/regress/expected/insert_parallel.out
@@ -0,0 +1,713 @@
+--
+-- PARALLEL
+--
+--
+-- START: setup some tables and data needed by the tests.
+--
+-- Setup - index expressions test
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+-- Setup - column default tests
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+--
+-- END: setup some tables and data needed by the tests.
+--
+begin;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_trigger | r
+ pg_proc | r
+ pg_trigger | r
+(4 rows)
+
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_safe
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------
+ Insert on para_insert_p1
+ -> Seq Scan on tenk1
+(2 rows)
+
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+ QUERY PLAN
+--------------------------------------------
+ Insert on para_insert_with_parallel_unsafe
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------------
+ Insert on para_insert_with_parallel_restricted
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_p1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+ QUERY PLAN
+------------------------------------------
+ Insert on para_insert_with_parallel_auto
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+NOTICE: truncate cascades to table "para_insert_f1"
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+ QUERY PLAN
+----------------------------------------------
+ Insert on para_insert_p1
+ -> Gather Merge
+ Workers Planned: 4
+ -> Sort
+ Sort Key: tenk1.unique1
+ -> Parallel Seq Scan on tenk1
+(6 rows)
+
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+ count
+-------
+ 1
+(1 row)
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_data1
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+ Filter: (a = 10)
+(5 rows)
+
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+ data
+------
+ 10
+(1 row)
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId
+-- and within a worker this is not currently supported)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on para_insert_f1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+ count | sum
+-------+----------
+ 10000 | 49995000
+(1 row)
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on test_conflict_table
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+ QUERY PLAN
+------------------------------------------------------
+ Insert on test_conflict_table
+ Conflict Resolution: UPDATE
+ Conflict Arbiter Indexes: test_conflict_table_pkey
+ -> Seq Scan on test_data
+(4 rows)
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_index | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names2');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names2 select * from names;
+ QUERY PLAN
+-------------------------
+ Insert on names2
+ -> Seq Scan on names
+(2 rows)
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | r
+ pg_index | r
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names4');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into names4 select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names4
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+ QUERY PLAN
+----------------------------------------
+ Insert on names5
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names6
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names6 select * from names order by last_name returning *;
+ index | first_name | last_name
+-------+------------+-------------
+ 2 | niels | bohr
+ 1 | albert | einstein
+ 4 | leonhard | euler
+ 8 | richard | feynman
+ 5 | stephen | hawking
+ 6 | isaac | newton
+ 3 | erwin | schrodinger
+ 7 | alan | turing
+(8 rows)
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ QUERY PLAN
+----------------------------------------------
+ Insert on names7
+ -> Gather Merge
+ Workers Planned: 3
+ -> Sort
+ Sort Key: names.last_name
+ -> Parallel Seq Scan on names
+(6 rows)
+
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+ last_name_then_first_name
+---------------------------
+ bohr, niels
+ einstein, albert
+ euler, leonhard
+ feynman, richard
+ hawking, stephen
+ newton, isaac
+ schrodinger, erwin
+ turing, alan
+(8 rows)
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_class | r
+(1 row)
+
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ r
+(1 row)
+
+explain (costs off) insert into temp_names select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on temp_names
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into temp_names select * from names;
+--
+-- Test INSERT with column defaults
+--
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+ QUERY PLAN
+--------------------------------------------
+ Insert on testdef
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on test_data
+(4 rows)
+
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+ a | b | c | d
+----+----+----+----
+ 1 | 2 | 10 | 8
+ 2 | 4 | 10 | 16
+ 3 | 6 | 10 | 24
+ 4 | 8 | 10 | 32
+ 5 | 10 | 10 | 40
+ 6 | 12 | 10 | 48
+ 7 | 14 | 10 | 56
+ 8 | 16 | 10 | 64
+ 9 | 18 | 10 | 72
+ 10 | 20 | 10 | 80
+(10 rows)
+
+truncate testdef;
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+ QUERY PLAN
+-----------------------------
+ Insert on testdef
+ -> Seq Scan on test_data
+(2 rows)
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+alter table parttable1 parallel dml safe;
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+ QUERY PLAN
+----------------------------------------
+ Insert on parttable1
+ -> Gather
+ Workers Planned: 4
+ -> Parallel Seq Scan on tenk1
+(4 rows)
+
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+ count
+-------
+ 5000
+(1 row)
+
+select count(*) from parttable1_2;
+ count
+-------
+ 5000
+(1 row)
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on table_check_b
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+(0 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ s
+(1 row)
+
+explain (costs off) insert into names_with_safe_trigger select * from names;
+ QUERY PLAN
+----------------------------------------
+ Insert on names_with_safe_trigger
+ -> Gather
+ Workers Planned: 3
+ -> Parallel Seq Scan on names
+(4 rows)
+
+insert into names_with_safe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_safe
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+ QUERY PLAN
+-------------------------------------
+ Insert on names_with_unsafe_trigger
+ -> Seq Scan on names
+(2 rows)
+
+insert into names_with_unsafe_trigger select * from names;
+NOTICE: hello from insert_before_trigger_unsafe
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_trigger | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+ QUERY PLAN
+-------------------------------
+ Insert on part_unsafe_trigger
+ -> Seq Scan on tenk1
+(2 rows)
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+create table dom_table_u (x inotnull_u, y int);
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+ pg_class_relname | proparallel
+------------------+-------------
+ pg_proc | u
+ pg_constraint | u
+(2 rows)
+
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+ pg_get_table_max_parallel_dml_hazard
+--------------------------------------
+ u
+(1 row)
+
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+ QUERY PLAN
+-------------------------
+ Insert on dom_table_u
+ -> Seq Scan on tenk1
+(2 rows)
+
+rollback;
+--
+-- Clean up anything not created in the transaction
+--
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+drop function pg_class_relname;
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index 1b2f6bc418..760abca4e8 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2818,6 +2818,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2825,6 +2826,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
--------+----------------+-----------+----------+---------+----------+--------------+-------------
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
+Parallel DML: default
\set HIDE_TABLEAM off
\d+ tbl_heap_psql
@@ -2834,6 +2836,7 @@ CREATE MATERIALIZED VIEW mat_view_heap_psql USING heap_psql AS SELECT f1 from tb
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap_psql
+Parallel DML: default
\d+ tbl_heap
Table "tableam_display.tbl_heap"
@@ -2842,50 +2845,51 @@ Access method: heap_psql
f1 | integer | | | | plain | |
f2 | character(100) | | | | extended | |
Access method: heap
+Parallel DML: default
-- AM is displayed for tables, indexes and materialized views.
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | | default | 0 bytes |
(4 rows)
\dt+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+---------------+-------+----------------------+-------------+---------------+---------+-------------
- tableam_display | tbl_heap | table | regress_display_role | permanent | heap | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+---------------+-------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | tbl_heap | table | regress_display_role | permanent | heap | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | heap_psql | default | 0 bytes |
(2 rows)
\dm+
- List of relations
- Schema | Name | Type | Owner | Persistence | Access method | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Access method | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+---------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | heap_psql | default | 0 bytes |
(1 row)
-- But not for views and sequences.
\dv+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+----------------+------+----------------------+-------------+---------+-------------
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+----------------+------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(1 row)
\set HIDE_TABLEAM on
\d+
- List of relations
- Schema | Name | Type | Owner | Persistence | Size | Description
------------------+--------------------+-------------------+----------------------+-------------+---------+-------------
- tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap | table | regress_display_role | permanent | 0 bytes |
- tableam_display | tbl_heap_psql | table | regress_display_role | permanent | 0 bytes |
- tableam_display | view_heap_psql | view | regress_display_role | permanent | 0 bytes |
+ List of relations
+ Schema | Name | Type | Owner | Persistence | Parallel DML | Size | Description
+-----------------+--------------------+-------------------+----------------------+-------------+--------------+---------+-------------
+ tableam_display | mat_view_heap_psql | materialized view | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | tbl_heap_psql | table | regress_display_role | permanent | default | 0 bytes |
+ tableam_display | view_heap_psql | view | regress_display_role | permanent | default | 0 bytes |
(4 rows)
RESET ROLE;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4a5ef0bc24..ffb498dc88 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -85,6 +85,7 @@ Indexes:
"testpub_tbl2_pkey" PRIMARY KEY, btree (id)
Publications:
"testpub_foralltables"
+Parallel DML: default
\dRp+ testpub_foralltables
Publication testpub_foralltables
@@ -198,6 +199,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\d+ testpub_tbl1
Table "public.testpub_tbl1"
@@ -211,6 +213,7 @@ Publications:
"testpib_ins_trunct"
"testpub_default"
"testpub_fortbl"
+Parallel DML: default
\dRp+ testpub_default
Publication testpub_default
@@ -236,6 +239,7 @@ Indexes:
Publications:
"testpib_ins_trunct"
"testpub_fortbl"
+Parallel DML: default
-- permissions
SET ROLE regress_publication_user2;
diff --git a/src/test/regress/expected/replica_identity.out b/src/test/regress/expected/replica_identity.out
index 79002197a7..482fe4d8c4 100644
--- a/src/test/regress/expected/replica_identity.out
+++ b/src/test/regress/expected/replica_identity.out
@@ -171,6 +171,7 @@ Indexes:
"test_replica_identity_unique_defer" UNIQUE CONSTRAINT, btree (keya, keyb) DEFERRABLE
"test_replica_identity_unique_nondefer" UNIQUE CONSTRAINT, btree (keya, keyb)
Replica Identity: FULL
+Parallel DML: default
ALTER TABLE test_replica_identity REPLICA IDENTITY NOTHING;
SELECT relreplident FROM pg_class WHERE oid = 'test_replica_identity'::regclass;
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 89397e41f0..26ab706515 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -958,6 +958,7 @@ Policies:
Partitions: part_document_fiction FOR VALUES FROM (11) TO (12),
part_document_nonfiction FOR VALUES FROM (99) TO (100),
part_document_satire FOR VALUES FROM (55) TO (56)
+Parallel DML: default
SELECT * FROM pg_policies WHERE schemaname = 'regress_rls_schema' AND tablename like '%part_document%' ORDER BY policyname;
schemaname | tablename | policyname | permissive | roles | cmd | qual | with_check
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2fa00a3c29..ea8a737c1c 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -3180,6 +3180,7 @@ Rules:
r3 AS
ON DELETE TO rules_src DO
NOTIFY rules_src_deletion
+Parallel DML: default
--
-- Ensure an aliased target relation for insert is correctly deparsed.
@@ -3208,6 +3209,7 @@ Rules:
r5 AS
ON UPDATE TO rules_src DO INSTEAD UPDATE rules_log trgt SET tag = 'updated'::text
WHERE trgt.f1 = new.f1
+Parallel DML: default
--
-- Also check multiassignment deparsing.
@@ -3231,6 +3233,7 @@ Rules:
WHERE trgt.f1 = new.f1
RETURNING new.f1,
new.f2
+Parallel DML: default
drop table rule_t1, rule_dest;
--
diff --git a/src/test/regress/expected/stats_ext.out b/src/test/regress/expected/stats_ext.out
index a7f12e989d..06c1d25326 100644
--- a/src/test/regress/expected/stats_ext.out
+++ b/src/test/regress/expected/stats_ext.out
@@ -156,6 +156,7 @@ ALTER STATISTICS ab1_a_b_stats SET STATISTICS -1;
b | integer | | | | plain | |
Statistics objects:
"public.ab1_a_b_stats" ON a, b FROM ab1
+Parallel DML: default
-- partial analyze doesn't build stats either
ANALYZE ab1 (a);
diff --git a/src/test/regress/expected/triggers.out b/src/test/regress/expected/triggers.out
index 5d124cf96f..13e0547302 100644
--- a/src/test/regress/expected/triggers.out
+++ b/src/test/regress/expected/triggers.out
@@ -3483,6 +3483,7 @@ alter trigger parenttrig on parent rename to anothertrig;
Triggers:
parenttrig AFTER INSERT ON child FOR EACH ROW EXECUTE FUNCTION f()
Inherits: parent
+Parallel DML: default
drop table parent, child;
drop function f();
diff --git a/src/test/regress/expected/update.out b/src/test/regress/expected/update.out
index c809f88f54..3b981ae2aa 100644
--- a/src/test/regress/expected/update.out
+++ b/src/test/regress/expected/update.out
@@ -753,6 +753,7 @@ create table part_def partition of range_parted default;
e | character varying | | | | extended | |
Partition of: range_parted DEFAULT
Partition constraint: (NOT ((a IS NOT NULL) AND (b IS NOT NULL) AND (((a = 'a'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'a'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '1'::bigint) AND (b < '10'::bigint)) OR ((a = 'b'::text) AND (b >= '10'::bigint) AND (b < '20'::bigint)) OR ((a = 'b'::text) AND (b >= '20'::bigint) AND (b < '30'::bigint)))))
+Parallel DML: default
insert into range_parted values ('c', 9);
-- ok
diff --git a/src/test/regress/output/tablespace.source b/src/test/regress/output/tablespace.source
index 1bbe7e0323..8d17677072 100644
--- a/src/test/regress/output/tablespace.source
+++ b/src/test/regress/output/tablespace.source
@@ -339,6 +339,7 @@ Indexes:
"part_a_idx" btree (a), tablespace "regress_tblspace"
Partitions: testschema.part1 FOR VALUES IN (1),
testschema.part2 FOR VALUES IN (2)
+Parallel DML: default
\d testschema.part1
Table "testschema.part1"
@@ -358,6 +359,7 @@ Partition of: testschema.part FOR VALUES IN (1)
Partition constraint: ((a IS NOT NULL) AND (a = 1))
Indexes:
"part1_a_idx" btree (a), tablespace "regress_tblspace"
+Parallel DML: default
\d testschema.part_a_idx
Partitioned index "testschema.part_a_idx"
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 7be89178f0..daf0bad4d5 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql psql_crosstab amutils stats_ext collate.linux.utf8
# run by itself so it can run parallel workers
test: select_parallel
test: write_parallel
+test: insert_parallel
# no relation related tests can be put in this group
test: publication subscription
diff --git a/src/test/regress/sql/insert_parallel.sql b/src/test/regress/sql/insert_parallel.sql
new file mode 100644
index 0000000000..65ab8b79d0
--- /dev/null
+++ b/src/test/regress/sql/insert_parallel.sql
@@ -0,0 +1,381 @@
+--
+-- PARALLEL
+--
+
+--
+-- START: setup some tables and data needed by the tests.
+--
+
+-- Setup - index expressions test
+
+create function pg_class_relname(Oid)
+returns name language sql parallel unsafe
+as 'select relname from pg_class where $1 = oid';
+
+-- For testing purposes, we'll mark this function as parallel-unsafe
+create or replace function fullname_parallel_unsafe(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel unsafe;
+
+create or replace function fullname_parallel_restricted(f text, l text) returns text as $$
+ begin
+ return f || l;
+ end;
+$$ language plpgsql immutable parallel restricted;
+
+create table names(index int, first_name text, last_name text);
+create table names2(index int, first_name text, last_name text);
+create index names2_fullname_idx on names2 (fullname_parallel_unsafe(first_name, last_name));
+create table names4(index int, first_name text, last_name text);
+create index names4_fullname_idx on names4 (fullname_parallel_restricted(first_name, last_name));
+
+
+insert into names values
+ (1, 'albert', 'einstein'),
+ (2, 'niels', 'bohr'),
+ (3, 'erwin', 'schrodinger'),
+ (4, 'leonhard', 'euler'),
+ (5, 'stephen', 'hawking'),
+ (6, 'isaac', 'newton'),
+ (7, 'alan', 'turing'),
+ (8, 'richard', 'feynman');
+
+-- Setup - column default tests
+
+create or replace function bdefault_unsafe ()
+returns int language plpgsql parallel unsafe as $$
+begin
+ RETURN 5;
+end $$;
+
+create or replace function cdefault_restricted ()
+returns int language plpgsql parallel restricted as $$
+begin
+ RETURN 10;
+end $$;
+
+create or replace function ddefault_safe ()
+returns int language plpgsql parallel safe as $$
+begin
+ RETURN 20;
+end $$;
+
+create table testdef(a int, b int default bdefault_unsafe(), c int default cdefault_restricted(), d int default ddefault_safe());
+create table test_data(a int);
+insert into test_data select * from generate_series(1,10);
+
+--
+-- END: setup some tables and data needed by the tests.
+--
+
+begin;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+create table para_insert_p1 (
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+);
+
+create table para_insert_f1 (
+ unique1 int4 REFERENCES para_insert_p1(unique1),
+ stringu1 name
+);
+
+create table para_insert_with_parallel_unsafe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml unsafe;
+
+create table para_insert_with_parallel_restricted(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml restricted;
+
+create table para_insert_with_parallel_safe(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml safe;
+
+create table para_insert_with_parallel_auto(
+ unique1 int4 PRIMARY KEY,
+ stringu1 name
+) parallel dml default;
+
+-- Check FK trigger
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('para_insert_f1');
+select pg_get_table_max_parallel_dml_hazard('para_insert_f1');
+
+--
+-- Test INSERT with underlying query.
+-- Set parallel dml safe.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+alter table para_insert_p1 parallel dml safe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+explain (costs off) insert into para_insert_with_parallel_safe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml unsafe.
+-- (should not create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml unsafe;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_unsafe select unique1, stringu1 from tenk1;
+
+--
+-- Set parallel dml restricted.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml restricted;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_restricted select unique1, stringu1 from tenk1;
+
+--
+-- Reset parallel dml.
+-- (should create plan with parallel SELECT)
+--
+alter table para_insert_p1 parallel dml default;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1;
+explain (costs off) insert into para_insert_with_parallel_auto select unique1, stringu1 from tenk1;
+
+--
+-- Test INSERT with ordered underlying query.
+-- (should create plan with parallel SELECT, GatherMerge parent node)
+--
+truncate para_insert_p1 cascade;
+explain (costs off) insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+insert into para_insert_p1 select unique1, stringu1 from tenk1 order by unique1;
+-- select some values to verify that the parallel insert worked
+select count(*), sum(unique1) from para_insert_p1;
+-- verify that the same transaction has been used by all parallel workers
+select count(*) from (select distinct cmin,xmin from para_insert_p1) as dt;
+
+--
+-- Test INSERT with RETURNING clause.
+-- (should create plan with parallel SELECT, Gather parent node)
+--
+create table test_data1(like test_data);
+explain (costs off) insert into test_data1 select * from test_data where a = 10 returning a as data;
+insert into test_data1 select * from test_data where a = 10 returning a as data;
+
+--
+-- Test INSERT into a table with a foreign key.
+-- (Insert into a table with a foreign key is parallel-restricted,
+-- as doing this in a parallel worker would create a new commandId
+-- and within a worker this is not currently supported)
+--
+explain (costs off) insert into para_insert_f1 select unique1, stringu1 from tenk1;
+insert into para_insert_f1 select unique1, stringu1 from tenk1;
+-- select some values to verify that the insert worked
+select count(*), sum(unique1) from para_insert_f1;
+
+--
+-- Test INSERT with ON CONFLICT ... DO UPDATE ...
+-- (should not create a parallel plan)
+--
+create table test_conflict_table(id serial primary key, somedata int);
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data;
+insert into test_conflict_table(id, somedata) select a, a from test_data;
+explain (costs off) insert into test_conflict_table(id, somedata) select a, a from test_data ON CONFLICT(id) DO UPDATE SET somedata = EXCLUDED.somedata + 1;
+
+--
+-- Test INSERT with parallel-unsafe index expression
+-- (should not create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names2');
+select pg_get_table_max_parallel_dml_hazard('names2');
+explain (costs off) insert into names2 select * from names;
+
+--
+-- Test INSERT with parallel-restricted index expression
+-- (should create a parallel plan)
+--
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names4');
+select pg_get_table_max_parallel_dml_hazard('names4');
+explain (costs off) insert into names4 select * from names;
+
+--
+-- Test INSERT with underlying query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names5 (like names);
+explain (costs off) insert into names5 select * from names returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (no projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names6 (like names);
+explain (costs off) insert into names6 select * from names order by last_name returning *;
+insert into names6 select * from names order by last_name returning *;
+
+--
+-- Test INSERT with underlying ordered query - and RETURNING (with projection)
+-- (should create a parallel plan; parallel SELECT)
+--
+create table names7 (like names);
+explain (costs off) insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+insert into names7 select * from names order by last_name returning last_name || ', ' || first_name as last_name_then_first_name;
+
+
+--
+-- Test INSERT into temporary table with underlying query.
+-- (Insert into a temp table is parallel-restricted;
+-- should create a parallel plan; parallel SELECT)
+--
+create temporary table temp_names (like names);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('temp_names');
+select pg_get_table_max_parallel_dml_hazard('temp_names');
+explain (costs off) insert into temp_names select * from names;
+insert into temp_names select * from names;
+
+--
+-- Test INSERT with column defaults
+--
+
+--
+-- Parallel INSERT with unsafe column default, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,c,d) select a,a*4,a*8 from test_data;
+
+--
+-- Parallel INSERT with restricted column default, should use parallel SELECT
+--
+explain (costs off) insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+insert into testdef(a,b,d) select a,a*2,a*8 from test_data;
+select * from testdef order by a;
+truncate testdef;
+
+--
+-- Parallel INSERT with restricted and unsafe column defaults, should not use a parallel plan
+--
+explain (costs off) insert into testdef(a,d) select a,a*8 from test_data;
+
+--
+-- Test INSERT into partition with underlying query.
+--
+create table parttable1 (a int, b name) partition by range (a);
+create table parttable1_1 partition of parttable1 for values from (0) to (5000);
+create table parttable1_2 partition of parttable1 for values from (5000) to (10000);
+
+alter table parttable1 parallel dml safe;
+
+explain (costs off) insert into parttable1 select unique1,stringu1 from tenk1;
+insert into parttable1 select unique1,stringu1 from tenk1;
+select count(*) from parttable1_1;
+select count(*) from parttable1_2;
+
+--
+-- Test table with parallel-unsafe check constraint
+--
+create or replace function check_b_unsafe(b name) returns boolean as $$
+ begin
+ return (b <> 'XXXXXX');
+ end;
+$$ language plpgsql parallel unsafe;
+
+create table table_check_b(a int4, b name check (check_b_unsafe(b)), c name);
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('table_check_b');
+select pg_get_table_max_parallel_dml_hazard('table_check_b');
+explain (costs off) insert into table_check_b(a,b,c) select unique1, unique2, stringu1 from tenk1;
+
+--
+-- Test table with parallel-safe before stmt-level triggers
+-- (should create a parallel SELECT plan; triggers should fire)
+--
+create table names_with_safe_trigger (like names);
+
+create or replace function insert_before_trigger_safe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_safe';
+ return new;
+ end;
+$$ language plpgsql parallel safe;
+create trigger insert_before_trigger_safe before insert on names_with_safe_trigger
+ for each statement execute procedure insert_before_trigger_safe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_safe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_safe_trigger');
+explain (costs off) insert into names_with_safe_trigger select * from names;
+insert into names_with_safe_trigger select * from names;
+
+--
+-- Test table with parallel-unsafe before stmt-level triggers
+-- (should not create a parallel plan; triggers should fire)
+--
+create table names_with_unsafe_trigger (like names);
+create or replace function insert_before_trigger_unsafe() returns trigger as $$
+ begin
+ raise notice 'hello from insert_before_trigger_unsafe';
+ return new;
+ end;
+$$ language plpgsql parallel unsafe;
+create trigger insert_before_trigger_unsafe before insert on names_with_unsafe_trigger
+ for each statement execute procedure insert_before_trigger_unsafe();
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('names_with_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('names_with_unsafe_trigger');
+explain (costs off) insert into names_with_unsafe_trigger select * from names;
+insert into names_with_unsafe_trigger select * from names;
+
+--
+-- Test partition with parallel-unsafe trigger
+-- (should not create a parallel plan)
+--
+create table part_unsafe_trigger (a int4, b name) partition by range (a);
+create table part_unsafe_trigger_1 partition of part_unsafe_trigger for values from (0) to (5000);
+create table part_unsafe_trigger_2 partition of part_unsafe_trigger for values from (5000) to (10000);
+create trigger part_insert_before_trigger_unsafe before insert on part_unsafe_trigger_1
+ for each statement execute procedure insert_before_trigger_unsafe();
+
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('part_unsafe_trigger');
+select pg_get_table_max_parallel_dml_hazard('part_unsafe_trigger');
+explain (costs off) insert into part_unsafe_trigger select unique1, stringu1 from tenk1;
+
+--
+-- Test DOMAIN column with a CHECK constraint
+--
+create function sql_is_distinct_from_u(anyelement, anyelement)
+returns boolean language sql parallel unsafe
+as 'select $1 is distinct from $2 limit 1';
+
+create domain inotnull_u int
+ check (sql_is_distinct_from_u(value, null));
+
+create table dom_table_u (x inotnull_u, y int);
+
+-- Test DOMAIN column with parallel-unsafe CHECK constraint
+select pg_class_relname(classid), proparallel from pg_get_table_parallel_dml_safety('dom_table_u');
+select pg_get_table_max_parallel_dml_hazard('dom_table_u');
+explain (costs off) insert into dom_table_u select unique1, unique2 from tenk1;
+
+rollback;
+
+--
+-- Clean up anything not created in the transaction
+--
+
+drop table names;
+drop index names2_fullname_idx;
+drop table names2;
+drop index names4_fullname_idx;
+drop table names4;
+drop table testdef;
+drop table test_data;
+
+drop function bdefault_unsafe;
+drop function cdefault_restricted;
+drop function ddefault_safe;
+drop function fullname_parallel_unsafe;
+drop function fullname_parallel_restricted;
--
2.18.4
Attachment: v19-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch
From 01bdde01fb66e93928cb84b6aeee7dd31ea9ad83 Mon Sep 17 00:00:00 2001
From: Hou Zhijie <HouZhijie@foxmail.com>
Date: Tue, 3 Aug 2021 14:13:39 +0800
Subject: [PATCH] CREATE-ALTER-TABLE-PARALLEL-DML
Enable users to declare a table's parallel data-modification safety
(DEFAULT/SAFE/RESTRICTED/UNSAFE).
Add a table property that represents the parallel safety of the table
for DML statement execution.
It can be specified as follows:
CREATE TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
ALTER TABLE table_name PARALLEL DML { DEFAULT | UNSAFE | RESTRICTED | SAFE };
This property is recorded in pg_class's relparalleldml column as 'u',
'r', or 's', like pg_proc's proparallel, or as 'd' (the default) if
not set.
If relparalleldml is specified (safe/restricted/unsafe), the planner
assumes that the table, its descendant partitions, and their ancillary
objects all have, at worst, the specified parallel safety. The user is
responsible for the correctness of that declaration.
If relparalleldml is unset or set to DEFAULT, the planner checks the
parallel safety of a non-partitioned table automatically (see the 0004
patch). For a partitioned table, however, the planner assumes that the
table is UNSAFE to modify in parallel mode.
---
src/backend/bootstrap/bootparse.y | 3 +
src/backend/catalog/heap.c | 7 +-
src/backend/catalog/index.c | 2 +
src/backend/catalog/toasting.c | 1 +
src/backend/commands/cluster.c | 1 +
src/backend/commands/createas.c | 1 +
src/backend/commands/sequence.c | 1 +
src/backend/commands/tablecmds.c | 97 +++++++++++++++++++
src/backend/commands/typecmds.c | 1 +
src/backend/commands/view.c | 1 +
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 1 +
src/backend/parser/gram.y | 73 ++++++++++----
src/backend/utils/cache/relcache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 50 ++++++++--
src/bin/pg_dump/pg_dump.h | 1 +
src/bin/psql/describe.c | 71 ++++++++++++--
src/include/catalog/heap.h | 2 +
src/include/catalog/pg_class.h | 3 +
src/include/catalog/pg_proc.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/nodes/primnodes.h | 1 +
src/include/parser/kwlist.h | 1 +
src/include/utils/relcache.h | 3 +-
.../test_ddl_deparse/test_ddl_deparse.c | 3 +
27 files changed, 302 insertions(+), 39 deletions(-)
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 5fcd004e1b..4712536088 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -25,6 +25,7 @@
#include "catalog/pg_authid.h"
#include "catalog/pg_class.h"
#include "catalog/pg_namespace.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/toasting.h"
#include "commands/defrem.h"
@@ -208,6 +209,7 @@ Boot_CreateStmt:
tupdesc,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
true,
@@ -231,6 +233,7 @@ Boot_CreateStmt:
NIL,
RELKIND_RELATION,
RELPERSISTENCE_PERMANENT,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 83746d3fd9..135df961c9 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -302,6 +302,7 @@ heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -404,7 +405,8 @@ heap_create(const char *relname,
shared_relation,
mapped_relation,
relpersistence,
- relkind);
+ relkind,
+ relparalleldml);
/*
* Have the storage manager create the relation's disk file, if needed.
@@ -959,6 +961,7 @@ InsertPgClassTuple(Relation pg_class_desc,
values[Anum_pg_class_relhassubclass - 1] = BoolGetDatum(rd_rel->relhassubclass);
values[Anum_pg_class_relispopulated - 1] = BoolGetDatum(rd_rel->relispopulated);
values[Anum_pg_class_relreplident - 1] = CharGetDatum(rd_rel->relreplident);
+ values[Anum_pg_class_relparalleldml - 1] = CharGetDatum(rd_rel->relparalleldml);
values[Anum_pg_class_relispartition - 1] = BoolGetDatum(rd_rel->relispartition);
values[Anum_pg_class_relrewrite - 1] = ObjectIdGetDatum(rd_rel->relrewrite);
values[Anum_pg_class_relfrozenxid - 1] = TransactionIdGetDatum(rd_rel->relfrozenxid);
@@ -1152,6 +1155,7 @@ heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
@@ -1299,6 +1303,7 @@ heap_create_with_catalog(const char *relname,
tupdesc,
relkind,
relpersistence,
+ relparalleldml,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26bfa74ce7..18f3a51686 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_opclass.h"
#include "catalog/pg_operator.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
#include "catalog/pg_type.h"
@@ -935,6 +936,7 @@ index_create(Relation heapRelation,
indexTupDesc,
relkind,
relpersistence,
+ PROPARALLEL_DEFAULT,
shared_relation,
mapped_relation,
allow_system_table_mods,
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 147b5abc19..b32d2d4132 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -251,6 +251,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
NIL,
RELKIND_TOASTVALUE,
rel->rd_rel->relpersistence,
+ rel->rd_rel->relparalleldml,
shared_relation,
mapped_relation,
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index b3d8b6deb0..d1a7603d90 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -693,6 +693,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, Oid NewAccessMethod,
NIL,
RELKIND_RELATION,
relpersistence,
+ OldHeap->rd_rel->relparalleldml,
false,
RelationIsMapped(OldHeap),
ONCOMMIT_NOOP,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 0982851715..7607b91ae8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -107,6 +107,7 @@ create_ctas_internal(List *attrList, IntoClause *into)
create->options = into->options;
create->oncommit = into->onCommit;
create->tablespacename = into->tableSpaceName;
+ create->paralleldmlsafety = into->paralleldmlsafety;
create->if_not_exists = false;
create->accessMethod = into->accessMethod;
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..384770050a 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -211,6 +211,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
stmt->options = NIL;
stmt->oncommit = ONCOMMIT_NOOP;
stmt->tablespacename = NULL;
+ stmt->paralleldmlsafety = NULL;
stmt->if_not_exists = seq->if_not_exists;
address = DefineRelation(stmt, RELKIND_SEQUENCE, seq->ownerId, NULL, NULL);
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index fcd778c62a..5968252648 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -40,6 +40,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_proc.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_trigger.h"
@@ -603,6 +604,7 @@ static void refuseDupeIndexAttach(Relation parentIdx, Relation partIdx,
static List *GetParentedForeignKeyRefs(Relation partition);
static void ATDetachCheckNoForeignKeyRefs(Relation partition);
static char GetAttributeCompression(Oid atttypid, char *compression);
+static void ATExecParallelDMLSafety(Relation rel, Node *def);
/* ----------------------------------------------------------------
@@ -648,6 +650,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
LOCKMODE parentLockmode;
const char *accessMethod = NULL;
Oid accessMethodId = InvalidOid;
+ char relparalleldml = PROPARALLEL_DEFAULT;
/*
* Truncate relname to appropriate length (probably a waste of time, as
@@ -926,6 +929,32 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
if (accessMethod != NULL)
accessMethodId = get_table_am_oid(accessMethod, false);
+ if (stmt->paralleldmlsafety != NULL)
+ {
+ if (strcmp(stmt->paralleldmlsafety, "safe") == 0)
+ {
+ if (relkind == RELKIND_FOREIGN_TABLE ||
+ stmt->relation->relpersistence == RELPERSISTENCE_TEMP)
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ relname),
+ errdetail_relkind_not_supported(relkind)));
+
+ relparalleldml = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(stmt->paralleldmlsafety, "restricted") == 0)
+ relparalleldml = PROPARALLEL_RESTRICTED;
+ else if (strcmp(stmt->paralleldmlsafety, "unsafe") == 0)
+ relparalleldml = PROPARALLEL_UNSAFE;
+ else if (strcmp(stmt->paralleldmlsafety, "default") == 0)
+ relparalleldml = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
/*
* Create the relation. Inherited defaults and constraints are passed in
* for immediate handling --- since they don't need parsing, they can be
@@ -944,6 +973,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
old_constraints),
relkind,
stmt->relation->relpersistence,
+ relparalleldml,
false,
false,
stmt->oncommit,
@@ -4187,6 +4217,7 @@ AlterTableGetLockLevel(List *cmds)
case AT_SetIdentity:
case AT_DropExpression:
case AT_SetCompression:
+ case AT_ParallelDMLSafety:
cmd_lockmode = AccessExclusiveLock;
break;
@@ -4737,6 +4768,11 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
/* No command-specific prep needed */
pass = AT_PASS_MISC;
break;
+ case AT_ParallelDMLSafety:
+ ATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_FOREIGN_TABLE);
+ /* No command-specific prep needed */
+ pass = AT_PASS_MISC;
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -5142,6 +5178,9 @@ ATExecCmd(List **wqueue, AlteredTableInfo *tab,
case AT_DetachPartitionFinalize:
ATExecDetachPartitionFinalize(rel, ((PartitionCmd *) cmd->def)->name);
break;
+ case AT_ParallelDMLSafety:
+ ATExecParallelDMLSafety(rel, cmd->def);
+ break;
default: /* oops */
elog(ERROR, "unrecognized alter table type: %d",
(int) cmd->subtype);
@@ -6113,6 +6152,8 @@ alter_table_type_to_string(AlterTableType cmdtype)
return "ALTER COLUMN ... DROP IDENTITY";
case AT_ReAddStatistics:
return NULL; /* not real grammar */
+ case AT_ParallelDMLSafety:
+ return "PARALLEL DML SAFETY";
}
return NULL;
@@ -18773,3 +18814,59 @@ GetAttributeCompression(Oid atttypid, char *compression)
return cmethod;
}
+
+static void
+ATExecParallelDMLSafety(Relation rel, Node *def)
+{
+ Relation pg_class;
+ Oid relid;
+ HeapTuple tuple;
+ char relparallel = PROPARALLEL_DEFAULT;
+ char *parallel = strVal(def);
+
+ if (parallel)
+ {
+ if (strcmp(parallel, "safe") == 0)
+ {
+ /*
+ * We can't support table modification in a parallel worker if it's
+ * a foreign table/partition (no FDW API for supporting parallel
+ * access) or a temporary table.
+ */
+ if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||
+ RelationUsesLocalBuffers(rel))
+ ereport(ERROR,
+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ errmsg("cannot perform parallel data modification on relation \"%s\"",
+ RelationGetRelationName(rel)),
+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));
+
+ relparallel = PROPARALLEL_SAFE;
+ }
+ else if (strcmp(parallel, "restricted") == 0)
+ relparallel = PROPARALLEL_RESTRICTED;
+ else if (strcmp(parallel, "unsafe") == 0)
+ relparallel = PROPARALLEL_UNSAFE;
+ else if (strcmp(parallel, "default") == 0)
+ relparallel = PROPARALLEL_DEFAULT;
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("parameter \"parallel dml\" must be SAFE, RESTRICTED, UNSAFE or DEFAULT")));
+ }
+
+ relid = RelationGetRelid(rel);
+
+ pg_class = table_open(RelationRelationId, RowExclusiveLock);
+
+ tuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));
+
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", relid);
+
+ ((Form_pg_class) GETSTRUCT(tuple))->relparalleldml = relparallel;
+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);
+
+ table_close(pg_class, RowExclusiveLock);
+ heap_freetuple(tuple);
+}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 93eeff950b..a2f06c3e79 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2525,6 +2525,7 @@ DefineCompositeType(RangeVar *typevar, List *coldeflist)
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
index 4df05a0b33..65f33a95d8 100644
--- a/src/backend/commands/view.c
+++ b/src/backend/commands/view.c
@@ -227,6 +227,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
createStmt->options = options;
createStmt->oncommit = ONCOMMIT_NOOP;
createStmt->tablespacename = NULL;
+ createStmt->paralleldmlsafety = NULL;
createStmt->if_not_exists = false;
/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..df41165c5f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -3534,6 +3534,7 @@ CopyCreateStmtFields(const CreateStmt *from, CreateStmt *newnode)
COPY_SCALAR_FIELD(oncommit);
COPY_STRING_FIELD(tablespacename);
COPY_STRING_FIELD(accessMethod);
+ COPY_STRING_FIELD(paralleldmlsafety);
COPY_SCALAR_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..67b1966f18 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -146,6 +146,7 @@ _equalIntoClause(const IntoClause *a, const IntoClause *b)
COMPARE_NODE_FIELD(options);
COMPARE_SCALAR_FIELD(onCommit);
COMPARE_STRING_FIELD(tableSpaceName);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_NODE_FIELD(viewQuery);
COMPARE_SCALAR_FIELD(skipData);
@@ -1292,6 +1293,7 @@ _equalCreateStmt(const CreateStmt *a, const CreateStmt *b)
COMPARE_SCALAR_FIELD(oncommit);
COMPARE_STRING_FIELD(tablespacename);
COMPARE_STRING_FIELD(accessMethod);
+ COMPARE_STRING_FIELD(paralleldmlsafety);
COMPARE_SCALAR_FIELD(if_not_exists);
return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 48202d2232..fdc5b63c28 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1107,6 +1107,7 @@ _outIntoClause(StringInfo str, const IntoClause *node)
WRITE_NODE_FIELD(options);
WRITE_ENUM_FIELD(onCommit, OnCommitAction);
WRITE_STRING_FIELD(tableSpaceName);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_NODE_FIELD(viewQuery);
WRITE_BOOL_FIELD(skipData);
}
@@ -2714,6 +2715,7 @@ _outCreateStmtInfo(StringInfo str, const CreateStmt *node)
WRITE_ENUM_FIELD(oncommit, OnCommitAction);
WRITE_STRING_FIELD(tablespacename);
WRITE_STRING_FIELD(accessMethod);
+ WRITE_STRING_FIELD(paralleldmlsafety);
WRITE_BOOL_FIELD(if_not_exists);
}
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 77d082d8b4..ba725cb290 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -563,6 +563,7 @@ _readIntoClause(void)
READ_NODE_FIELD(options);
READ_ENUM_FIELD(onCommit, OnCommitAction);
READ_STRING_FIELD(tableSpaceName);
+ READ_STRING_FIELD(paralleldmlsafety);
READ_NODE_FIELD(viewQuery);
READ_BOOL_FIELD(skipData);
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..f74a7cac60 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -609,7 +609,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <partboundspec> PartitionBoundSpec
%type <list> hash_partbound
%type <defelt> hash_partbound_elem
-
+%type <str> ParallelDMLSafety
/*
* Non-keyword token types. These are hard-wired into the "flex" lexer.
@@ -654,7 +654,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
DATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS
DEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC
- DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P
+ DETACH DICTIONARY DISABLE_P DISCARD DISTINCT DML DO DOCUMENT_P DOMAIN_P
DOUBLE_P DROP
EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT
@@ -2691,6 +2691,21 @@ alter_table_cmd:
n->subtype = AT_NoForceRowSecurity;
$$ = (Node *)n;
}
+ /* ALTER TABLE <name> PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
+ | PARALLEL DML ColId
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString($3);
+ $$ = (Node *)n;
+ }
+ | PARALLEL DML DEFAULT
+ {
+ AlterTableCmd *n = makeNode(AlterTableCmd);
+ n->subtype = AT_ParallelDMLSafety;
+ n->def = (Node *)makeString("default");
+ $$ = (Node *)n;
+ }
| alter_generic_options
{
AlterTableCmd *n = makeNode(AlterTableCmd);
@@ -3276,7 +3291,7 @@ copy_generic_opt_arg_list_item:
CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
OptInherit OptPartitionSpec table_access_method_clause OptWith
- OnCommitOption OptTableSpace
+ OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3290,12 +3305,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $11;
n->oncommit = $12;
n->tablespacename = $13;
+ n->paralleldmlsafety = $14;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name '('
OptTableElementList ')' OptInherit OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3309,12 +3325,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $14;
n->oncommit = $15;
n->tablespacename = $16;
+ n->paralleldmlsafety = $17;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3329,12 +3346,13 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $10;
n->oncommit = $11;
n->tablespacename = $12;
+ n->paralleldmlsafety = $13;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name OF any_name
OptTypedTableElementList OptPartitionSpec table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3349,12 +3367,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $13;
n->oncommit = $14;
n->tablespacename = $15;
+ n->paralleldmlsafety = $16;
n->if_not_exists = true;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE qualified_name PARTITION OF qualified_name
OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$4->relpersistence = $2;
@@ -3369,12 +3389,14 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $12;
n->oncommit = $13;
n->tablespacename = $14;
+ n->paralleldmlsafety = $15;
n->if_not_exists = false;
$$ = (Node *)n;
}
| CREATE OptTemp TABLE IF_P NOT EXISTS qualified_name PARTITION OF
qualified_name OptTypedTableElementList PartitionBoundSpec OptPartitionSpec
table_access_method_clause OptWith OnCommitOption OptTableSpace
+ ParallelDMLSafety
{
CreateStmt *n = makeNode(CreateStmt);
$7->relpersistence = $2;
@@ -3389,6 +3411,7 @@ CreateStmt: CREATE OptTemp TABLE qualified_name '(' OptTableElementList ')'
n->options = $15;
n->oncommit = $16;
n->tablespacename = $17;
+ n->paralleldmlsafety = $18;
n->if_not_exists = true;
$$ = (Node *)n;
}
@@ -4089,6 +4112,11 @@ OptTableSpace: TABLESPACE name { $$ = $2; }
| /*EMPTY*/ { $$ = NULL; }
;
+ParallelDMLSafety: PARALLEL DML name { $$ = $3; }
+ | PARALLEL DML DEFAULT { $$ = pstrdup("default"); }
+ | /*EMPTY*/ { $$ = NULL; }
+ ;
+
OptConsTableSpace: USING INDEX TABLESPACE name { $$ = $4; }
| /*EMPTY*/ { $$ = NULL; }
;
@@ -4236,7 +4264,7 @@ CreateAsStmt:
create_as_target:
qualified_name opt_column_list table_access_method_clause
- OptWith OnCommitOption OptTableSpace
+ OptWith OnCommitOption OptTableSpace ParallelDMLSafety
{
$$ = makeNode(IntoClause);
$$->rel = $1;
@@ -4245,6 +4273,7 @@ create_as_target:
$$->options = $4;
$$->onCommit = $5;
$$->tableSpaceName = $6;
+ $$->paralleldmlsafety = $7;
$$->viewQuery = NULL;
$$->skipData = false; /* might get changed later */
}
@@ -5024,7 +5053,7 @@ AlterForeignServerStmt: ALTER SERVER name foreign_server_version alter_generic_o
CreateForeignTableStmt:
CREATE FOREIGN TABLE qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5036,15 +5065,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $9;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $10;
- n->options = $11;
+ n->servername = $11;
+ n->options = $12;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
'(' OptTableElementList ')'
- OptInherit SERVER name create_generic_options
+ OptInherit ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5056,15 +5086,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $12;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $13;
- n->options = $14;
+ n->servername = $14;
+ n->options = $15;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$4->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5077,15 +5108,16 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $10;
n->base.if_not_exists = false;
/* FDW-specific data */
- n->servername = $11;
- n->options = $12;
+ n->servername = $12;
+ n->options = $13;
$$ = (Node *) n;
}
| CREATE FOREIGN TABLE IF_P NOT EXISTS qualified_name
PARTITION OF qualified_name OptTypedTableElementList PartitionBoundSpec
- SERVER name create_generic_options
+ ParallelDMLSafety SERVER name create_generic_options
{
CreateForeignTableStmt *n = makeNode(CreateForeignTableStmt);
$7->relpersistence = RELPERSISTENCE_PERMANENT;
@@ -5098,10 +5130,11 @@ CreateForeignTableStmt:
n->base.options = NIL;
n->base.oncommit = ONCOMMIT_NOOP;
n->base.tablespacename = NULL;
+ n->base.paralleldmlsafety = $13;
n->base.if_not_exists = true;
/* FDW-specific data */
- n->servername = $14;
- n->options = $15;
+ n->servername = $15;
+ n->options = $16;
$$ = (Node *) n;
}
;
@@ -15547,6 +15580,7 @@ unreserved_keyword:
| DICTIONARY
| DISABLE_P
| DISCARD
+ | DML
| DOCUMENT_P
| DOMAIN_P
| DOUBLE_P
@@ -16087,6 +16121,7 @@ bare_label_keyword:
| DISABLE_P
| DISCARD
| DISTINCT
+ | DML
| DO
| DOCUMENT_P
| DOMAIN_P
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..70d8ecb1dd 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -1873,6 +1873,7 @@ formrdesc(const char *relationName, Oid relationReltype,
relation->rd_rel->relkind = RELKIND_RELATION;
relation->rd_rel->relnatts = (int16) natts;
relation->rd_rel->relam = HEAP_TABLE_AM_OID;
+ relation->rd_rel->relparalleldml = PROPARALLEL_DEFAULT;
/*
* initialize attribute tuple form
@@ -3359,7 +3360,8 @@ RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind)
+ char relkind,
+ char relparalleldml)
{
Relation rel;
MemoryContext oldcxt;
@@ -3509,6 +3511,8 @@ RelationBuildLocalRelation(const char *relname,
else
rel->rd_rel->relreplident = REPLICA_IDENTITY_NOTHING;
+ rel->rd_rel->relparalleldml = relparalleldml;
+
/*
* Insert relation physical and logical identifiers (OIDs) into the right
* places. For a mapped relation, we set relfilenode to zero and rely on
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 90ac445bcd..5165202e84 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6253,6 +6253,7 @@ getTables(Archive *fout, int *numTables)
int i_relpersistence;
int i_relispopulated;
int i_relreplident;
+ int i_relparalleldml;
int i_owning_tab;
int i_owning_col;
int i_reltablespace;
@@ -6358,7 +6359,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, am.amname, "
+ "c.relreplident, c.relparalleldml, c.relpages, am.amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
"ELSE 0 END AS foreignserver, "
@@ -6450,7 +6451,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6503,7 +6504,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "c.relreplident, c.relpages, "
+ "c.relreplident, c.relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6556,7 +6557,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"tc.relminmxid AS tminmxid, "
"c.relpersistence, c.relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6609,7 +6610,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"c.relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"CASE WHEN c.relkind = 'f' THEN "
"(SELECT ftserver FROM pg_catalog.pg_foreign_table WHERE ftrelid = c.oid) "
@@ -6660,7 +6661,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, "
@@ -6708,7 +6709,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6756,7 +6757,7 @@ getTables(Archive *fout, int *numTables)
"tc.relfrozenxid AS tfrozenxid, "
"0 AS tminmxid, "
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, c.relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, c.relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6803,7 +6804,7 @@ getTables(Archive *fout, int *numTables)
"0 AS toid, "
"0 AS tfrozenxid, 0 AS tminmxid,"
"'p' AS relpersistence, 't' as relispopulated, "
- "'d' AS relreplident, relpages, "
+ "'d' AS relreplident, 'd' AS relparalleldml, relpages, "
"NULL AS amname, "
"NULL AS foreignserver, "
"NULL AS reloftype, "
@@ -6872,6 +6873,7 @@ getTables(Archive *fout, int *numTables)
i_relpersistence = PQfnumber(res, "relpersistence");
i_relispopulated = PQfnumber(res, "relispopulated");
i_relreplident = PQfnumber(res, "relreplident");
+ i_relparalleldml = PQfnumber(res, "relparalleldml");
i_relpages = PQfnumber(res, "relpages");
i_foreignserver = PQfnumber(res, "foreignserver");
i_owning_tab = PQfnumber(res, "owning_tab");
@@ -6927,6 +6929,7 @@ getTables(Archive *fout, int *numTables)
tblinfo[i].hasoids = (strcmp(PQgetvalue(res, i, i_relhasoids), "t") == 0);
tblinfo[i].relispopulated = (strcmp(PQgetvalue(res, i, i_relispopulated), "t") == 0);
tblinfo[i].relreplident = *(PQgetvalue(res, i, i_relreplident));
+ tblinfo[i].relparalleldml = *(PQgetvalue(res, i, i_relparalleldml));
tblinfo[i].relpages = atoi(PQgetvalue(res, i, i_relpages));
tblinfo[i].frozenxid = atooid(PQgetvalue(res, i, i_relfrozenxid));
tblinfo[i].minmxid = atooid(PQgetvalue(res, i, i_relminmxid));
@@ -16555,6 +16558,35 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
}
}
+ if (tbinfo->relkind == RELKIND_RELATION ||
+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE ||
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+ {
+ appendPQExpBuffer(q, "\nALTER %sTABLE %s PARALLEL DML ",
+ tbinfo->relkind == RELKIND_FOREIGN_TABLE ? "FOREIGN " : "",
+ qualrelname);
+
+ switch (tbinfo->relparalleldml)
+ {
+ case 's':
+ appendPQExpBuffer(q, "SAFE;\n");
+ break;
+ case 'r':
+ appendPQExpBuffer(q, "RESTRICTED;\n");
+ break;
+ case 'u':
+ appendPQExpBuffer(q, "UNSAFE;\n");
+ break;
+ case 'd':
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ default:
+ /* should not reach here */
+ appendPQExpBuffer(q, "DEFAULT;\n");
+ break;
+ }
+ }
+
if (tbinfo->forcerowsec)
appendPQExpBuffer(q, "\nALTER TABLE ONLY %s FORCE ROW LEVEL SECURITY;\n",
qualrelname);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index f5e170e0db..8175a0bc82 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -270,6 +270,7 @@ typedef struct _tableInfo
char relpersistence; /* relation persistence */
bool relispopulated; /* relation is populated */
char relreplident; /* replica identifier */
+ char relparalleldml; /* parallel safety of dml on the relation */
char *reltablespace; /* relation tablespace */
char *reloptions; /* options specified by WITH (...) */
char *checkoption; /* WITH CHECK OPTION, if any */
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 8333558bda..f896fe1793 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1656,6 +1656,7 @@ describeOneTableDetails(const char *schemaname,
char *reloftype;
char relpersistence;
char relreplident;
+ char relparalleldml;
char *relam;
} tableinfo;
bool show_column_details = false;
@@ -1669,7 +1670,25 @@ describeOneTableDetails(const char *schemaname,
initPQExpBuffer(&tmpbuf);
/* Get general table info */
- if (pset.sversion >= 120000)
+ if (pset.sversion >= 150000)
+ {
+ printfPQExpBuffer(&buf,
+ "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
+ "c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, "
+ "false AS relhasoids, c.relispartition, %s, c.reltablespace, "
+ "CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, "
+ "c.relpersistence, c.relreplident, am.amname, c.relparalleldml\n"
+ "FROM pg_catalog.pg_class c\n "
+ "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)\n"
+ "LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)\n"
+ "WHERE c.oid = '%s';",
+ (verbose ?
+ "pg_catalog.array_to_string(c.reloptions || "
+ "array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')\n"
+ : "''"),
+ oid);
+ }
+ else if (pset.sversion >= 120000)
{
printfPQExpBuffer(&buf,
"SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, "
@@ -1853,6 +1872,8 @@ describeOneTableDetails(const char *schemaname,
(char *) NULL : pg_strdup(PQgetvalue(res, 0, 14));
else
tableinfo.relam = NULL;
+ tableinfo.relparalleldml = (pset.sversion >= 150000) ?
+ *(PQgetvalue(res, 0, 15)) : 0;
PQclear(res);
res = NULL;
@@ -3630,6 +3651,21 @@ describeOneTableDetails(const char *schemaname,
printfPQExpBuffer(&buf, _("Access method: %s"), tableinfo.relam);
printTableAddFooter(&cont, buf.data);
}
+
+ if (verbose &&
+ (tableinfo.relkind == RELKIND_RELATION ||
+ tableinfo.relkind == RELKIND_PARTITIONED_TABLE ||
+ tableinfo.relkind == RELKIND_FOREIGN_TABLE) &&
+ tableinfo.relparalleldml != 0)
+ {
+ printfPQExpBuffer(&buf, _("Parallel DML: %s"),
+ tableinfo.relparalleldml == 'd' ? "default" :
+ tableinfo.relparalleldml == 'u' ? "unsafe" :
+ tableinfo.relparalleldml == 'r' ? "restricted" :
+ tableinfo.relparalleldml == 's' ? "safe" :
+ "???");
+ printTableAddFooter(&cont, buf.data);
+ }
}
/* reloptions, if verbose */
@@ -4005,7 +4041,7 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
PGresult *res;
printQueryOpt myopt = pset.popt;
int cols_so_far;
- bool translate_columns[] = {false, false, true, false, false, false, false, false, false};
+ bool translate_columns[] = {false, false, true, false, false, false, false, false, false, false};
/* If tabtypes is empty, we default to \dtvmsE (but see also command.c) */
if (!(showTables || showIndexes || showViews || showMatViews || showSeq || showForeign))
@@ -4073,22 +4109,43 @@ listTables(const char *tabtypes, const char *pattern, bool verbose, bool showSys
gettext_noop("unlogged"),
gettext_noop("Persistence"));
translate_columns[cols_so_far] = true;
+ cols_so_far++;
}
- /*
- * We don't bother to count cols_so_far below here, as there's no need
- * to; this might change with future additions to the output columns.
- */
-
/*
* Access methods exist for tables, materialized views and indexes.
* This has been introduced in PostgreSQL 12 for tables.
*/
if (pset.sversion >= 120000 && !pset.hide_tableam &&
(showTables || showMatViews || showIndexes))
+ {
appendPQExpBuffer(&buf,
",\n am.amname as \"%s\"",
gettext_noop("Access method"));
+ cols_so_far++;
+ }
+
+ /*
+	 * Show whether DML on the relation is marked parallel default ('d'),
+	 * unsafe ('u'), restricted ('r'), or safe ('s').
+ * This has been introduced in PostgreSQL 15 for tables.
+ */
+ if (pset.sversion >= 150000)
+ {
+ appendPQExpBuffer(&buf,
+ ",\n CASE c.relparalleldml WHEN 'd' THEN '%s' WHEN 'u' THEN '%s' WHEN 'r' THEN '%s' WHEN 's' THEN '%s' END as \"%s\"",
+ gettext_noop("default"),
+ gettext_noop("unsafe"),
+ gettext_noop("restricted"),
+ gettext_noop("safe"),
+ gettext_noop("Parallel DML"));
+ translate_columns[cols_so_far] = true;
+ }
+
+ /*
+ * We don't bother to count cols_so_far below here, as there's no need
+ * to; this might change with future additions to the output columns.
+ */
/*
* As of PostgreSQL 9.0, use pg_table_size() to show a more accurate
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index 6ce480b49c..b59975919b 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -55,6 +55,7 @@ extern Relation heap_create(const char *relname,
TupleDesc tupDesc,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
bool allow_system_table_mods,
@@ -73,6 +74,7 @@ extern Oid heap_create_with_catalog(const char *relname,
List *cooked_constraints,
char relkind,
char relpersistence,
+ char relparalleldml,
bool shared_relation,
bool mapped_relation,
OnCommitAction oncommit,
diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h
index fef9945ed8..244eac6bd8 100644
--- a/src/include/catalog/pg_class.h
+++ b/src/include/catalog/pg_class.h
@@ -116,6 +116,9 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat
/* see REPLICA_IDENTITY_xxx constants */
char relreplident BKI_DEFAULT(n);
+ /* parallel safety of the dml on the relation */
+ char relparalleldml BKI_DEFAULT(d);
+
/* is relation a partition? */
bool relispartition BKI_DEFAULT(f);
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index b33b8b0134..cd52c0e254 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -171,6 +171,8 @@ DECLARE_UNIQUE_INDEX(pg_proc_proname_args_nsp_index, 2691, ProcedureNameArgsNspI
#define PROPARALLEL_RESTRICTED 'r' /* can run in parallel leader only */
#define PROPARALLEL_UNSAFE 'u' /* banned while in parallel mode */
+#define PROPARALLEL_DEFAULT 'd' /* only used for parallel dml safety */
+
/*
* Symbolic values for proargmodes column. Note that these must agree with
* the FunctionParameterMode enum in parsenodes.h; we declare them here to
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..0352e41c6e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -1934,7 +1934,8 @@ typedef enum AlterTableType
AT_AddIdentity, /* ADD IDENTITY */
AT_SetIdentity, /* SET identity column options */
AT_DropIdentity, /* DROP IDENTITY */
- AT_ReAddStatistics /* internal to commands/tablecmds.c */
+ AT_ReAddStatistics, /* internal to commands/tablecmds.c */
+ AT_ParallelDMLSafety /* PARALLEL DML SAFE/RESTRICTED/UNSAFE/DEFAULT */
} AlterTableType;
typedef struct ReplicaIdentityStmt
@@ -2180,6 +2181,7 @@ typedef struct CreateStmt
OnCommitAction oncommit; /* what do we do at COMMIT? */
char *tablespacename; /* table space to use, or NULL */
char *accessMethod; /* table access method */
+ char *paralleldmlsafety; /* parallel dml safety */
bool if_not_exists; /* just do nothing if it already exists? */
} CreateStmt;
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index c04282f91f..6e679d9f97 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -115,6 +115,7 @@ typedef struct IntoClause
List *options; /* options from WITH clause */
OnCommitAction onCommit; /* what do we do at COMMIT? */
char *tableSpaceName; /* table space to use, or NULL */
+ char *paralleldmlsafety; /* parallel dml safety */
Node *viewQuery; /* materialized view's SELECT query */
bool skipData; /* true for WITH NO DATA */
} IntoClause;
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index f836acf876..05222faccd 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -139,6 +139,7 @@ PG_KEYWORD("dictionary", DICTIONARY, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("disable", DISABLE_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("discard", DISCARD, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("distinct", DISTINCT, RESERVED_KEYWORD, BARE_LABEL)
+PG_KEYWORD("dml", DML, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("do", DO, RESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("document", DOCUMENT_P, UNRESERVED_KEYWORD, BARE_LABEL)
PG_KEYWORD("domain", DOMAIN_P, UNRESERVED_KEYWORD, BARE_LABEL)
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index f772855ac6..5ea225ac2d 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -108,7 +108,8 @@ extern Relation RelationBuildLocalRelation(const char *relname,
bool shared_relation,
bool mapped_relation,
char relpersistence,
- char relkind);
+ char relkind,
+ char relparalleldml);
/*
* Routines to manage assignment of new relfilenode to a relation
diff --git a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
index 1bae1e5438..e1f5678eef 100644
--- a/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
+++ b/src/test/modules/test_ddl_deparse/test_ddl_deparse.c
@@ -276,6 +276,9 @@ get_altertable_subcmdtypes(PG_FUNCTION_ARGS)
case AT_NoForceRowSecurity:
strtype = "NO FORCE ROW SECURITY";
break;
+ case AT_ParallelDMLSafety:
+ strtype = "PARALLEL DML SAFETY";
+ break;
case AT_GenericOptions:
strtype = "SET OPTIONS";
break;
--
2.27.0
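For readers skimming the patch, here is a minimal sketch of the SQL surface it adds, inferred from the grammar keyword (`DML`), the `CreateStmt.paralleldmlsafety` field, and the `ALTER TABLE ... PARALLEL DML` statements that pg_dump emits above. The table names are hypothetical, and the exact clause placement in CREATE TABLE may differ from this sketch:

```sql
-- Mark DML on a table as parallel safe at creation time
CREATE TABLE t1 (a int) PARALLEL DML SAFE;

-- Change the marking later; DEFAULT ('d' in pg_class.relparalleldml)
-- leaves the decision to the system
ALTER TABLE t1 PARALLEL DML UNSAFE;
ALTER TABLE t1 PARALLEL DML DEFAULT;

-- Foreign tables take the same clause, per the pg_dump hunk
ALTER FOREIGN TABLE ft1 PARALLEL DML RESTRICTED;
```

The marking then shows up in `\d+` output as a "Parallel DML" footer and in `\dt+` as a "Parallel DML" column, per the describe.c hunks.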
Hi,
On Thu, Sep 09, 2021 at 02:12:08AM +0000, houzj.fnst@fujitsu.com wrote:
Attached a new version of the patch set which removes the workaround patch.
This version of the patchset doesn't apply anymore:
http://cfbot.cputube.org/patch_36_3143.log
=== Applying patches on top of PostgreSQL commit ID a18b6d2dc288dfa6e7905ede1d4462edd6a8af47 ===
=== applying patch ./v19-0001-CREATE-ALTER-TABLE-PARALLEL-DML.patch
[...]
patching file src/backend/commands/tablecmds.c
Hunk #1 FAILED at 40.
Hunk #2 succeeded at 624 (offset 21 lines).
Hunk #3 succeeded at 670 (offset 21 lines).
Hunk #4 succeeded at 947 (offset 19 lines).
Hunk #5 succeeded at 991 (offset 19 lines).
Hunk #6 succeeded at 4256 (offset 40 lines).
Hunk #7 succeeded at 4807 (offset 40 lines).
Hunk #8 succeeded at 5217 (offset 40 lines).
Hunk #9 succeeded at 6193 (offset 42 lines).
Hunk #10 succeeded at 19278 (offset 465 lines).
1 out of 10 hunks FAILED -- saving rejects to file src/backend/commands/tablecmds.c.rej
[...]
patching file src/bin/pg_dump/pg_dump.c
Hunk #1 FAILED at 6253.
Hunk #2 FAILED at 6358.
Hunk #3 FAILED at 6450.
Hunk #4 FAILED at 6503.
Hunk #5 FAILED at 6556.
Hunk #6 FAILED at 6609.
Hunk #7 FAILED at 6660.
Hunk #8 FAILED at 6708.
Hunk #9 FAILED at 6756.
Hunk #10 FAILED at 6803.
Hunk #11 FAILED at 6872.
Hunk #12 FAILED at 6927.
Hunk #13 succeeded at 15524 (offset -1031 lines).
12 out of 13 hunks FAILED -- saving rejects to file src/bin/pg_dump/pg_dump.c.rej
[...]
patching file src/bin/psql/describe.c
Hunk #1 succeeded at 1479 (offset -177 lines).
Hunk #2 succeeded at 1493 (offset -177 lines).
Hunk #3 succeeded at 1631 (offset -241 lines).
Hunk #4 succeeded at 3374 (offset -277 lines).
Hunk #5 succeeded at 3731 (offset -310 lines).
Hunk #6 FAILED at 4109.
1 out of 6 hunks FAILED -- saving rejects to file src/bin/psql/describe.c.rej
Could you send a rebased version? In the meantime I will switch the entry to
Waiting on Author.
On Thu, Jul 28, 2022 at 8:43 AM Julien Rouhaud <rjuju123@gmail.com> wrote:
Could you send a rebased version? In the meantime I will switch the entry to
Waiting on Author.
By request in [1] I'm marking this Returned with Feedback for now.
Whenever you're ready, you can resurrect the patch entry by visiting
https://commitfest.postgresql.org/38/3143/
and changing the status to "Needs Review", and then changing the
status again to "Move to next CF". (Don't forget the second step;
hopefully we will have streamlined this in the near future!)
Thanks,
--Jacob
[1]: /messages/by-id/OS0PR01MB571696D623F35A09AB51903A94969@OS0PR01MB5716.jpnprd01.prod.outlook.com