utility commands benefiting from parallel plan

Started by Haribabu Kommi · almost 9 years ago · 13 messages

#1 Haribabu Kommi
kommi.haribabu@gmail.com
1 attachment(s)

Hi Hackers,

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.
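
For illustration, with the patch applied (and the parallel cost settings
lowered to encourage a parallel plan, as in the attached regression test),
the underlying SELECT of a statement like the following can use a parallel
plan; the table and column names here are only an example:

    explain (costs off)
    create table length_summary as
        select length(name), count(*) from big_table group by length(name);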

comments?

Regards,
Hari Babu
Fujitsu Australia

Attachments:

utlity_write_with_query_parallel_1.patch (application/octet-stream)
diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index e8624fc..da65db6 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -229,15 +229,6 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
 
     <listitem>
       <para>
-        A prepared statement is executed using a <literal>CREATE TABLE .. AS
-        EXECUTE ..</literal> statement.  This construct converts what otherwise
-        would have been a read-only operation into a read-write operation,
-        making it ineligible for parallel query.
-      </para>
-    </listitem>
-
-    <listitem>
-      <para>
         The transaction isolation level is serializable.  This situation
         does not normally arise, because parallel query plans are not
         generated when the transaction isolation level is serializable.
@@ -335,7 +326,30 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
   </para>
 
  </sect2>
+ 
+ <sect2 id="writes-using-parallel-plans">
+  <title>Write Operations Using Parallel Plans</title>
 
+   <para>
+    Currently, only a limited number of write operations can
+    benefit from parallelism.  These are:
+   </para>
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Utility statements that create a table or a materialized view,
+     such as <literal>CREATE TABLE .. AS</literal> or <literal>CREATE
+     MATERIALIZED VIEW .. AS</literal>.
+     These write operations are not performed concurrently by the parallel
+     workers, but the underlying query used by these operations is eligible
+     for parallel plans.
+    </para>
+   </listitem>
+  </itemizedlist>
+  
+ </sect2>
+ 
  <sect2 id="parallel-plan-tips">
   <title>Parallel Plan Tips</title>
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index af25836..581a05c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2568,15 +2568,13 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
 					CommandId cid, int options)
 {
 	/*
-	 * For now, parallel operations are required to be strictly read-only.
-	 * Unlike heap_update() and heap_delete(), an insert should never create a
-	 * combo CID, so it might be possible to relax this restriction, but not
-	 * without more thought and testing.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * a parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot insert tuples during a parallel operation")));
+				 errmsg("cannot insert tuples in a parallel worker")));
 
 	if (relation->rd_rel->relhasoids)
 	{
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 646a884..b6cc6a4 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -326,7 +326,7 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
 		Assert(query->commandType == CMD_SELECT);
 
 		/* plan the query --- note we disallow parallelism */
-		plan = pg_plan_query(query, 0, params);
+		plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params);
 
 		/*
 		 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c9e0a3e..25e6e9e 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -397,7 +397,7 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query)));
 		Assert(list_length(rewritten) == 1);
 		ExplainOneQuery(castNode(Query, linitial(rewritten)),
-						0, ctas->into, es,
+						CURSOR_OPT_PARALLEL_OK, ctas->into, es,
 						queryString, params);
 	}
 	else if (IsA(utilityStmt, DeclareCursorStmt))
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index a18c917..0766e8d 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -384,7 +384,7 @@ refresh_matview_datafill(DestReceiver *dest, Query *query,
 	CHECK_FOR_INTERRUPTS();
 
 	/* Plan the query which will generate data for the refresh. */
-	plan = pg_plan_query(query, 0, NULL);
+	plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);
 
 	/*
 	 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 3f76a40..9567e9a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1587,11 +1587,9 @@ ExecutePlan(EState *estate,
 
 	/*
 	 * If a tuple count was supplied, we must force the plan to run without
-	 * parallelism, because we might exit early.  Also disable parallelism
-	 * when writing into a relation, because no database changes are allowed
-	 * in parallel mode.
+	 * parallelism, because we might exit early.
 	 */
-	if (numberTuples || dest->mydest == DestIntoRel)
+	if (numberTuples)
 		use_parallel_mode = false;
 
 	if (use_parallel_mode)
diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out
new file mode 100644
index 0000000..e549cc2
--- /dev/null
+++ b/src/test/regress/expected/write_parallel.out
@@ -0,0 +1,80 @@
+--
+-- PARALLEL
+--
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+rollback;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index edeb2d6..aee1ed8 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql_crosstab amutils
 
 # run by itself so it can run parallel workers
 test: select_parallel
+test: write_parallel
 
 # no relation related tests can be put in this group
 test: publication subscription
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 27a46d7..26799c9 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -128,6 +128,7 @@ test: tsrf
 test: rules
 test: psql_crosstab
 test: select_parallel
+test: write_parallel
 test: publication
 test: subscription
 test: amutils
diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql
new file mode 100644
index 0000000..00f9156
--- /dev/null
+++ b/src/test/regress/sql/write_parallel.sql
@@ -0,0 +1,43 @@
+--
+-- PARALLEL
+--
+
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+
+rollback;
#2 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Haribabu Kommi (#1)
Re: utility commands benefiting from parallel plan

On Fri, Feb 24, 2017 at 11:43 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.

The idea looks good to me.

Since we are already modifying heap_prepare_insert, I am thinking that
we could also enable queries like "insert into .. select from .."
with a minor modification?
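
For example, the kind of statement this would cover (a sketch; the table
names are only illustrative):

    insert into target_table
        select some_col, count(*) from source_table group by some_col;

Only the underlying SELECT would run under the parallel plan; the insert
itself would still be performed by the leader.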

- * For now, parallel operations are required to be strictly read-only.
- * Unlike heap_update() and heap_delete(), an insert should never create a
- * combo CID, so it might be possible to relax this restriction, but not
- * without more thought and testing.
+ * For now, parallel operations are required to be strictly read-only in
+ * a parallel worker.
This statement is still not true: we cannot do heap_update in the
leader even though the workers are doing the read-only part (an update
with a select).  We can change the comment so that it is more
specific to insert, I think.
-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


#3 Robert Haas
robertmhaas@gmail.com
In reply to: Haribabu Kommi (#1)
Re: utility commands benefiting from parallel plan

On Fri, Feb 24, 2017 at 11:43 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.

comments?

I think a lot more work than this will be needed. See:

/messages/by-id/CA+TgmoZC5ft_t9uQWSO5_1vU6H8oVyD=zyuLvRnJqTN==fvnhg@mail.gmail.com

...and the discussion which followed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#4 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Dilip Kumar (#2)
2 attachment(s)
Re: utility commands benefiting from parallel plan

On Sat, Feb 25, 2017 at 2:45 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Feb 24, 2017 at 11:43 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.

The idea looks good to me.

Since we are already modifying heap_prepare_insert, I am thinking that
we could also enable queries like "insert into .. select from .."
with a minor modification?

Thanks for the review.

I am finding it not so easy to let write operations like INSERT,
DELETE and UPDATE commands use the benefits of parallelism for the
queries that are underneath.

Currently, parallelism is enabled only for tables that don't have
any triggers or indexes with expressions. This limitation can be
removed after thorough testing.
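
For example, under the WIP patch an insert into a table like the following
falls back to a non-parallel plan because of the expression index (the
names are only illustrative):

    create table t (i int);
    create index on t (abs(i));   -- expression index, so parallelism is skipped
    insert into t select g from generate_series(1, 1000) g;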

To support this, I removed those errors from the heap functions and
from the functions that assign a new transaction ID and update the
command ID of the current snapshot (required for the cases where a
single command validates the input).

Attached is a WIP patch to support DML write operations.
There is no functional change in the base utility write support patch.

Regards,
Hari Babu
Fujitsu Australia

Attachments:

0002_dml_write_using_parallel_1.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 581a05c..551a689 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3017,14 +3017,13 @@ heap_delete(Relation relation, ItemPointer tid,
 	Assert(ItemPointerIsValid(tid));
 
 	/*
-	 * Forbid this during a parallel operation, lest it allocate a combocid.
-	 * Other workers might need that combocid for visibility checks, and we
-	 * have no provision for broadcasting it to them.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * a parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot delete tuples during a parallel operation")));
+				 errmsg("cannot delete tuples in a parallel worker")));
 
 	block = ItemPointerGetBlockNumber(tid);
 	buffer = ReadBuffer(relation, block);
@@ -3489,14 +3488,13 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemPointerIsValid(otid));
 
 	/*
-	 * Forbid this during a parallel operation, lest it allocate a combocid.
-	 * Other workers might need that combocid for visibility checks, and we
-	 * have no provision for broadcasting it to them.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * a parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot update tuples during a parallel operation")));
+				 errmsg("cannot update tuples in a parallel worker")));
 
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
diff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c
index 42fc351..8557e29 100644
--- a/src/backend/access/transam/varsup.c
+++ b/src/backend/access/transam/varsup.c
@@ -15,6 +15,7 @@
 
 #include "access/clog.h"
 #include "access/commit_ts.h"
+#include "access/parallel.h"
 #include "access/subtrans.h"
 #include "access/transam.h"
 #include "access/xact.h"
@@ -53,8 +54,8 @@ GetNewTransactionId(bool isSubXact)
 	 * Workers synchronize transaction state at the beginning of each parallel
 	 * operation, so we can't account for new XIDs after that point.
 	 */
-	if (IsInParallelMode())
-		elog(ERROR, "cannot assign TransactionIds during a parallel operation");
+	if (IsParallelWorker())
+		elog(ERROR, "cannot assign TransactionIds in a parallel worker");
 
 	/*
 	 * During bootstrap initialization, we return the special bootstrap
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 82f9a3c..611ddbc 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -499,8 +499,8 @@ AssignTransactionId(TransactionState s)
 	 * Workers synchronize transaction state at the beginning of each parallel
 	 * operation, so we can't account for new XIDs at this point.
 	 */
-	if (IsInParallelMode() || IsParallelWorker())
-		elog(ERROR, "cannot assign XIDs during a parallel operation");
+	if (IsParallelWorker())
+		elog(ERROR, "cannot assign XIDs in a parallel worker");
 
 	/*
 	 * Ensure parent(s) have XIDs, so that a child always has an XID later
@@ -933,8 +933,8 @@ CommandCounterIncrement(void)
 		 * parallel operation, so we can't account for new commands after that
 		 * point.
 		 */
-		if (IsInParallelMode() || IsParallelWorker())
-			elog(ERROR, "cannot start commands during a parallel operation");
+		if (IsParallelWorker())
+			elog(ERROR, "cannot start commands in a parallel worker");
 
 		currentCommandId += 1;
 		if (currentCommandId == InvalidCommandId)
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 9567e9a..e0470bd 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -78,6 +78,7 @@ static void InitPlan(QueryDesc *queryDesc, int eflags);
 static void CheckValidRowMarkRel(Relation rel, RowMarkType markType);
 static void ExecPostprocessPlan(EState *estate);
 static void ExecEndPlan(PlanState *planstate, EState *estate);
+static bool is_write_parallel_safe(EState *estate);
 static void ExecutePlan(EState *estate, PlanState *planstate,
 			bool use_parallel_mode,
 			CmdType operation,
@@ -1550,6 +1551,65 @@ ExecEndPlan(PlanState *planstate, EState *estate)
 	}
 }
 
+/*
+ * is_write_parallel_safe
+ *		Detect whether any write operation in the given estate involves
+ *		only parallel-safe elements.
+ *
+ *		This must be identified before entering parallel mode; otherwise
+ *		the command may fail later if it involves any parallel-unsafe
+ *		operation.
+ */
+static bool
+is_write_parallel_safe(EState *estate)
+{
+	/*
+	 * Check whether the command is a write operation before
+	 * proceeding to verify that it contains only parallel-safe
+	 * functions.
+	 */
+	if (estate->es_plannedstmt->commandType == CMD_INSERT ||
+			estate->es_plannedstmt->commandType == CMD_DELETE ||
+			estate->es_plannedstmt->commandType == CMD_UPDATE)
+	{
+		int i;
+
+		/*
+		 * Loop through all the result relations and identify whether
+		 * any relation in the query contains triggers, check constraints
+		 * or indexes with expressions.
+		 *
+		 * FIXME: Not sure whether these are the only areas where unsafe
+		 * functions can be present and lead to a failure.
+		 *
+		 * FIXME: Need to identify the expressions that contain any unsafe
+		 * functions, instead of rejecting blindly.
+		 */
+		for (i = 0; i < estate->es_num_result_relations; i++)
+		{
+			int j;
+
+			ResultRelInfo *current_result_relation
+									= estate->es_result_relations + i;
+
+			if (current_result_relation->ri_TrigDesc != NULL)
+				return false;
+
+			for (j = 0; j < current_result_relation->ri_NumIndices; j++)
+			{
+				IndexInfo *idx_info
+						= *(current_result_relation->ri_IndexRelationInfo + j);
+
+				if (idx_info->ii_Expressions)
+					return false;
+			}
+		}
+	}
+
+	/* No parallel-unsafe elements found, or not a write operation */
+	return true;
+}
+
 /* ----------------------------------------------------------------
  *		ExecutePlan
  *
@@ -1587,11 +1644,14 @@ ExecutePlan(EState *estate,
 
 	/*
 	 * If a tuple count was supplied, we must force the plan to run without
-	 * parallelism, because we might exit early.
+	 * parallelism, because we might exit early.  Also check whether any
+	 * write operation involved contains parallel-unsafe elements before
+	 * allowing it to run in parallel mode.
 	 */
-	if (numberTuples)
+	if (numberTuples || (use_parallel_mode && !is_write_parallel_safe(estate)))
 		use_parallel_mode = false;
 
+
 	if (use_parallel_mode)
 		EnterParallelMode();
 
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index ca0ae78..29f594d 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -242,7 +242,7 @@ standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)
 	if ((cursorOptions & CURSOR_OPT_PARALLEL_OK) != 0 &&
 		IsUnderPostmaster &&
 		dynamic_shared_memory_type != DSM_IMPL_NONE &&
-		parse->commandType == CMD_SELECT &&
+		parse->commandType != CMD_UTILITY &&
 		!parse->hasModifyingCTE &&
 		max_parallel_workers_per_gather > 0 &&
 		!IsParallelWorker() &&
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index b19380e..d6b6156 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -1129,6 +1129,25 @@ max_parallel_hazard_walker(Node *node, max_parallel_hazard_context *context)
 	if (node == NULL)
 		return false;
 
+	/* Loop through all the ListCell items to find out the parallel safety */
+	if (IsA(node, List))
+	{
+		ListCell   *temp;
+
+		foreach(temp, (List *) node)
+		{
+			if (max_parallel_hazard_walker((Node *) lfirst(temp), context))
+				return true;
+		}
+	}
+
+	/* check for hazardous functions in target expression */
+	if (IsA(node, TargetEntry))
+	{
+		if (max_parallel_hazard_walker((Node *)((TargetEntry *)node)->expr, context))
+			return true;
+	}
+
 	/* Check for hazardous functions in node itself */
 	if (check_functions_in_node(node, max_parallel_hazard_checker,
 								context))
diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c
index f232c84..66403af 100644
--- a/src/backend/utils/time/snapmgr.c
+++ b/src/backend/utils/time/snapmgr.c
@@ -48,6 +48,7 @@
 #include <sys/stat.h>
 #include <unistd.h>
 
+#include "access/parallel.h"
 #include "access/transam.h"
 #include "access/xact.h"
 #include "access/xlog.h"
@@ -792,8 +793,8 @@ UpdateActiveSnapshotCommandId(void)
 	 */
 	save_curcid = ActiveSnapshot->as_snap->curcid;
 	curcid = GetCurrentCommandId(false);
-	if (IsInParallelMode() && save_curcid != curcid)
-		elog(ERROR, "cannot modify commandid in active snapshot during a parallel operation");
+	if (IsParallelWorker() && save_curcid != curcid)
+		elog(ERROR, "cannot modify commandid in active snapshot in a parallel worker");
 	ActiveSnapshot->as_snap->curcid = curcid;
 }
 
diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out
index e549cc2..a0f8c7a 100644
--- a/src/test/regress/expected/write_parallel.out
+++ b/src/test/regress/expected/write_parallel.out
@@ -10,7 +10,7 @@ set parallel_tuple_cost=0;
 set min_parallel_table_scan_size=0;
 set max_parallel_workers_per_gather=4;
 --
--- Test write operations that have an underlying query that is eligible
+-- Test utility write operations that have an underlying query that is eligible
 -- for parallel plans
 --
 explain (costs off) create table parallel_write as
@@ -77,4 +77,179 @@ explain (costs off) create table parallel_write as execute prep_stmt;
 
 create table parallel_write as execute prep_stmt;
 drop table parallel_write;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+create table parallel_test(a int);
+explain (costs off) insert into parallel_test (select length(stringu1) from tenk1 group by length(stringu1));
+                          QUERY PLAN                           
+---------------------------------------------------------------
+ Insert on parallel_test
+   ->  Finalize HashAggregate
+         Group Key: (length((tenk1.stringu1)::text))
+         ->  Gather
+               Workers Planned: 4
+               ->  Partial HashAggregate
+                     Group Key: length((tenk1.stringu1)::text)
+                     ->  Parallel Seq Scan on tenk1
+(8 rows)
+
+insert into parallel_test (select length(stringu1) from tenk1 group by length(stringu1));
+set enable_indexscan to off;
+set enable_bitmapscan to off;
+explain (costs off) update parallel_test set a = a from tenk1 where hundred > 100;
+                      QUERY PLAN                      
+------------------------------------------------------
+ Update on parallel_test
+   ->  Nested Loop
+         ->  Gather
+               Workers Planned: 4
+               ->  Parallel Seq Scan on tenk1
+                     Filter: (hundred > 100)
+         ->  Gather
+               Workers Planned: 3
+               ->  Parallel Seq Scan on parallel_test
+(9 rows)
+
+update parallel_test set a = a from tenk1 where hundred > 100;
+explain (costs off) delete from tenk1 where hundred > 100;
+               QUERY PLAN               
+----------------------------------------
+ Delete on tenk1
+   ->  Gather
+         Workers Planned: 4
+         ->  Parallel Seq Scan on tenk1
+               Filter: (hundred > 100)
+(5 rows)
+
+delete from tenk1 where hundred > 100;
+reset enable_indexscan;
+reset enable_bitmapscan;
+--
+-- Check the insert operation when the domain contains functions
+-- that are parallel restricted.
+--
+create function sql_is_distinct_from(anyelement, anyelement)
+returns boolean language sql
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull int
+  check (sql_is_distinct_from(value, null));
+create table dom_table (x inotnull);
+explain (costs off) insert into dom_table (select length(stringu1) from tenk1 group by length(stringu1));
+                             QUERY PLAN                              
+---------------------------------------------------------------------
+ Insert on dom_table
+   ->  Subquery Scan on "*SELECT*"
+         ->  Finalize HashAggregate
+               Group Key: (length((tenk1.stringu1)::text))
+               ->  Gather
+                     Workers Planned: 4
+                     ->  Partial HashAggregate
+                           Group Key: length((tenk1.stringu1)::text)
+                           ->  Parallel Seq Scan on tenk1
+(9 rows)
+
+insert into dom_table (select length(stringu1) from tenk1 group by length(stringu1));
+drop table dom_table;
+drop domain inotnull;
+drop function sql_is_distinct_from(anyelement, anyelement);
+--
+-- Check the insert operation when the index contains expressions
+-- that are parallel unsafe.
+-- (No parallel mode - Otherwise fails with "ERROR:  cannot execute ANALYZE during a parallel operation")
+--
+CREATE TABLE vac (i int);
+CREATE FUNCTION test_analyze() RETURNS VOID VOLATILE LANGUAGE SQL
+	AS 'ANALYZE pg_am';
+CREATE FUNCTION wrap_test_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL
+	AS 'SELECT $1 FROM test_analyze()';
+CREATE INDEX ON vac(wrap_test_analyze(i));
+explain (costs off) INSERT INTO vac select length(stringu1) from tenk1 group by length(stringu1);
+                          QUERY PLAN                           
+---------------------------------------------------------------
+ Insert on vac
+   ->  Finalize HashAggregate
+         Group Key: (length((tenk1.stringu1)::text))
+         ->  Gather
+               Workers Planned: 4
+               ->  Partial HashAggregate
+                     Group Key: length((tenk1.stringu1)::text)
+                     ->  Parallel Seq Scan on tenk1
+(8 rows)
+
+INSERT INTO vac select length(stringu1) from tenk1 group by length(stringu1);
+drop table vac;
+drop function wrap_test_analyze(c INT);
+drop function test_analyze();
+--
+-- Check the insert operation when the table contains trigger functions
+-- (No parallel mode - Otherwise fails with "ERROR:  cannot execute nextval() during a parallel operation")
+--
+CREATE TABLE y (a integer);
+INSERT INTO y SELECT generate_series(1, 10);
+CREATE SEQUENCE y_seq;
+CREATE FUNCTION y_trigger() RETURNS trigger AS $$
+declare
+count_val integer;
+begin
+  select nextval('y_seq') into count_val;
+  raise notice 'count: %', count_val;
+  raise notice 'y_trigger: a = %', new.a;
+  return new;
+end;
+$$ LANGUAGE plpgsql;
+CREATE TRIGGER y_trig BEFORE INSERT ON y FOR EACH ROW
+    EXECUTE PROCEDURE y_trigger();
+explain (costs off) insert into y (select length(stringu1) from tenk1 group by length(stringu1));
+                          QUERY PLAN                           
+---------------------------------------------------------------
+ Insert on y
+   ->  Finalize HashAggregate
+         Group Key: (length((tenk1.stringu1)::text))
+         ->  Gather
+               Workers Planned: 4
+               ->  Partial HashAggregate
+                     Group Key: length((tenk1.stringu1)::text)
+                     ->  Parallel Seq Scan on tenk1
+(8 rows)
+
+insert into y (select length(stringu1) from tenk1 group by length(stringu1));
+NOTICE:  count: 1
+NOTICE:  y_trigger: a = 6
+drop trigger y_trig on y;
+drop function y_trigger();
+drop sequence y_seq;
+--
+-- Check the insert operation using CTE
+-- (No parallel mode - Otherwise fails in writing in worker)
+--
+explain (costs off) WITH t AS (
+    INSERT INTO y
+        (select length(stringu1) from tenk1 group by length(stringu1))
+    RETURNING *
+)
+SELECT * FROM t;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ CTE Scan on t
+   CTE t
+     ->  Insert on y
+           ->  HashAggregate
+                 Group Key: length((tenk1.stringu1)::text)
+                 ->  Seq Scan on tenk1
+(6 rows)
+
+WITH t AS (
+    INSERT INTO y
+        (select length(stringu1) from tenk1 group by length(stringu1))
+    RETURNING *
+)
+SELECT * FROM t;
+ a 
+---
+ 6
+(1 row)
+
+drop table y;
 rollback;
diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql
index 00f9156..9e51a7f 100644
--- a/src/test/regress/sql/write_parallel.sql
+++ b/src/test/regress/sql/write_parallel.sql
@@ -13,7 +13,7 @@ set min_parallel_table_scan_size=0;
 set max_parallel_workers_per_gather=4;
 
 --
--- Test write operations that have an underlying query that is eligible
+-- Test utility write operations that have an underlying query that is eligible
 -- for parallel plans
 --
 explain (costs off) create table parallel_write as
@@ -40,4 +40,102 @@ explain (costs off) create table parallel_write as execute prep_stmt;
 create table parallel_write as execute prep_stmt;
 drop table parallel_write;
 
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+create table parallel_test(a int);
+explain (costs off) insert into parallel_test (select length(stringu1) from tenk1 group by length(stringu1));
+insert into parallel_test (select length(stringu1) from tenk1 group by length(stringu1));
+
+set enable_indexscan to off;
+set enable_bitmapscan to off;
+
+explain (costs off) update parallel_test set a = a from tenk1 where hundred > 100;
+update parallel_test set a = a from tenk1 where hundred > 100;
+
+explain (costs off) delete from tenk1 where hundred > 100;
+delete from tenk1 where hundred > 100;
+
+reset enable_indexscan;
+reset enable_bitmapscan;
+
+--
+-- Check the insert operation when the domain contains functions
+-- that are parallel restricted.
+--
+create function sql_is_distinct_from(anyelement, anyelement)
+returns boolean language sql
+as 'select $1 is distinct from $2 limit 1';
+create domain inotnull int
+  check (sql_is_distinct_from(value, null));
+create table dom_table (x inotnull);
+explain (costs off) insert into dom_table (select length(stringu1) from tenk1 group by length(stringu1));
+insert into dom_table (select length(stringu1) from tenk1 group by length(stringu1));
+drop table dom_table;
+drop domain inotnull;
+drop function sql_is_distinct_from(anyelement, anyelement);
+
+--
+-- Check the insert operation when the index contains expressions
+-- that are parallel unsafe.
+-- (No parallel mode - Otherwise fails with "ERROR:  cannot execute ANALYZE during a parallel operation")
+--
+CREATE TABLE vac (i int);
+CREATE FUNCTION test_analyze() RETURNS VOID VOLATILE LANGUAGE SQL
+	AS 'ANALYZE pg_am';
+CREATE FUNCTION wrap_test_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL
+	AS 'SELECT $1 FROM test_analyze()';
+CREATE INDEX ON vac(wrap_test_analyze(i));
+explain (costs off) INSERT INTO vac select length(stringu1) from tenk1 group by length(stringu1);
+INSERT INTO vac select length(stringu1) from tenk1 group by length(stringu1);
+drop table vac;
+drop function wrap_test_analyze(c INT);
+drop function test_analyze();
+
+--
+-- Check the insert operation when the table contains trigger functions
+-- (No parallel mode - Otherwise fails with "ERROR:  cannot execute nextval() during a parallel operation")
+--
+CREATE TABLE y (a integer);
+INSERT INTO y SELECT generate_series(1, 10);
+CREATE SEQUENCE y_seq;
+CREATE FUNCTION y_trigger() RETURNS trigger AS $$
+declare
+count_val integer;
+begin
+  select nextval('y_seq') into count_val;
+  raise notice 'count: %', count_val;
+  raise notice 'y_trigger: a = %', new.a;
+  return new;
+end;
+$$ LANGUAGE plpgsql;
+CREATE TRIGGER y_trig BEFORE INSERT ON y FOR EACH ROW
+    EXECUTE PROCEDURE y_trigger();
+explain (costs off) insert into y (select length(stringu1) from tenk1 group by length(stringu1));
+insert into y (select length(stringu1) from tenk1 group by length(stringu1));
+drop trigger y_trig on y;
+drop function y_trigger();
+drop sequence y_seq;
+
+--
+-- Check the insert operation using CTE
+-- (No parallel mode - Otherwise fails in writing in worker)
+--
+explain (costs off) WITH t AS (
+    INSERT INTO y
+        (select length(stringu1) from tenk1 group by length(stringu1))
+    RETURNING *
+)
+SELECT * FROM t;
+
+WITH t AS (
+    INSERT INTO y
+        (select length(stringu1) from tenk1 group by length(stringu1))
+    RETURNING *
+)
+SELECT * FROM t;
+
+drop table y;
+
 rollback;
0001_utility_write_using_parallel_1.patch (application/octet-stream)
diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index e8624fc..da65db6 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -229,15 +229,6 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
 
     <listitem>
       <para>
-        A prepared statement is executed using a <literal>CREATE TABLE .. AS
-        EXECUTE ..</literal> statement.  This construct converts what otherwise
-        would have been a read-only operation into a read-write operation,
-        making it ineligible for parallel query.
-      </para>
-    </listitem>
-
-    <listitem>
-      <para>
         The transaction isolation level is serializable.  This situation
         does not normally arise, because parallel query plans are not
         generated when the transaction isolation level is serializable.
@@ -335,7 +326,30 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
   </para>
 
  </sect2>
+ 
+ <sect2 id="writes-using-parallel-plans">
+  <title>Write Operations Using Parallel Plans</title>
 
+   <para>
+    Currently, only a limited number of write operations can
+    benefit from parallelism.  These are:
+   </para>
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Utility statements that create a table or a materialized view,
+     such as <literal>CREATE TABLE .. AS</literal> or <literal>CREATE
+     MATERIALIZED VIEW .. AS</literal>.
+     These write operations are not performed concurrently by the parallel
+     workers, but the underlying query used by these operations is eligible
+     for parallel plans.
+    </para>
+   </listitem>
+  </itemizedlist>
+  
+ </sect2>
+ 
  <sect2 id="parallel-plan-tips">
   <title>Parallel Plan Tips</title>
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index af25836..581a05c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2568,15 +2568,13 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
 					CommandId cid, int options)
 {
 	/*
-	 * For now, parallel operations are required to be strictly read-only.
-	 * Unlike heap_update() and heap_delete(), an insert should never create a
-	 * combo CID, so it might be possible to relax this restriction, but not
-	 * without more thought and testing.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * a parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot insert tuples during a parallel operation")));
+				 errmsg("cannot insert tuples in a parallel worker")));
 
 	if (relation->rd_rel->relhasoids)
 	{
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 646a884..b6cc6a4 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -326,7 +326,7 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
 		Assert(query->commandType == CMD_SELECT);
 
 		/* plan the query --- note we disallow parallelism */
-		plan = pg_plan_query(query, 0, params);
+		plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params);
 
 		/*
 		 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c9e0a3e..b17e710 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -388,8 +388,6 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		 * We have to rewrite the contained SELECT and then pass it back to
 		 * ExplainOneQuery.  It's probably not really necessary to copy the
 		 * contained parsetree another time, but let's be safe.
-		 *
-		 * Like ExecCreateTableAs, disallow parallelism in the plan.
 		 */
 		CreateTableAsStmt *ctas = (CreateTableAsStmt *) utilityStmt;
 		List	   *rewritten;
@@ -397,7 +395,7 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query)));
 		Assert(list_length(rewritten) == 1);
 		ExplainOneQuery(castNode(Query, linitial(rewritten)),
-						0, ctas->into, es,
+						CURSOR_OPT_PARALLEL_OK, ctas->into, es,
 						queryString, params);
 	}
 	else if (IsA(utilityStmt, DeclareCursorStmt))
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index a18c917..0766e8d 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -384,7 +384,7 @@ refresh_matview_datafill(DestReceiver *dest, Query *query,
 	CHECK_FOR_INTERRUPTS();
 
 	/* Plan the query which will generate data for the refresh. */
-	plan = pg_plan_query(query, 0, NULL);
+	plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);
 
 	/*
 	 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 3f76a40..9567e9a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1587,11 +1587,9 @@ ExecutePlan(EState *estate,
 
 	/*
 	 * If a tuple count was supplied, we must force the plan to run without
-	 * parallelism, because we might exit early.  Also disable parallelism
-	 * when writing into a relation, because no database changes are allowed
-	 * in parallel mode.
+	 * parallelism, because we might exit early.
 	 */
-	if (numberTuples || dest->mydest == DestIntoRel)
+	if (numberTuples)
 		use_parallel_mode = false;
 
 	if (use_parallel_mode)
diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out
new file mode 100644
index 0000000..e549cc2
--- /dev/null
+++ b/src/test/regress/expected/write_parallel.out
@@ -0,0 +1,80 @@
+--
+-- PARALLEL
+--
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+rollback;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index edeb2d6..aee1ed8 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql_crosstab amutils
 
 # run by itself so it can run parallel workers
 test: select_parallel
+test: write_parallel
 
 # no relation related tests can be put in this group
 test: publication subscription
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 27a46d7..26799c9 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -128,6 +128,7 @@ test: tsrf
 test: rules
 test: psql_crosstab
 test: select_parallel
+test: write_parallel
 test: publication
 test: subscription
 test: amutils
diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql
new file mode 100644
index 0000000..00f9156
--- /dev/null
+++ b/src/test/regress/sql/write_parallel.sql
@@ -0,0 +1,43 @@
+--
+-- PARALLEL
+--
+
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+
+rollback;
#5 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Robert Haas (#3)
Re: utility commands benefiting from parallel plan

On Sat, Feb 25, 2017 at 3:21 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Feb 24, 2017 at 11:43 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.

comments?

I think a lot more work than this will be needed. See:

/messages/by-id/CA+TgmoZC5ft_t9uQWSO5_1vU6H8oVyD=zyuLvRnJqTN==fvnhg@mail.gmail.com

...and the discussion which followed.

Thanks for the link.
Yes, it needs more work to support parallelism even for
queries that are involved in write operations like INSERT,
DELETE and UPDATE commands.

Regards,
Hari Babu
Fujitsu Australia

#6 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Haribabu Kommi (#5)
1 attachment(s)
Re: utility commands benefiting from parallel plan

On Tue, Feb 28, 2017 at 12:48 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Sat, Feb 25, 2017 at 3:21 AM, Robert Haas <robertmhaas@gmail.com>
wrote:

On Fri, Feb 24, 2017 at 11:43 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Attached is an implementation patch that allows
utility statements that have queries underneath, such as
CREATE TABLE AS, CREATE MATERIALIZED VIEW
and REFRESH commands, to benefit from parallel plans.

These write operations are not performed concurrently by the
parallel workers, but the underlying query used by
these operations is eligible for parallel plans.

Currently the write operations are implemented for the
tuple dest types DestIntoRel and DestTransientRel.

Currently I am evaluating other write operations that can
benefit from parallelism without side effects when enabling them.

comments?

I think a lot more work than this will be needed. See:

/messages/by-id/CA+TgmoZC5ft_t9uQWSO5_1vU6H8oVyD=zyuLvRnJqTN==fvnhg@mail.gmail.com

...and the discussion which followed.

Thanks for the link.
Yes, it needs more work to support parallelism even for
queries that are involved in write operations like INSERT,
DELETE and UPDATE commands.

This patch is marked as "returned with feedback" in the ongoing
commitfest.

The proposed DML write operations patch has a good number of
limitations, such as triggers, but the utility write operations patch
is in good shape, in my view, to start supporting write operations.
This is useful for materialized views while refreshing their data.
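
For instance (the names are only an example), the defining query of a view
like the following can use parallel workers when it is created or refreshed:

    create materialized view sales_summary as
        select region, sum(amount) from sales group by region;
    refresh materialized view sales_summary;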

Do you find any problems or missing pieces in supporting parallel plans
for utility commands with the attached updated patch? Or is the idea
to support all write operations at once?

Regards,
Hari Babu
Fujitsu Australia

Attachments:

0001_utility_write_using_parallel_2.patch (application/octet-stream)
diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index 2ea5c34..067e69a 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -229,15 +229,6 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
 
     <listitem>
       <para>
-        A prepared statement is executed using a <literal>CREATE TABLE .. AS
-        EXECUTE ..</literal> statement.  This construct converts what otherwise
-        would have been a read-only operation into a read-write operation,
-        making it ineligible for parallel query.
-      </para>
-    </listitem>
-
-    <listitem>
-      <para>
         The transaction isolation level is serializable.  This situation
         does not normally arise, because parallel query plans are not
         generated when the transaction isolation level is serializable.
@@ -376,7 +367,30 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
   </para>
 
  </sect2>
+ 
+ <sect2 id="writes-using-parallel-plans">
+  <title>Write Operations Using Parallel Plans</title>
 
+   <para>
+    Currently, only a limited number of write operations can
+    benefit from parallelism.  These are:
+   </para>
+
+  <itemizedlist>
+   <listitem>
+    <para>
+     Utility statements that create a table or a materialized view,
+     such as <literal>CREATE TABLE .. AS</literal> or <literal>CREATE
+     MATERIALIZED VIEW .. AS</literal>.
+     These write operations are not performed concurrently by the parallel
+     workers, but the underlying query used by these operations is eligible
+     for parallel plans.
+    </para>
+   </listitem>
+  </itemizedlist>
+  
+ </sect2>
+ 
  <sect2 id="parallel-plan-tips">
   <title>Parallel Plan Tips</title>
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index bffc971..720712a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2584,15 +2584,13 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
 					CommandId cid, int options)
 {
 	/*
-	 * For now, parallel operations are required to be strictly read-only.
-	 * Unlike heap_update() and heap_delete(), an insert should never create a
-	 * combo CID, so it might be possible to relax this restriction, but not
-	 * without more thought and testing.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * a parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot insert tuples during a parallel operation")));
+				 errmsg("cannot insert tuples in a parallel worker")));
 
 	if (relation->rd_rel->relhasoids)
 	{
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 646a884..cecb476 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -325,8 +325,8 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
 		query = castNode(Query, linitial(rewritten));
 		Assert(query->commandType == CMD_SELECT);
 
-		/* plan the query --- note we disallow parallelism */
-		plan = pg_plan_query(query, 0, params);
+		/* plan the query */
+		plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params);
 
 		/*
 		 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c9b55ea..797efe1 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -394,8 +394,6 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		 * We have to rewrite the contained SELECT and then pass it back to
 		 * ExplainOneQuery.  It's probably not really necessary to copy the
 		 * contained parsetree another time, but let's be safe.
-		 *
-		 * Like ExecCreateTableAs, disallow parallelism in the plan.
 		 */
 		CreateTableAsStmt *ctas = (CreateTableAsStmt *) utilityStmt;
 		List	   *rewritten;
@@ -403,7 +401,7 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query)));
 		Assert(list_length(rewritten) == 1);
 		ExplainOneQuery(castNode(Query, linitial(rewritten)),
-						0, ctas->into, es,
+						CURSOR_OPT_PARALLEL_OK, ctas->into, es,
 						queryString, params);
 	}
 	else if (IsA(utilityStmt, DeclareCursorStmt))
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index a18c917..0766e8d 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -384,7 +384,7 @@ refresh_matview_datafill(DestReceiver *dest, Query *query,
 	CHECK_FOR_INTERRUPTS();
 
 	/* Plan the query which will generate data for the refresh. */
-	plan = pg_plan_query(query, 0, NULL);
+	plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);
 
 	/*
 	 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f5cd65d..750d633 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1592,11 +1592,9 @@ ExecutePlan(EState *estate,
 
 	/*
 	 * If a tuple count was supplied, we must force the plan to run without
-	 * parallelism, because we might exit early.  Also disable parallelism
-	 * when writing into a relation, because no database changes are allowed
-	 * in parallel mode.
+	 * parallelism, because we might exit early.
 	 */
-	if (numberTuples || dest->mydest == DestIntoRel)
+	if (numberTuples)
 		use_parallel_mode = false;
 
 	if (use_parallel_mode)
diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out
new file mode 100644
index 0000000..e549cc2
--- /dev/null
+++ b/src/test/regress/expected/write_parallel.out
@@ -0,0 +1,80 @@
+--
+-- PARALLEL
+--
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+rollback;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index ea7b5b4..4bd10d0 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql_crosstab amutils
 
 # run by itself so it can run parallel workers
 test: select_parallel
+test: write_parallel
 
 # no relation related tests can be put in this group
 test: publication subscription
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index cf48ea7..acf2d19 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -132,6 +132,7 @@ test: tsrf
 test: rules
 test: psql_crosstab
 test: select_parallel
+test: write_parallel
 test: publication
 test: subscription
 test: amutils
diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql
new file mode 100644
index 0000000..00f9156
--- /dev/null
+++ b/src/test/regress/sql/write_parallel.sql
@@ -0,0 +1,43 @@
+--
+-- PARALLEL
+--
+
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+
+rollback;
#7Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Haribabu Kommi (#6)
1 attachment(s)
Re: utility commands benefiting from parallel plan

Hi All,

Attached is a rebased patch that supports parallelism for the queries
that run underneath some utility commands such as CREATE TABLE AS
and CREATE MATERIALIZED VIEW.

Note: This patch doesn't make the utility statement (the insert operation)
itself run in parallel. It only allows the underlying select query to use a
parallel plan if that query is eligible for one.
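
For illustration, a minimal sketch of what this enables (it mirrors the
attached write_parallel regression test; the exact plan shape depends on
the cost settings):

    set parallel_setup_cost = 0;
    set parallel_tuple_cost = 0;
    set min_parallel_table_scan_size = 0;

    explain (costs off) create table parallel_write as
        select length(stringu1) from tenk1 group by length(stringu1);
    -- expected to show a Gather over a Parallel Seq Scan on tenk1; the
    -- insert into parallel_write itself still happens only in the leader.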

Regards,
Hari Babu
Fujitsu Australia

Attachments:

0001-Make-parallel-eligible-for-utility-commands-undernea_V3.patchapplication/octet-stream; name=0001-Make-parallel-eligible-for-utility-commands-undernea_V3.patchDownload
From 0ae7b130d7ded5765b30cc6c6a37d8b882809a8c Mon Sep 17 00:00:00 2001
From: Hari Babu Kommi <kommih@localhost.localdomain>
Date: Tue, 29 Aug 2017 10:34:34 +1000
Subject: [PATCH] Make parallel eligible for utility commands underneath
 queries

This functionality adds parallel query support to the queries
that run as part of some utility commands such as
CREATE MATERIALIZED VIEW and CREATE TABLE AS.
---
 doc/src/sgml/parallel.sgml                   | 32 +++++++----
 src/backend/access/heap/heapam.c             | 10 ++--
 src/backend/commands/createas.c              |  4 +-
 src/backend/commands/explain.c               |  4 +-
 src/backend/commands/matview.c               |  2 +-
 src/backend/executor/execMain.c              |  6 +--
 src/test/regress/expected/write_parallel.out | 80 ++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule           |  1 +
 src/test/regress/serial_schedule             |  1 +
 src/test/regress/sql/write_parallel.sql      | 43 +++++++++++++++
 10 files changed, 158 insertions(+), 25 deletions(-)
 create mode 100644 src/test/regress/expected/write_parallel.out
 create mode 100644 src/test/regress/sql/write_parallel.sql

diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index 2a25f21..85f1d38 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -243,15 +243,6 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
 
     <listitem>
       <para>
-        A prepared statement is executed using a <literal>CREATE TABLE .. AS
-        EXECUTE ..</literal> statement.  This construct converts what otherwise
-        would have been a read-only operation into a read-write operation,
-        making it ineligible for parallel query.
-      </para>
-    </listitem>
-
-    <listitem>
-      <para>
         The transaction isolation level is serializable.  This situation
         does not normally arise, because parallel query plans are not
         generated when the transaction isolation level is serializable.
@@ -391,7 +382,30 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
   </para>
 
  </sect2>
+ 
+ <sect2 id="writes-using-parallel-plans">
+  <title>Write Operations Using Parallel Plan</title>
 
+   <para>
+    Currently only a limited number of write operations
+    can benefit from parallelism. They are:
+   </para> 
+   
+  <itemizedlist>
+   <listitem>
+    <para>
+     A utility statement that creates a table or materialized view,
+     such as <literal>CREATE TABLE .. AS</literal> or <literal>CREATE
+     MATERIALIZED VIEW .. AS</literal>, is supported.
+     The write operation itself is not performed concurrently by the parallel
+     workers, but the underlying query that feeds these commands is eligible
+     for a parallel plan.
+    </para>
+   </listitem>
+  </itemizedlist>
+  
+ </sect2>
+ 
  <sect2 id="parallel-plan-tips">
   <title>Parallel Plan Tips</title>
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index e29c5ad..47de10e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2595,15 +2595,13 @@ heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
 					CommandId cid, int options)
 {
 	/*
-	 * For now, parallel operations are required to be strictly read-only.
-	 * Unlike heap_update() and heap_delete(), an insert should never create a
-	 * combo CID, so it might be possible to relax this restriction, but not
-	 * without more thought and testing.
+	 * For now, parallel operations are required to be strictly read-only in
+	 * parallel worker.
 	 */
-	if (IsInParallelMode())
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot insert tuples during a parallel operation")));
+				 errmsg("cannot insert tuples in a parallel worker")));
 
 	if (relation->rd_rel->relhasoids)
 	{
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index e60210c..4d77411 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -326,8 +326,8 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
 		query = linitial_node(Query, rewritten);
 		Assert(query->commandType == CMD_SELECT);
 
-		/* plan the query --- note we disallow parallelism */
-		plan = pg_plan_query(query, 0, params);
+		/* plan the query */
+		plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, params);
 
 		/*
 		 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index 4cee357..c13efc9 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -400,8 +400,6 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		 * We have to rewrite the contained SELECT and then pass it back to
 		 * ExplainOneQuery.  It's probably not really necessary to copy the
 		 * contained parsetree another time, but let's be safe.
-		 *
-		 * Like ExecCreateTableAs, disallow parallelism in the plan.
 		 */
 		CreateTableAsStmt *ctas = (CreateTableAsStmt *) utilityStmt;
 		List	   *rewritten;
@@ -409,7 +407,7 @@ ExplainOneUtility(Node *utilityStmt, IntoClause *into, ExplainState *es,
 		rewritten = QueryRewrite(castNode(Query, copyObject(ctas->query)));
 		Assert(list_length(rewritten) == 1);
 		ExplainOneQuery(linitial_node(Query, rewritten),
-						0, ctas->into, es,
+						CURSOR_OPT_PARALLEL_OK, ctas->into, es,
 						queryString, params, queryEnv);
 	}
 	else if (IsA(utilityStmt, DeclareCursorStmt))
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index d2e0376..b6fa062 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -404,7 +404,7 @@ refresh_matview_datafill(DestReceiver *dest, Query *query,
 	CHECK_FOR_INTERRUPTS();
 
 	/* Plan the query which will generate data for the refresh. */
-	plan = pg_plan_query(query, 0, NULL);
+	plan = pg_plan_query(query, CURSOR_OPT_PARALLEL_OK, NULL);
 
 	/*
 	 * Use a snapshot with an updated command ID to ensure this query sees
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 2946a0e..6458f0b 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1689,11 +1689,9 @@ ExecutePlan(EState *estate,
 
 	/*
 	 * If the plan might potentially be executed multiple times, we must force
-	 * it to run without parallelism, because we might exit early.  Also
-	 * disable parallelism when writing into a relation, because no database
-	 * changes are allowed in parallel mode.
+	 * it to run without parallelism, because we might exit early.
 	 */
-	if (!execute_once || dest->mydest == DestIntoRel)
+	if (!execute_once)
 		use_parallel_mode = false;
 
 	if (use_parallel_mode)
diff --git a/src/test/regress/expected/write_parallel.out b/src/test/regress/expected/write_parallel.out
new file mode 100644
index 0000000..e549cc2
--- /dev/null
+++ b/src/test/regress/expected/write_parallel.out
@@ -0,0 +1,80 @@
+--
+-- PARALLEL
+--
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Finalize HashAggregate
+   Group Key: (length((stringu1)::text))
+   ->  Gather
+         Workers Planned: 4
+         ->  Partial HashAggregate
+               Group Key: length((stringu1)::text)
+               ->  Parallel Seq Scan on tenk1
+(7 rows)
+
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+rollback;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 2fd3f2b..860e8ab 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -96,6 +96,7 @@ test: rules psql_crosstab amutils
 
 # run by itself so it can run parallel workers
 test: select_parallel
+test: write_parallel
 
 # no relation related tests can be put in this group
 test: publication subscription
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 76b0de3..ef275d0 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -134,6 +134,7 @@ test: stats_ext
 test: rules
 test: psql_crosstab
 test: select_parallel
+test: write_parallel
 test: publication
 test: subscription
 test: amutils
diff --git a/src/test/regress/sql/write_parallel.sql b/src/test/regress/sql/write_parallel.sql
new file mode 100644
index 0000000..00f9156
--- /dev/null
+++ b/src/test/regress/sql/write_parallel.sql
@@ -0,0 +1,43 @@
+--
+-- PARALLEL
+--
+
+-- Serializable isolation would disable parallel query, so explicitly use an
+-- arbitrary other level.
+begin isolation level repeatable read;
+
+-- encourage use of parallel plans
+set parallel_setup_cost=0;
+set parallel_tuple_cost=0;
+set min_parallel_table_scan_size=0;
+set max_parallel_workers_per_gather=4;
+
+--
+-- Test write operations that have an underlying query that is eligible
+-- for parallel plans
+--
+explain (costs off) create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create table parallel_write as
+    select length(stringu1) from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+select length(stringu1) into parallel_write
+    from tenk1 group by length(stringu1);
+drop table parallel_write;
+
+explain (costs off) create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+create materialized view parallel_mat_view as
+    select length(stringu1) from tenk1 group by length(stringu1);
+Refresh materialized view parallel_mat_view;
+drop materialized view parallel_mat_view;
+
+prepare prep_stmt as select length(stringu1) from tenk1 group by length(stringu1);
+explain (costs off) create table parallel_write as execute prep_stmt;
+create table parallel_write as execute prep_stmt;
+drop table parallel_write;
+
+rollback;
-- 
1.8.3.1

#8Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Haribabu Kommi (#7)
Re: utility commands benefiting from parallel plan

On Fri, Sep 1, 2017 at 12:31 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Hi All,

Attached is a rebased patch that supports parallelism for the queries
that run underneath some utility commands such as CREATE TABLE AS
and CREATE MATERIALIZED VIEW.

Note: This patch doesn't make the utility statement (the insert operation)
itself run in parallel. It only allows the underlying select query to use a
parallel plan if that query is eligible for one.

Here is my feedback for this patch:

- The patch is working as expected; all regression tests are passing
- I agree with Dilip that having a similar mechanism for 'insert into
select...' statements would add more value to the patch, but even so
this looks like a good idea for extending parallelism to at least a few
of the write operations

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


#9Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Rafia Sabih (#8)
Re: utility commands benefiting from parallel plan

On Wed, Sep 13, 2017 at 4:17 PM, Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

On Fri, Sep 1, 2017 at 12:31 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Hi All,

Attached is a rebased patch that supports parallelism for the queries
that run underneath some utility commands such as CREATE TABLE AS
and CREATE MATERIALIZED VIEW.

Note: This patch doesn't make the utility statement (the insert operation)
itself run in parallel. It only allows the underlying select query to use a
parallel plan if that query is eligible for one.

Here is my feedback for this patch:

- The patch is working as expected; all regression tests are passing

Thanks for the review.

- I agree with Dilip that having a similar mechanism for 'insert into
select...' statements would add more value to the patch, but even so
this looks like a good idea for extending parallelism to at least a few
of the write operations

Yes, I also agree that supporting 'insert into select' would provide more
benefit. I already tried to support that in [1], but it has many
drawbacks, especially with triggers. To support parallel query properly
for DML, I feel the ParallelMode logic needs an update so that the
errors raised by the PreventCommandIfParallelMode() function can be
avoided by identifying whether a nested query operation should execute
only in the backend, and so on.

As the current patch covers the DDL-style cases that benefit from
parallel query, I didn't add 'insert into select' support to it; even
without that, the patch provides a benefit on its own. I will work on
DML write support with parallel query as a separate patch.
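
To make the boundary concrete, a rough sketch (hypothetical target table
name; the same query that can get a Gather under CREATE TABLE AS stays
serial here, because the planner only considers parallel plans for SELECT):

    create table existing_target (len int);
    explain (costs off) insert into existing_target
        select length(stringu1) from tenk1 group by length(stringu1);
    -- no Gather is expected in this plan, and triggers on an existing
    -- target table are one of the complications mentioned above.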

[1]: /messages/by-id/CAJrrPGfo58TrYxnqwnFAo4+tYr8wUH-oC0dJ7V9x7gAOZeaz+Q@mail.gmail.com

Regards,
Hari Babu
Fujitsu Australia

#10Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Haribabu Kommi (#9)
Re: utility commands benefiting from parallel plan

On Wed, Sep 13, 2017 at 2:29 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

On Wed, Sep 13, 2017 at 4:17 PM, Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

On Fri, Sep 1, 2017 at 12:31 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Hi All,

Attached is a rebased patch that supports parallelism for the queries
that run underneath some utility commands such as CREATE TABLE AS
and CREATE MATERIALIZED VIEW.

Note: This patch doesn't make the utility statement (the insert operation)
itself run in parallel. It only allows the underlying select query to use a
parallel plan if that query is eligible for one.

Here is my feedback for this patch:

- The patch is working as expected; all regression tests are passing

Thanks for the review.

- I agree with Dilip that having a similar mechanism for 'insert into
select...' statements would add more value to the patch, but even so
this looks like a good idea for extending parallelism to at least a few
of the write operations

Yes, I also agree that supporting 'insert into select' would provide more
benefit. I already tried to support that in [1], but it has many
drawbacks, especially with triggers. To support parallel query properly
for DML, I feel the ParallelMode logic needs an update so that the
errors raised by the PreventCommandIfParallelMode() function can be
avoided by identifying whether a nested query operation should execute
only in the backend, and so on.

As the current patch covers the DDL-style cases that benefit from
parallel query, I didn't add 'insert into select' support to it; even
without that, the patch provides a benefit on its own. I will work on
DML write support with parallel query as a separate patch.

Sounds sensible. In that case, I'll be marking this patch ready for committer.

[1] -
/messages/by-id/CAJrrPGfo58TrYxnqwnFAo4+tYr8wUH-oC0dJ7V9x7gAOZeaz+Q@mail.gmail.com

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


#11Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Rafia Sabih (#10)
Re: utility commands benefiting from parallel plan

On Thu, Sep 14, 2017 at 2:42 PM, Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

On Wed, Sep 13, 2017 at 2:29 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

On Wed, Sep 13, 2017 at 4:17 PM, Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

On Fri, Sep 1, 2017 at 12:31 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Hi All,

Attached is a rebased patch that supports parallelism for the queries
that run underneath some utility commands such as CREATE TABLE AS
and CREATE MATERIALIZED VIEW.

Note: This patch doesn't make the utility statement (the insert operation)
itself run in parallel. It only allows the underlying select query to use a
parallel plan if that query is eligible for one.

Here is my feedback for this patch:

- The patch is working as expected; all regression tests are passing

Thanks for the review.

- I agree with Dilip that having a similar mechanism for 'insert into
select...' statements would add more value to the patch, but even so
this looks like a good idea for extending parallelism to at least a few
of the write operations

Yes, I also agree that supporting 'insert into select' would provide more
benefit. I already tried to support that in [1], but it has many
drawbacks, especially with triggers. To support parallel query properly
for DML, I feel the ParallelMode logic needs an update so that the
errors raised by the PreventCommandIfParallelMode() function can be
avoided by identifying whether a nested query operation should execute
only in the backend, and so on.

As the current patch covers the DDL-style cases that benefit from
parallel query, I didn't add 'insert into select' support to it; even
without that, the patch provides a benefit on its own. I will work on
DML write support with parallel query as a separate patch.

Sounds sensible. In that case, I'll be marking this patch ready for
committer.

Thanks for the review.

Regards,
Hari Babu
Fujitsu Australia

#12Robert Haas
robertmhaas@gmail.com
In reply to: Haribabu Kommi (#11)
Re: utility commands benefiting from parallel plan

On Fri, Sep 15, 2017 at 2:22 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Thanks for the review.

I committed this patch with some cosmetic changes. I think the fact
that several people have asked for this indicates that, even without
making some of the more complicated cases work, this has some value.
I am not convinced it is safe in any case other than when the DML
command is both creating and populating the table, so I removed
REFRESH MATERIALIZED VIEW support from the patch and worked over the
documentation and comments to a degree.

The problem with a case like REFRESH MATERIALIZED VIEW is that there's
nothing to prevent something that gets run in the course of the query
from trying to access the view (and the heavyweight lock won't prevent
that, due to group locking). That's probably a stupid thing to do,
but it can't be allowed to break the world. The other cases are safe
from that particular problem because the table doesn't exist yet.
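
A contrived sketch of the kind of access in question (hypothetical names;
the function would have to be marked PARALLEL SAFE, wrongly, for a worker
to run it at all):

    create function peek_self() returns bigint
        language plpgsql parallel safe  -- mislabeled; it reads the matview
    as $$ begin return (select count(*) from mv_self); end $$;

    create materialized view mv_self as
        select peek_self() as n from tenk1
        with no data;

    -- refresh materialized view mv_self;
    -- A worker executing peek_self() would open mv_self while the leader
    -- holds AccessExclusiveLock on it, and group locking would let it
    -- through rather than block.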

I am still slightly nervous that there may be some other problem that
none of us have thought about that makes this unsafe, but we still
have quite a while until 11 goes out the door, so if there is such a
problem, maybe someone else will find it now that this is committed
and more likely to get some attention. I thought about not committing
this just in case such a problem exists, but that seemed too timid: if
we never commit anything that might have an undetected bug, we'll
never commit anything at all.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#13Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Robert Haas (#12)
Re: utility commands benefiting from parallel plan

On Fri, Oct 6, 2017 at 2:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Sep 15, 2017 at 2:22 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

Thanks for the review.

I committed this patch with some cosmetic changes. I think the fact
that several people have asked for this indicates that, even without
making some of the more complicated cases work, this has some value.
I am not convinced it is safe in any case other than when the DML
command is both creating and populating the table, so I removed
REFRESH MATERIALIZED VIEW support from the patch and worked over the
documentation and comments to a degree.

The problem with a case like REFRESH MATERIALIZED VIEW is that there's
nothing to prevent something that gets run in the course of the query
from trying to access the view (and the heavyweight lock won't prevent
that, due to group locking). That's probably a stupid thing to do,
but it can't be allowed to break the world. The other cases are safe
from that particular problem because the table doesn't exist yet.

Thanks for committing the patch.
I understand the problem with the REFRESH MATERIALIZED VIEW case.

Regards,
Hari Babu
Fujitsu Australia