Expose Parallelism counters planned/execute in pg_stat_statements
Hi all:
Here's a patch that adds planned/executed parallelism counters to
pg_stat_statements, as a way to track whether queries are being planned
and executed with parallelism. This can help determine whether the
configuration is appropriate or the hardware is sufficient.
We decided to store, per query, the number of times it was planned with
parallelism and the number of times it was executed with parallelism.
Regards
Anthony
Attachments:
v1-0001-pg_stat_statements_counters-parallelism.patch (text/x-patch; charset=UTF-8), +203/-10 lines
On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
Hi all:
Here's a patch that adds planned/executed parallelism counters to
pg_stat_statements, as a way to track whether queries are being planned
and executed with parallelism. This can help determine whether the
configuration is appropriate or the hardware is sufficient.
+1, I was missing something like this before, but it didn't occur to me to use
PSS:
/messages/by-id/20200310190142.GB29065@telsasoft.com
My hope is to answer questions like these:
. is query (ever? usually?) using parallel paths?
. is query usefully using parallel paths?
. what queries are my max_parallel_workers(_per_process) being used for ?
. Are certain longrunning or frequently running queries which are using
parallel paths using all max_parallel_workers and precluding other queries
from using parallel query ? Or, are semi-short queries sometimes precluding
longrunning queries from using parallelism, when the long queries would
better benefit ?
This patch is storing the number of times the query was planned/executed using
parallelism, but not the number of workers. Would it make sense to instead
store the *number* of workers launched/planned ? Otherwise, it might be
that a query is consistently planned to use a large number of workers, but then
runs with few. I'm referring to the fields shown in "explain/analyze". (Then,
the 2nd field should be renamed to "launched").
Workers Planned: 2
Workers Launched: 2
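For reference, a hedged sketch of where those two fields come from (the cost settings below are just one way to coax a parallel plan out of a small query; actual worker counts will vary):

```sql
-- Hedged illustration: make parallel plans cheap so even a small query
-- gets one, then read the worker counts from EXPLAIN ANALYZE.
-- pg_class is just a convenient built-in table.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;

EXPLAIN (ANALYZE, COSTS OFF)
SELECT count(*) FROM pg_class;
-- Expect lines such as:
--   Workers Planned: 2
--   Workers Launched: 2
-- "Launched" can be lower than "Planned" when the max_parallel_workers
-- pool is exhausted at execution time.
```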
I don't think this is doing the right thing for prepared statements, like
PQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;
Right now, the docs say that it shows the "number of times the statement was
planned to use parallelism", but the planning counter is incremented during
each execution. PSS already shows "calls" and "plans" separately. The
documentation doesn't mention prepared statements as a reason why they wouldn't
match, which seems like a deficiency.
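A quick way to see the calls/plans divergence being described (a sketch; it assumes pg_stat_statements is loaded and pg_stat_statements.track_planning = on so the "plans" column is populated):

```sql
-- With a prepared statement, a cached plan can be reused across
-- executions, so "calls" grows while "plans" does not.
PREPARE p AS SELECT count(*) FROM pg_class;
EXECUTE p;
EXECUTE p;
EXECUTE p;

SELECT calls, plans
FROM pg_stat_statements
WHERE query LIKE '%pg_class%';
-- A "planned with parallelism" counter that is bumped once per
-- execution would therefore overcount relative to actual planning.
```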
This currently doesn't count parallel workers used by utility statements, such
as CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If that's
not easy to do, mention that in the docs as a limitation.
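For context, a hedged sketch of the utility-statement parallelism referred to above; maintenance parallelism is governed by its own GUC, separate from parallel query (the table and column names here are hypothetical):

```sql
-- Parallel workers for maintenance commands come from a separate pool
-- setting than parallel query workers.
SET max_parallel_maintenance_workers = 4;
CREATE INDEX big_idx ON big_table (some_col);  -- may use a parallel build
VACUUM (PARALLEL 4) big_table;                 -- parallel vacuum (v13+)
```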
You should try to add some tests to contrib/pg_stat_statements/sql, or add
a parallelism test to an existing test. Note that the number of parallel
workers launched isn't stable, so you can't test that part.
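One way a test could sidestep that instability is to assert only on stable columns; a sketch of what such a test might look like (the GUC settings force a parallel plan; the counter column name is hypothetical, standing in for whatever the patch adds):

```sql
-- Force a parallel plan deterministically.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET max_parallel_workers_per_gather = 2;

SELECT pg_stat_statements_reset();
SELECT count(*) FROM pg_class;

-- Assert only that the statement was planned with parallelism
-- (hypothetical column name), never on the unstable worker count.
SELECT parallel_planned > 0 AS was_parallel
FROM pg_stat_statements
WHERE query LIKE '%pg_class%';
```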
You modified pgss_store() to take two booleans, but pass "NULL" instead of
"false". Curiously, of all the compilers in cirrusci, only MSVC complained.
"planed" is actually spelled "planned", with two enns.
The patch has some leading/trailing whitespace (maybe shown by git log
depending on your configuration).
Please add this patch to the next commitfest.
https://commitfest.postgresql.org/39/
--
Justin
On 21-07-22 20:35, Justin Pryzby wrote:
On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
Hi all:
Here's a patch that adds planned/executed parallelism counters to
pg_stat_statements, as a way to track whether queries are being planned
and executed with parallelism. This can help determine whether the
configuration is appropriate or the hardware is sufficient.
+1, I was missing something like this before, but it didn't occur to me to use
PSS:
First of all, thanks for reviewing the patch and for the comments
/messages/by-id/20200310190142.GB29065@telsasoft.com
My hope is to answer questions like these:
. is query (ever? usually?) using parallel paths?
. is query usefully using parallel paths?
. what queries are my max_parallel_workers(_per_process) being used for ?
. Are certain longrunning or frequently running queries which are using
parallel paths using all max_parallel_workers and precluding other queries
from using parallel query ? Or, are semi-short queries sometimes precluding
longrunning queries from using parallelism, when the long queries would
better benefit ?
This patch is storing the number of times the query was planned/executed using
parallelism, but not the number of workers. Would it make sense to instead
store the *number* of workers launched/planned ? Otherwise, it might be
that a query is consistently planned to use a large number of workers, but then
runs with few. I'm referring to the fields shown in "explain/analyze". (Then,
the 2nd field should be renamed to "launched").
Workers Planned: 2
Workers Launched: 2
The main idea of the patch is to store the number of times the statements
were planned and executed in parallel, not the number of workers used in
the execution. Of course, what you mention could be helpful; we will
review how it can be achieved.
I don't think this is doing the right thing for prepared statements, like
PQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;
Right now, the docs say that it shows the "number of times the statement was
planned to use parallelism", but the planning counter is incremented during
each execution. PSS already shows "calls" and "plans" separately. The
documentation doesn't mention prepared statements as a reason why they wouldn't
match, which seems like a deficiency.
We will check it and see how to fix it.
This currently doesn't count parallel workers used by utility statements, such
as CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If that's
not easy to do, mention that in the docs as a limitation.
We will update the documentation with information related to this comment
You should try to add some tests to contrib/pg_stat_statements/sql, or add
a parallelism test to an existing test. Note that the number of parallel
workers launched isn't stable, so you can't test that part.
You modified pgss_store() to take two booleans, but pass "NULL" instead of
"false". Curiously, of all the compilers in cirrusci, only MSVC complained.
"planed" is actually spelled "planned", with two enns.
The patch has some leading/trailing whitespace (maybe shown by git log
depending on your configuration).
OK, we will fix it
Please add this patch to the next commitfest.
https://commitfest.postgresql.org/39/
Hi,
On Fri, Jul 22, 2022 at 11:17:52AM -0400, Anthony Sotolongo wrote:
On 21-07-22 20:35, Justin Pryzby wrote:
On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
Hi all:
Here's a patch that adds planned/executed parallelism counters to
pg_stat_statements, as a way to track whether queries are being planned
and executed with parallelism. This can help determine whether the
configuration is appropriate or the hardware is sufficient.
+1, I was missing something like this before, but it didn't occur to me to use
PSS:
First of all, thanks for reviewing the patch and for the comments
/messages/by-id/20200310190142.GB29065@telsasoft.com
My hope is to answer questions like these:
. is query (ever? usually?) using parallel paths?
. is query usefully using parallel paths?
. what queries are my max_parallel_workers(_per_process) being used for ?
. Are certain longrunning or frequently running queries which are using
parallel paths using all max_parallel_workers and precluding other queries
from using parallel query ? Or, are semi-short queries sometimes precluding
longrunning queries from using parallelism, when the long queries would
better benefit ?
This patch is storing the number of times the query was planned/executed using
parallelism, but not the number of workers. Would it make sense to instead
store the *number* of workers launched/planned ? Otherwise, it might be
that a query is consistently planned to use a large number of workers, but then
runs with few. I'm referring to the fields shown in "explain/analyze". (Then,
the 2nd field should be renamed to "launched").
Workers Planned: 2
Workers Launched: 2
The main idea of the patch is to store the number of times the statements
were planned and executed in parallel, not the number of workers used in
the execution. Of course, what you mention could be helpful; we will
review how it can be achieved.
I think you would need both pieces of information.
With your current patch it only says whether the plan and execution had
parallelism enabled, but not whether parallelism could actually be used at
all. It gives some information, but it's not that useful on its own.
Also, a cumulative number of workers isn't really useful if you don't know
what fraction of the number of executions (or plannings) they refer to.
That being said, I'm not sure exactly how the information about the number
of workers can be exposed, as there might be multiple gathers per plan and
AFAIK they can run at different parts of the query execution. So in some
cases having a total of 3 workers planned means that you ideally needed 3
workers available at the same time, and in other cases it might be only 2
or even 1.
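The multiple-gathers concern can be made concrete with a hedged sketch (the table names are hypothetical and the plan shape varies by version and costing): a single statement can contain more than one Gather, each requesting workers at a different point of execution.

```sql
-- Each branch of the UNION ALL can get its own Gather node, so a
-- "total workers planned" figure for the statement does not say how
-- many workers were needed *simultaneously*.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM big_table_a
UNION ALL
SELECT count(*) FROM big_table_b;
-- One possible shape:
--   Append
--     -> Finalize Aggregate
--          -> Gather   (Workers Planned: 2)
--     -> Finalize Aggregate
--          -> Gather   (Workers Planned: 1)
```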
On 22-07-22 12:08, Julien Rouhaud wrote:
Hi,
On Fri, Jul 22, 2022 at 11:17:52AM -0400, Anthony Sotolongo wrote:
On 21-07-22 20:35, Justin Pryzby wrote:
On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
Hi all:
Here's a patch that adds planned/executed parallelism counters to
pg_stat_statements, as a way to track whether queries are being planned
and executed with parallelism. This can help determine whether the
configuration is appropriate or the hardware is sufficient.
+1, I was missing something like this before, but it didn't occur to me to use
PSS:
First of all, thanks for reviewing the patch and for the comments
/messages/by-id/20200310190142.GB29065@telsasoft.com
My hope is to answer questions like these:
. is query (ever? usually?) using parallel paths?
. is query usefully using parallel paths?
. what queries are my max_parallel_workers(_per_process) being used for ?
. Are certain longrunning or frequently running queries which are using
parallel paths using all max_parallel_workers and precluding other queries
from using parallel query ? Or, are semi-short queries sometimes precluding
longrunning queries from using parallelism, when the long queries would
better benefit ?
This patch is storing the number of times the query was planned/executed using
parallelism, but not the number of workers. Would it make sense to instead
store the *number* of workers launched/planned ? Otherwise, it might be
that a query is consistently planned to use a large number of workers, but then
runs with few. I'm referring to the fields shown in "explain/analyze". (Then,
the 2nd field should be renamed to "launched").
Workers Planned: 2
Workers Launched: 2
The main idea of the patch is to store the number of times the statements
were planned and executed in parallel, not the number of workers used in
the execution. Of course, what you mention could be helpful; we will
review how it can be achieved.
I think you would need both pieces of information.
With your current patch it only says whether the plan and execution had
parallelism enabled, but not whether parallelism could actually be used at
all. It gives some information, but it's not that useful on its own.
The original idea of this patch was to identify when some of the
circumstances occur under which it is impossible to execute a parallel
plan in parallel at execution time, as mentioned in the documentation
at [1].
For example: depending on the client configuration, the execution
behavior can be different, and that can affect performance, as you can
see in the execution plans below.
From psql
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 6
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)
From JDBC (from DBeaver)
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=4383.576..4385.856 rows=398 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 0
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)
This example was also discussed in this thread [2].
With these PSS counters, it will be easy to identify when some of these
causes are happening.
[1]
https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html
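One common cause of the DBeaver-style behavior above (a hedged sketch): JDBC drivers that set a fetch size run the query through a cursor/portal that can be suspended, and per the documentation at [1] such executions cannot use parallelism.

```sql
-- The same query that gets a parallel plan in plain execution runs
-- without workers when fetched through a cursor, because execution
-- that can be suspended mid-run must disable parallelism.
BEGIN;
DECLARE c CURSOR FOR SELECT count(*) FROM pg_class;
FETCH 10 FROM c;   -- executed with Workers Launched: 0
CLOSE c;
COMMIT;
```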
Also, a cumulated number of workers isn't really useful if you don't know what
fraction of the number of executions (or planning) they refer to.
We will try to investigate how to do this.
That being said, I'm not sure exactly how the information about the number
of workers can be exposed, as there might be multiple gathers per plan and
AFAIK they can run at different parts of the query execution. So in some
cases having a total of 3 workers planned means that you ideally needed 3
workers available at the same time, and in other cases it might be only 2
or even 1.
Hi,
On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:
On 22-07-22 12:08, Julien Rouhaud wrote:
With your current patch it only says whether the plan and execution had
parallelism enabled, but not whether parallelism could actually be used at
all. It gives some information, but it's not that useful on its own.
The original idea of this patch was to identify when some of the
circumstances occur under which it is impossible to execute a parallel
plan in parallel at execution time, as mentioned in the documentation
at [1].
For example: depending on the client configuration, the execution
behavior can be different, and that can affect performance, as you can
see in the execution plans below.
From psql
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 6
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)
From JDBC (from DBeaver)
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=4383.576..4385.856 rows=398 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 0
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)
This example was also discussed in this thread [2].
With these PSS counters, it will be easy to identify when some of these
causes are happening.
I agree it can be hard to identify, but I don't think that your proposed
approach is enough to be able to do so. There's no guarantee of an exact
1:1 mapping between planning and execution, so you could totally see the
same value for parallel_planned and parallel_exec and still have the
DBeaver behavior happening.
If you want to be able to distinguish "plan was parallel but execution was
forced to disable it" from "plan wasn't parallel, so neither was the
execution", you need specific counters for both situations.
On 23-07-22 00:03, Julien Rouhaud wrote:
Hi,
On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:
On 22-07-22 12:08, Julien Rouhaud wrote:
With your current patch it only says whether the plan and execution had
parallelism enabled, but not whether parallelism could actually be used at
all. It gives some information, but it's not that useful on its own.
The original idea of this patch was to identify when some of the
circumstances occur under which it is impossible to execute a parallel
plan in parallel at execution time, as mentioned in the documentation
at [1].
For example: depending on the client configuration, the execution
behavior can be different, and that can affect performance, as you can
see in the execution plans below.
From psql
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 6
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)
From JDBC (from DBeaver)
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=4383.576..4385.856 rows=398 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 0
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)
This example was also discussed in this thread [2].
With these PSS counters, it will be easy to identify when some of these
causes are happening.
I agree it can be hard to identify, but I don't think that your proposed
approach is enough to be able to do so. There's no guarantee of an exact
1:1 mapping between planning and execution, so you could totally see the
same value for parallel_planned and parallel_exec and still have the
DBeaver behavior happening.
If you want to be able to distinguish "plan was parallel but execution was
forced to disable it" from "plan wasn't parallel, so neither was the
execution", you need specific counters for both situations.
Thanks for your time and feedback. Yes, we were missing some details, so
we need to rethink some points before continuing.
Hi,
On Mon, Jul 25, 2022 at 14:19 Anthony Sotolongo (asotolongo@gmail.com)
wrote:
On 23-07-22 00:03, Julien Rouhaud wrote:
Hi,
On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:
On 22-07-22 12:08, Julien Rouhaud wrote:
With your current patch it only says whether the plan and execution had
parallelism enabled, but not whether parallelism could actually be used at
all. It gives some information, but it's not that useful on its own.
The original idea of this patch was to identify when some of the
circumstances occur under which it is impossible to execute a parallel
plan in parallel at execution time, as mentioned in the documentation
at [1].
For example: depending on the client configuration, the execution
behavior can be different, and that can affect performance, as you can
see in the execution plans below.
From psql
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 6
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)
From JDBC (from DBeaver)
-> Gather Merge (cost=779747.43..795700.62 rows=126492
width=40) (actual time=4383.576..4385.856 rows=398 loops=1)
Output: t.entity_node_id, t.configuration_id,
t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
Workers Planned: 6
Workers Launched: 0
-> Partial GroupAggregate (cost=778747.33..779327.09
rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)
This example was also discussed in this thread [2].
With these PSS counters, it will be easy to identify when some of these
causes are happening.
I agree it can be hard to identify, but I don't think that your proposed
approach is enough to be able to do so. There's no guarantee of an exact
1:1 mapping between planning and execution, so you could totally see the
same value for parallel_planned and parallel_exec and still have the
DBeaver behavior happening.
If you want to be able to distinguish "plan was parallel but execution was
forced to disable it" from "plan wasn't parallel, so neither was the
execution", you need specific counters for both situations.
Thanks for your time and feedback. Yes, we were missing some details, so
we need to rethink some points before continuing.
We have rewritten the patch and added the necessary columns to record the
number of times a parallel query plan was not executed using parallelism.
We are investigating how to add more information related to the workers
created by the Gather/GatherMerge nodes, but it is not a trivial task.
Regards.
Attachments:
v2-0001-Add-parallel-counters-to-pg_stat_statements.patch (text/x-patch; charset=US-ASCII), +287/-11 lines
Hi:
We have rewritten the patch and added the necessary columns to record the
number of times a parallel query plan was not executed using parallelism.
This version includes comments on the source code and documentation.
Regards
Attachments:
v3-0001-Add-parallel-counters-to-pg_stat_statements.patch (application/octet-stream), +311/-11 lines
Hi,
On Fri, Jul 29, 2022 at 08:36:44AM -0500, Daymel Bonne Solís wrote:
We have rewritten the patch and added the necessary columns to record the
number of times a parallel query plan was not executed using parallelism.
We are investigating how to add more information related to the workers
created by the Gather/GatherMerge nodes, but it is not a trivial task.
As far as I can see the scope of the counters is now different. You said you
wanted to be able to identify when a parallel query plan cannot be executed
with parallelism, but what the fields are now showing is simply whether no
workers were launched at all. It could be because of the DBeaver behavior
you mentioned (the !es_use_parallel_mode case), but also because the
executor did try to launch parallel workers and didn't get any.
I don't think that's an improvement. With this patch if you see the
"paral_planned_not_exec" counter going up, you still don't know if this is
because of the !es_use_parallel_mode or if you simply have too many parallel
queries running at the same time, or both, and therefore can't do much with
that information. Both situations are different and in my opinion require
different (and specialized) counters to properly handle them.
Also, I don't think that paral_planned_exec and paral_planned_not_exec are good
column (and variable) names. Maybe something like
"parallel_exec_count" and "forced_non_parallel_exec_count" (assuming it's based
on a parallel plan and !es_use_parallel_mode).
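Under that proposed naming (a sketch; both columns are hypothetical until a new patch version lands), a monitoring query could then separate the two cases:

```sql
-- Hypothetical columns based on the naming proposed above:
--   parallel_exec_count            : executions that ran the plan in parallel
--   forced_non_parallel_exec_count : parallel plans forced to run serially
--                                    (the !es_use_parallel_mode case)
SELECT query,
       calls,
       parallel_exec_count,
       forced_non_parallel_exec_count
FROM pg_stat_statements
WHERE forced_non_parallel_exec_count > 0
ORDER BY forced_non_parallel_exec_count DESC;
```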
On Tue, Aug 16, 2022 at 02:58:43PM +0800, Julien Rouhaud wrote:
I don't think that's an improvement. With this patch if you see the
"paral_planned_not_exec" counter going up, you still don't know if this is
because of the !es_use_parallel_mode or if you simply have too many parallel
queries running at the same time, or both, and therefore can't do much with
that information. Both situations are different and in my opinion require
different (and specialized) counters to properly handle them.
This thread has been idle for a few weeks now, and this feedback has not
been answered. The CF entry has been marked as Returned with Feedback
(RwF).
--
Michael