Allow auto_explain to log plans before queries are executed

Started by Yugo Nagata · about 6 years ago · 15 messages · pgsql-hackers
#1 Yugo Nagata
nagata@sraoss.co.jp

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query execution, so if
a query gets stuck its plan cannot be logged. If we could see the
plans of stuck queries, we might get some hints for resolving the
problem. This is useful when you are testing and debugging an
application whose queries get stuck in some situations.

This patch adds a new option, log_before_query, to auto_explain.
Setting auto_explain.log_before_query logs all plans before queries
are executed, regardless of auto_explain.log_min_duration, unless
that is set to -1 to disable logging. If log_before_query is enabled,
only the duration is logged after query execution, as in the case
when both log_statement and log_min_duration_statement are enabled.
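For readers following along, the proposal boils down to a configuration like the following. This is only a sketch: log_before_query is the GUC proposed by this patch, not part of any released auto_explain, and its boolean form is assumed from the description above.

```
# postgresql.conf
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '10s'   # existing auto_explain option
auto_explain.log_before_query = on      # proposed by this patch (assumed boolean)
```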

Regards,
Yugo Nagata

--
Yugo NAGATA <nagata@sraoss.co.jp>

Attachments:

0001-Allow-to-log-plans-before-queries-are-executed.patch (text/x-diff, +109 −36)
#2 Julien Rouhaud
rjuju123@gmail.com
In reply to: Yugo Nagata (#1)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

#3 Yugo Nagata
nagata@sraoss.co.jp
In reply to: Julien Rouhaud (#2)
Re: Allow auto_explain to log plans before queries are executed

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better to me too.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Regards,
Yugo Nagata

--
Yugo NAGATA <nagata@sraoss.co.jp>

#4 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Yugo Nagata (#3)
Re: Allow auto_explain to log plans before queries are executed

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#5 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Kyotaro Horiguchi (#4)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 6:16 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>
wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

regards.

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

Regards

Pavel


--
Kyotaro Horiguchi
NTT Open Source Software Center

#6 Yugo Nagata
nagata@sraoss.co.jp
In reply to: Kyotaro Horiguchi (#4)
Re: Allow auto_explain to log plans before queries are executed

On Thu, 27 Feb 2020 14:14:41 +0900 (JST)
Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

That makes sense. The initial purpose is to log the plans of stuck queries,
not of all queries, so your suggestion of doing it only when the query
fails is reasonable. I'll consider it a little more.

Regards,
Yugo Nagata

--
Yugo NAGATA <nagata@sraoss.co.jp>

#7 Kyotaro Horiguchi
horikyota.ntt@gmail.com
In reply to: Pavel Stehule (#5)
Re: Allow auto_explain to log plans before queries are executed

At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in

Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

...

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

If we need a live plan dump of a running query, we could do that using
some kind of inter-backend triggering. (I'm not sure whether PG offers an
inter-backend signalling facility usable by extensions..)

=# select auto_explain.log_plan_backend(12345);

postgresql.log:
LOG: requested plan dump: <blah, blah>..

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

=# select auto_explain.dump_plan_backend(12345);
pid | query | plan
-------+-------------+-------------------
12345 | SELECT 1; | Result (cost=....) (actual..)
(1 row)

Doesn't DSA work? I think it would be easier to handle than files.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#8 Yugo Nagata
nagata@sraoss.co.jp
In reply to: Pavel Stehule (#5)
Re: Allow auto_explain to log plans before queries are executed

On Thu, 27 Feb 2020 06:27:24 +0100
Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 6:16 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>
wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

regards.

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

I think the "query debugger" feature you proposed is out of scope for the
auto_explain module. I also think a feature to analyze a running query
online would be great, but that will need a separate discussion on a new
module or feature.

Regards,
Yugo Nagata

--
Yugo NAGATA <nagata@sraoss.co.jp>

#9 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Kyotaro Horiguchi (#7)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 6:58 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@gmail.com>
wrote in

Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

...

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

If we need a live plan dump of a running query, We could do that using
some kind of inter-backend triggering. (I'm not sure if PG offers
inter-backend signalling facility usable by extensions..)

=# select auto_explain.log_plan_backend(12345);

postgresql.log:
LOG: requested plan dump: <blah, blah>..

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

=# select auto_explain.dump_plan_backend(12345);
pid | query | plan
-------+-------------+-------------------
12345 | SELECT 1; | Result (cost=....) (actual..)
(1 row)

Doesn't DSA work? I think it would be easier to handle than files.

I am not sure. There are hard questions about when the allocated shared
memory should be deallocated.

Maybe using a third process would be the nicest, safest solution.

The execution plans could be pushed to some background worker's memory,
and that process could work like the stats collector.


regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

#10 Pavel Stehule
pavel.stehule@gmail.com
In reply to: Yugo Nagata (#8)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 7:01 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:

On Thu, 27 Feb 2020 06:27:24 +0100
Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 6:16 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>
wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

regards.

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

I think the "query debugger" feature you proposed is out of scope for the
auto_explain module. I also think a feature to analyze a running query
online would be great, but that will need a separate discussion on a new
module or feature.

Sure. My note was about using auto_explain like a query debugger. That
doesn't make much sense, and from this perspective the original proposal
to log the plan before execution makes more sense.

You could log every plan with a higher cost than some constant.


Regards,
Yugo Nagata

--
Yugo NAGATA <nagata@sraoss.co.jp>

#11 Julien Rouhaud
rjuju123@gmail.com
In reply to: Pavel Stehule (#10)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 7:01 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:

On Thu, 27 Feb 2020 06:27:24 +0100
Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 6:16 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:

Hello.

At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>
wrote in

On Wed, 26 Feb 2020 18:51:21 +0100
Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:

Hi,

Attached is a patch for allowing auto_explain to log plans before
queries are executed.

Currently, auto_explain logs plans only after query executions,
so if a query gets stuck its plan could not be logged. If we can
know plans of stuck queries, we may get some hints to resolve the
stuck. This is useful when you are testing and debugging your
application whose queries get stuck in some situations.

Indeed that could be useful.

This patch adds new option log_before_query to auto_explain.

Maybe "log_before_execution" would be better?

Thanks! This seems better also to me.

Setting auto_explain.log_before_query option logs all plans before
queries are executed regardless of auto_explain.log_min_duration
unless this is set -1 to disable logging. If log_before_query is
enabled, only duration time is logged after query execution as in
the case of when both log_statement and log_min_duration_statement
are enabled.

I'm not sure about this behavior. The final explain plan is needed at least if
log_analyze, log_buffers or log_timing are enabled.

In the current patch, log_before_query (to be renamed log_before_execution)
has no effect if log_analyze is enabled, in order to avoid logging the
same plan twice. Instead, would it be better to always log the plan twice,
before and after execution, when log_before_query is enabled,
regardless of log_min_duration or log_analyze?

Honestly, I don't think showing plans for all queries is useful
behavior.

If you allow the stuck query to be canceled, showing the plan in a
PG_FINALLY() block in explain_ExecutorRun would work, which would look
like this:

explain_ExecutorRun()
{
    ...
    PG_TRY();
    {
        ...
        else
            standard_ExecutorRun();
        nesting_level--;
    }
    PG_CATCH();
    {
        nesting_level--;

        if (auto_explain_log_failed_plan &&
            <maybe the time elapsed from start exceeds min_duration>)
        {
            'show the plan'
        }
    }
}

regards.

It can work, but it is still not a good enough solution. We need a "query
debugger" that allows getting some query execution metrics online.

There was a problem with memory management for passing plans between
processes. Can we use temp files instead of shared memory?

I think the "query debugger" feature you proposed is out of scope for the
auto_explain module. I also think a feature to analyze a running query
online would be great, but that will need a separate discussion on a new
module or feature.

Sure. My note was about using auto_explain like a query debugger. That
doesn't make much sense, and from this perspective the original proposal
to log the plan before execution makes more sense.

You could log every plan with a higher cost than some constant.

Yes, I thought about that too. If you're not in an OLAP environment
(or have a specific user running a few expensive queries), you could
set up an auto_explain.log_before_execution_min_cost.
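A hedged sketch of what this suggestion could look like in postgresql.conf. Both GUC names are hypothetical, proposed only in this thread, and the cost threshold is an arbitrary example value:

```
shared_preload_libraries = 'auto_explain'
auto_explain.log_before_execution = on               # hypothetical GUC
auto_explain.log_before_execution_min_cost = 100000  # hypothetical GUC, example value
```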

#12 legrand legrand
legrand_legrand@hotmail.com
In reply to: Julien Rouhaud (#11)
Re: Allow auto_explain to log plans before queries are executed

Hi,

That feature for dumping plans with auto_explain is already available in
https://github.com/legrandlegrand/pg_stat_sql_plans

This is a hybrid extension combining auto_explain and pg_stat_statements,
adding a planid and tracking metrics even on error, ..., ...

With
pg_stat_sql_plans.track_planid = true
pg_stat_sql_plans.explain = true
--> it writes the explain plan to the log file after planning, and only
once per (queryid, planid), so there is no need for sampling.

And with
pg_stat_sql_plans.track = 'all'
--> the function pgssp_backend_queryid(pid) retrieves the (nested) queryid
of a stuck statement, which permits retrieving its plan (by its queryid)
from the logs.
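Collected as a postgresql.conf sketch. The settings are as quoted in this message; loading the library via shared_preload_libraries is an assumption, so check the pg_stat_sql_plans README for the exact setup:

```
shared_preload_libraries = 'pg_stat_sql_plans'  # assumed; see the extension's README
pg_stat_sql_plans.track_planid = true
pg_stat_sql_plans.explain = true
pg_stat_sql_plans.track = 'all'
```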

Regards
PAscal

--
Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html

#13 Julien Rouhaud
rjuju123@gmail.com
In reply to: Julien Rouhaud (#11)
Re: Allow auto_explain to log plans before queries are executed

On Thu, Feb 27, 2020 at 7:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 7:01 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:

I think the "query debugger" feature you proposed is out of scope for the
auto_explain module. I also think a feature to analyze a running query
online would be great, but that will need a separate discussion on a new
module or feature.

Sure. My note was about using auto_explain like a query debugger. That
doesn't make much sense, and from this perspective the original proposal
to log the plan before execution makes more sense.

You could log every plan with a higher cost than some constant.

Yes, I thought about that too. If you're not in an OLAP environment
(or have a specific user running a few expensive queries), you could
set up an auto_explain.log_before_execution_min_cost.

There was some discussion but no clear consensus on what should really
be done. I'm marking the patch as waiting on author which seems more
accurate. Feel free to switch it back if that's a wrong move.

#14 legrand legrand
legrand_legrand@hotmail.com
In reply to: Kyotaro Horiguchi (#7)
Re: Allow auto_explain to log plans before queries are executed

Kyotaro Horiguchi-4 wrote:

At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@> wrote in

Kyotaro Horiguchi <horikyota.ntt@> wrote:

If we need a live plan dump of a running query, We could do that using
some kind of inter-backend triggering. (I'm not sure if PG offers
inter-backend signalling facility usable by extensions..)

=# select auto_explain.log_plan_backend(12345);

postgresql.log:
LOG: requested plan dump: <blah, blah>..

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center

Did you know
https://www.postgresql-archive.org/pg-show-plans-Seeing-all-execution-plans-at-once-td6129231.html
?

Regards
PAscal


#15 David Steele
david@pgmasters.net
In reply to: Julien Rouhaud (#13)
Re: Allow auto_explain to log plans before queries are executed

On 3/5/20 8:46 AM, Julien Rouhaud wrote:

On Thu, Feb 27, 2020 at 7:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:

On Thu, Feb 27, 2020 at 7:01 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:

I think the "query debugger" feature you proposed is out of scope for the
auto_explain module. I also think a feature to analyze a running query
online would be great, but that will need a separate discussion on a new
module or feature.

Sure. My note was about using auto_explain like a query debugger. That
doesn't make much sense, and from this perspective the original proposal
to log the plan before execution makes more sense.

You could log every plan with a higher cost than some constant.

Yes, I thought about that too. If you're not in an OLAP environment
(or have a specific user running a few expensive queries), you could
set up an auto_explain.log_before_execution_min_cost.

There was some discussion but no clear consensus on what should really
be done. I'm marking the patch as waiting on author which seems more
accurate. Feel free to switch it back if that's a wrong move.

There doesn't seem to be any progress towards a consensus, so I'm marking
this Returned with Feedback.

Regards,
--
-David
david@pgmasters.net