proposal - log_full_scan
Hi
Today I worked on a Postgres server used for a critical service. Because the
application is very specific, we had to do the final tuning on the production
server. I fixed a lot of queries, but I am not able to detect fast queries that
do a full scan of middle-sized tables (up to 1M rows). Surely I wouldn't log
all queries; right now these queries run about 10 times per second.
It would be nice to have the possibility to log queries that do a full scan and
read more tuples than a specified limit, or that do a full scan of specified
tables.
What do you think about the proposed feature?
Regards
Pavel
On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
Today I worked on a Postgres server used for a critical service. Because the
application is very specific, we had to do the final tuning on the production
server. I fixed a lot of queries, but I am not able to detect fast queries that
do a full scan of middle-sized tables (up to 1M rows). Surely I wouldn't log
all queries; right now these queries run about 10 times per second.
It would be nice to have the possibility to log queries that do a full scan and
read more tuples than a specified limit, or that do a full scan of specified
tables. What do you think about the proposed feature?
Are you able to use auto_explain with auto_explain.log_min_duration ?
Then you can search for query logs with
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
Or can you use pg_stat_all_tables.seq_scan ?
But it seems to me that filtering on the duration would be both a more
important criterion and a more general one than "seq scan with number of rows".
| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;
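For example, assuming the csvlog output has been imported into a table named
postgres_log with a message column (both names are just placeholders for this
sketch), the filtering could look roughly like this:

SELECT split_part(message, ' ', 2)::float AS duration_ms,
       message
FROM postgres_log                 -- assumed: csvlog imported via COPY or file_fdw
WHERE message LIKE 'duration:%'   -- auto_explain log lines start with the duration
  AND message ~ 'Seq Scan .* \(actual time=[.0-9]+ rows=[0-9]{6,} loops=[0-9]+\)'
ORDER BY duration_ms DESC;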
--
Justin
On Sat, Apr 17, 2021 at 17:09, Justin Pryzby <pryzby@telsasoft.com> wrote:
Are you able to use auto_explain with auto_explain.log_min_duration ?
Unfortunately, I cannot use it. This server executes 5K queries per second,
and I am afraid to decrease log_min_duration. The logs are forwarded over the
network, and the last time users played with it they had problems with the
network.
I am in a situation where I know there are queries faster than 100 ms, and from
pg_stat_user_tables I can see that there should be full scans, but I don't see
the queries.
A full scan of this table takes about 30 ms and the table has 200K rows, so
decreasing log_min_duration to this value is very risky.
Then you can search for query logs with
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
Or can you use pg_stat_all_tables.seq_scan ?
I use pg_stat_all_tables.seq_scan and I see seq scans there. But I need to
know the related queries.
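For completeness, the statistics view can at least point at the relations being
scanned - a minimal sketch using the standard pg_stat_all_tables columns (it
identifies relations, not queries, which is exactly the limitation here):

SELECT relname, seq_scan, seq_tup_read,
       seq_tup_read / NULLIF(seq_scan, 0) AS avg_tuples_per_scan
FROM pg_stat_all_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 20;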
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
A full scan of this table takes about 30 ms and the table has 200K rows, so
decreasing log_min_duration to this value is very risky. [...]
I use pg_stat_all_tables.seq_scan and I see seq scans there. But I need to
know the related queries.
Maybe you could use pg_qualstats (https://github.com/powa-team/pg_qualstats/) for that? It will give you the list of
quals (with the underlying queryid) with a tag to specify if they were executed
as an index scan or a sequential scan. It wouldn't detect queries doing
sequential scan that don't have any qual for the underlying relations, but
those shouldn't be a concern in your use case.
If you set up some sampling, the overhead should be minimal.
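A rough sketch of that setup (hedged: the GUC, view, and column names below are
taken from the pg_qualstats README from memory, so double-check them against the
version you install):

-- postgresql.conf: shared_preload_libraries = 'pg_qualstats'
--                  pg_qualstats.sample_rate = 0.05   -- assumed GUC name, keeps overhead low
CREATE EXTENSION pg_qualstats;

-- quals evaluated as plain filters (i.e. during seq scans), with their queryid
SELECT queryid, lrelid::regclass AS relation, execution_count
FROM pg_qualstats
WHERE eval_type = 'f'
ORDER BY execution_count DESC;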
On Sat, Apr 17, 2021 at 18:54, Julien Rouhaud <rjuju123@gmail.com> wrote:
Maybe you could use pg_qualstats (https://github.com/powa-team/pg_qualstats/) for
that? [...] If you set up some sampling, the overhead should be minimal.
It has similar functionality, but there is a problem with the setup. My idea is
simpler - just
set
log_fullscan_min_tuples = 100000
or
alter table xxx set (log_fullscan_min_tuples = 0);
and then the complete query can be found in the log.
I think this can be really practical, so it could be core functionality. And it
could log the queries without quals too. Production systems can be buggy, and it
is important to find the bugs.
Regards
Pavel
On Sat, Apr 17, 2021 at 05:22:59PM +0200, Pavel Stehule wrote:
Unfortunately, I cannot use it. This server executes 5K queries per second,
and I am afraid to decrease log_min_duration. The logs are forwarded over the
network, and the last time users played with it they had problems with the
network.
..
A full scan of this table takes about 30 ms and the table has 200K rows, so
decreasing log_min_duration to this value is very risky.
auto_explain.sample_rate should allow setting a sufficiently low value of
log_min_duration. It exists since v9.6.
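For illustration, a possible combination using only existing auto_explain GUCs,
set as a superuser for the current session (the numbers are placeholders; this
is a sketch, not a recommendation):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '20ms';  -- low enough to catch the ~30 ms scans
SET auto_explain.sample_rate = 0.05;         -- log roughly 1 in 20 qualifying queries
SET auto_explain.log_analyze = on;           -- needed to see actual row counts
SET auto_explain.log_timing = off;           -- cheaper instrumentation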
--
Justin
On Sat, Apr 17, 2021 at 20:36, Justin Pryzby <pryzby@telsasoft.com> wrote:
auto_explain.sample_rate should allow setting a sufficiently low value of
log_min_duration. It exists since v9.6.
It cannot help - these queries are executed only a few times per second, while
at the same time this server executes 500 - 1000 other queries per second.
Regards
Pavel
On Sat, Apr 17, 2021 at 20:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:
It cannot help - these queries are executed only a few times per second, while
at the same time this server executes 500 - 1000 other queries per second.
Maybe this new option for the server and for auto_explain could be just a simple
log_seqscan = (minimum number of tuples from one relation)
auto_explain.log_seqscan = (minimum number of tuples from one relation)
This is a feature similar to log_temp_files. The next step could be implementing
it as a table option.
What do you think about it?
Regards
Pavel
An extension like pg_qualstats is good, but it does different work. In
complex applications I need to detect buggy (forgotten) queries - last week
I found two queries over bigger tables without predicates, so pg_qualstats
doesn't help me. This is an application for a government with a few specifics
(typical for government projects): 1) the life cycle is short (one month), 2)
there is no slow start - from the first moment the application will be used by
several hundred thousand people, 3) the application is very public - so any
issues are very interesting for the press and very unpleasant for politicians,
and in the next step for all suppliers (there are high penalties for failures),
and the admins are not happy with external extensions, 4) the budget is not too
big - there is no performance testing environment.
The first stages are covered well today. We can log and process very slow
queries and fix them immediately - with CREATE INDEX CONCURRENTLY I can do
that on production servers too without high risk.
But the detection of bad queries that are not too slow is hard. And as an
external consultant I am not able to install any external extensions into the
production environment to fix some hot issues; the risk is not acceptable for
the project managers, and I understand that. So I have to use only the tools
available in core Postgres.
Regards
Pavel
On Sun, Apr 18, 2021 at 06:21:56AM +0200, Pavel Stehule wrote:
An extension like pg_qualstats is good, but it does different work.
Yes definitely. It was just an idea if you needed something right now that
could more or less do what you needed, not saying that we shouldn't improve the
core :)
In complex applications I need to detect buggy (forgotten) queries - last week
I found two queries over bigger tables without predicates, so pg_qualstats
doesn't help me.
Also not totally helpful but powa was created to detect problematic queries in
such cases. It wouldn't say if it's because of a seq scan or not (so yes again
we need to improve that), but it would give you the slowest (or top consumer
for any resource) for a given time interval.
[...] But the detection of bad queries that are not too slow is hard. And as an
external consultant I am not able to install any external extensions into the
production environment to fix some hot issues; the risk is not acceptable for
the project managers, and I understand that. So I have to use only the tools
available in core Postgres.
Yes, I agree that having an additional and more specialized tool in core
postgres would definitely help in similar scenarios.
I think that having some kind of threshold for seq scan (like the mentioned
auto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as
you really need the plan to know why a seq scan was chosen and if it was a
reasonable choice or not.
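For clarity, the proposed knob (not an existing GUC - it is what the patch later
in this thread implements) would be used like this, with -1 disabling the
feature and 0 logging every plan that contains a seq scan:

SET auto_explain.log_seqscan = 100000;  -- log plans where a seq scan produced >= 100000 tuples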
On Sun, Apr 18, 2021 at 14:28, Julien Rouhaud <rjuju123@gmail.com> wrote:
I think that having some kind of threshold for seq scan (like the mentioned
auto_explain.log_seqscan = XXX) in auto_explain would be the best approach, as
you really need the plan to know why a seq scan was chosen and if it was a
reasonable choice or not.
I would like to write this for core and for auto_explain too. I have been in
situations where I could not use auto_explain either, although this extension is
widely used and so the risk is low.
When I detect the query, I can run EXPLAIN manually. But sure, I think it can
work well inside auto_explain.
Regards
Pavel
On Sun, Apr 18, 2021 at 16:09, Pavel Stehule <pavel.stehule@gmail.com> wrote:
I would like to write this for core and for auto_explain too. I have been in
situations where I could not use auto_explain either, although this extension is
widely used and so the risk is low. When I detect the query, I can run EXPLAIN
manually. But sure, I think it can work well inside auto_explain.
I tried to write the patch. It is harder work for core than I expected, because
there is no good information about whether a query is top-level or not, so it is
not easy to decide whether we should log the info.
On the other hand, the modification of auto_explain is easy.
I am sending the patch.
Regards
Pavel
Attachments:
auto_explain_log_seqscan.patch (text/x-patch)
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
index 445bb37191..5754e25594 100644
--- a/contrib/auto_explain/auto_explain.c
+++ b/contrib/auto_explain/auto_explain.c
@@ -17,8 +17,11 @@
#include "access/parallel.h"
#include "commands/explain.h"
#include "executor/instrument.h"
+#include "nodes/nodeFuncs.h"
+#include "parser/parsetree.h"
#include "jit/jit.h"
#include "utils/guc.h"
+#include "utils/lsyscache.h"
PG_MODULE_MAGIC;
@@ -35,6 +38,7 @@ static int auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
static int auto_explain_log_level = LOG;
static bool auto_explain_log_nested_statements = false;
static double auto_explain_sample_rate = 1;
+static int auto_explain_log_seqscan = -1; /* tuples */
static const struct config_enum_entry format_options[] = {
{"text", EXPLAIN_FORMAT_TEXT, false},
@@ -65,7 +69,7 @@ static int nesting_level = 0;
static bool current_query_sampled = false;
#define auto_explain_enabled() \
- (auto_explain_log_min_duration >= 0 && \
+ ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \
(nesting_level == 0 || auto_explain_log_nested_statements) && \
current_query_sampled)
@@ -105,6 +109,18 @@ _PG_init(void)
NULL,
NULL);
+ DefineCustomIntVariable("auto_explain.log_seqscan",
+ "Sets the minimum tuples produced by sequantial scans which plans will be logged",
+ "Zero prints all plans that contains seqscan node. -1 turns this feature off.",
+ &auto_explain_log_seqscan,
+ -1,
+ -1, INT_MAX,
+ PGC_SUSET,
+ 0,
+ NULL,
+ NULL,
+ NULL);
+
DefineCustomBoolVariable("auto_explain.log_analyze",
"Use EXPLAIN ANALYZE for plan logging.",
NULL,
@@ -274,7 +290,9 @@ explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
*/
if (nesting_level == 0)
{
- if (auto_explain_log_min_duration >= 0 && !IsParallelWorker())
+ if ((auto_explain_log_min_duration >= 0 ||
+ auto_explain_log_seqscan != -1) &&
+ !IsParallelWorker())
current_query_sampled = (random() < auto_explain_sample_rate *
((double) MAX_RANDOM_VALUE + 1));
else
@@ -295,6 +313,10 @@ explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
if (auto_explain_log_wal)
queryDesc->instrument_options |= INSTRUMENT_WAL;
}
+
+ /* We need to know number of processed rows per node */
+ if (auto_explain_log_seqscan != -1 && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
+ queryDesc->instrument_options |= INSTRUMENT_ROWS;
}
if (prev_ExecutorStart)
@@ -363,16 +385,44 @@ explain_ExecutorFinish(QueryDesc *queryDesc)
PG_END_TRY();
}
+/*
+ * Colllect relations where log_seqscan limit was exceeded
+ */
+static bool
+DetectSeqScanOverLimit(PlanState *planstate, Bitmapset **rels)
+{
+ Plan *plan = planstate->plan;
+
+ if (planstate->instrument)
+ InstrEndLoop(planstate->instrument);
+
+ if (IsA(plan, SeqScan))
+ {
+ if (planstate->instrument &&
+ planstate->instrument->ntuples >= auto_explain_log_seqscan)
+ {
+ *rels = bms_add_member(*rels, ((Scan *) plan)->scanrelid);
+ }
+ }
+
+ return planstate_tree_walker(planstate, DetectSeqScanOverLimit, rels);
+}
+
/*
* ExecutorEnd hook: log results if needed
*/
static void
explain_ExecutorEnd(QueryDesc *queryDesc)
{
- if (queryDesc->totaltime && auto_explain_enabled())
+ if (auto_explain_enabled() &&
+ (queryDesc->totaltime || auto_explain_log_seqscan != -1))
{
MemoryContext oldcxt;
double msec;
+ Bitmapset *rels = NULL;
+ StringInfoData relnames;
+
+ relnames.data = NULL;
/*
* Make sure we operate in the per-query context, so any cruft will be
@@ -386,13 +436,37 @@ explain_ExecutorEnd(QueryDesc *queryDesc)
*/
InstrEndLoop(queryDesc->totaltime);
+ if (auto_explain_log_seqscan != -1)
+ {
+ DetectSeqScanOverLimit(queryDesc->planstate, &rels);
+
+ if (rels)
+ {
+ int relid = -1;
+
+ initStringInfo(&relnames);
+
+ while ((relid = bms_next_member(rels, relid)) >= 0)
+ {
+ RangeTblEntry *rte;
+
+ if (*relnames.data != '\0')
+ appendStringInfoString(&relnames, ",");
+
+ rte = rt_fetch(relid, queryDesc->plannedstmt->rtable);
+ appendStringInfoString(&relnames, get_rel_name(rte->relid));
+ }
+ }
+ }
+
/* Log plan if duration is exceeded. */
msec = queryDesc->totaltime->total * 1000.0;
- if (msec >= auto_explain_log_min_duration)
+ if ((auto_explain_log_min_duration != -1 &&
+ msec >= auto_explain_log_min_duration) || rels)
{
ExplainState *es = NewExplainState();
- es->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+ es->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));
es->verbose = auto_explain_log_verbose;
es->buffers = (es->analyze && auto_explain_log_buffers);
es->wal = (es->analyze && auto_explain_log_wal);
@@ -427,10 +501,18 @@ explain_ExecutorEnd(QueryDesc *queryDesc)
* reported. This isn't ideal but trying to do it here would
* often result in duplication.
*/
- ereport(auto_explain_log_level,
- (errmsg("duration: %.3f ms plan:\n%s",
- msec, es->str->data),
- errhidestmt(true)));
+ if (!relnames.data)
+ ereport(auto_explain_log_level,
+ (errmsg("duration: %.3f ms plan:\n%s",
+ msec, es->str->data),
+ errhidestmt(true)));
+ else
+ {
+ ereport(auto_explain_log_level,
+ (errmsg("duration: %.3f ms, over limit seqscans: %s, plan:\n%s",
+ msec, relnames.data, es->str->data),
+ errhidestmt(true)));
+ }
}
MemoryContextSwitchTo(oldcxt);
Looking at this I like the idea in principle, but I'm not convinced that
auto_explain is the right tool for this. auto_explain is for identifying slow
queries, and what you are proposing is to identify queries with a certain
"shape" (for lack of a better term) even if they aren't slow as per the
log_min_duration setting. If log_min_duration is deemed too crude due to query
volume then sample_rate is the tool. If sample_rate is also discarded, then
pg_stat_statements seems a better option.
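As a rough sketch of that last option (standard pg_stat_statements columns; it
cannot prove a seq scan happened, it only points at statements that touch many
buffers per call even when they are individually fast):

SELECT queryid, calls,
       (shared_blks_hit + shared_blks_read) / NULLIF(calls, 0) AS avg_buffers_per_call,
       left(query, 60) AS query
FROM pg_stat_statements
ORDER BY avg_buffers_per_call DESC NULLS LAST
LIMIT 20;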
Also, why just sequential scans (apart from it being this specific use case)?
If the idea is to track aspects of execution which are deemed slow, then
tracking for example spills etc would be just as valid. Do you have thoughts
on that?
That being said, a few comments on the patch:
- (auto_explain_log_min_duration >= 0 && \
+ ((auto_explain_log_min_duration >= 0 || auto_explain_log_seqscan != -1) && \
Is there a reason to not follow the existing code and check for >= 0?
+ DefineCustomIntVariable("auto_explain.log_seqscan",
It's only a matter of time before another node is proposed for logging, and
then we'll be stuck adding log_XXXnode GUCs. Is there a more future-proof way
to do this?
+ "Sets the minimum tuples produced by sequantial scans which plans will be logged",
s/sequantial/sequential/
- es->analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+ es->analyze = (queryDesc->instrument_options && (auto_explain_log_analyze || auto_explain_log_seqscan != -1));
Turning on ANALYZE when log_analyze isn't set to True is a no-no IMO.
+ * Colllect relations where log_seqscan limit was exceeded
s/Colllect/Collect/
+ if (*relnames.data != '\0')
+ appendStringInfoString(&relnames, ",");
This should use appendStringInfoChar instead.
+ (errmsg("duration: %.3f ms, over limit seqscans: %s, plan:\n%s",
The "over limit" part is superfluous since it otherwise wouldn't be logged. If
we're prefixing something wouldn't it be more helpful to include the limit,
like: "seqscans >= %d tuples returned:". I'm not a fan of "seqscans" but
spelling it out is also quite verbose and this is grep-able.
Documentation and tests are also missing.
--
Daniel Gustafsson https://vmware.com/
Hi
On Tue, Jul 6, 2021 at 16:07, Daniel Gustafsson <daniel@yesql.se> wrote:
Looking at this I like the idea in principle, but I'm not convinced that
auto_explain is the right tool for this. auto_explain is for identifying slow
queries, and what you are proposing is to identify queries with a certain
"shape" (for lack of a better term) even if they aren't slow as per the
log_min_duration setting. If log_min_duration is deemed too crude due to query
volume then sample_rate is the tool. If sample_rate is also discarded, then
pg_stat_statements seems a better option.
I don't think pg_stat_statements can be used: a) it doesn't look at the
execution plan, so this feature could have a big overhead compared to the
current pg_stat_statements, which works only with the AST; b) pg_stat_statements
has one entry per AST, but the problem here is at the execution plan level, and
that is outside the perspective of pg_stat_statements.
Also, why just sequential scans (apart from it being this specific use case)?
If the idea is to track aspects of execution which are deemed slow, then
tracking for example spills etc would be just as valid. Do you have
thoughts
on that?
Yes, I thought about it more, and sometimes bitmap index scans are problematic
too, and index scans in nested loops can be a problem as well.
For my last customer I had to detect queries with a large bitmap index scan. I
can do it with a combination of pg_stat_statements and log checking, but this
work is not very friendly.
My current idea is to have some extension that can trace generally specified
executor nodes. Sometimes I want to say: I need to know all queries that do a
seq scan over table x where tuples processed > N. In other cases it can be
interesting to know the queries that use index x for a bitmap index scan.
Unfortunately, this idea is not well prepared. My patch was a proof of concept,
but I think it is not a good starting point. Maybe it needs some tracing API at
the executor level and some tool like "perf top", but for the executor.
Post-execution analysis is not a good direction - it has big overhead and,
mainly, it is not friendly in critical situations. I need some handy tool like
perf, but for executor nodes. I don't know how to do it effectively.
Thank you for your review and for your time, but I think it is better to
remove this patch from commit fest. I have no idea how to design this
feature well :-/
Regards
Pavel
On 6 Jul 2021, at 18:14, Pavel Stehule <pavel.stehule@gmail.com> wrote:
I thought about it more, and sometimes bitmap index scans are problematic too, and index scans in nested loops can be a problem as well.
Right. Depending on the circumstances, pretty much anything in a plan can be
something deemed problematic in some production setting.
Unfortunately, this idea is not well prepared. My patch was a proof of concept, but I think it is not a good starting point. Maybe it needs some tracing API at the executor level and some tool like "perf top", but for the executor. Post-execution analysis is not a good direction - it has big overhead and, mainly, it is not friendly in critical situations. I need some handy tool like perf, but for executor nodes. I don't know how to do it effectively.
These are hot codepaths so adding tracing instrumentation which collects enough
information to be useful, and which can be drained fast enough to not be a
resource problem is tricky.
Thank you for your review and for your time, but I think it is better to remove this patch from commit fest. I have no idea how to design this feature well :-/
No worries, I hope we see an updated approach at some time. In the meantime
I'm marking this patch Returned with Feedback.
--
Daniel Gustafsson https://vmware.com/