auto_explain sample rate

Started by Craig Ringer, almost 11 years ago · 32 messages · pgsql-hackers
#1Craig Ringer
craig@2ndquadrant.com

Hi all

It's sometimes desirable to collect auto_explain data with ANALYZE in order
to track down hard-to-reproduce issues, but the performance impacts can be
pretty hefty on the DB.

I'm inclined to add a sample rate to auto_explain so that it can trigger
only on x percent of queries, and also add a sample test hook that can be
used to target statements of interest more narrowly (using a C hook
function).

Sound like a reasonable approach?
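
For concreteness, the sort of configuration this would enable might look
something like this (the sample-rate GUC name and value here are
hypothetical at this stage):

```
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'
auto_explain.log_analyze = on
auto_explain.sample_ratio = 0.1    # explain roughly 10% of queries
```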

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Craig Ringer (#1)
Re: auto_explain sample rate

Craig Ringer <craig@2ndquadrant.com> writes:

> It's sometimes desirable to collect auto_explain data with ANALYZE in order
> to track down hard-to-reproduce issues, but the performance impacts can be
> pretty hefty on the DB.
>
> I'm inclined to add a sample rate to auto_explain so that it can trigger
> only on x percent of queries,

That sounds reasonable ...

> and also add a sample test hook that can be
> used to target statements of interest more narrowly (using a C hook
> function).

You'd have to be pretty desperate, *and* knowledgeable, to write a
C function for that. Can't we invent something a bit more user-friendly
for the purpose? No idea what it should look like though.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3Craig Ringer
craig@2ndquadrant.com
In reply to: Tom Lane (#2)
Re: auto_explain sample rate

On 29 May 2015 at 11:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:

>> It's sometimes desirable to collect auto_explain data with ANALYZE in
>> order to track down hard-to-reproduce issues, but the performance impacts
>> can be pretty hefty on the DB.
>>
>> I'm inclined to add a sample rate to auto_explain so that it can trigger
>> only on x percent of queries,
>
> That sounds reasonable ...

Cool, I'll cook that up then. Thanks for the sanity check.

>> and also add a sample test hook that can be
>> used to target statements of interest more narrowly (using a C hook
>> function).
>
> You'd have to be pretty desperate, *and* knowledgeable, to write a
> C function for that. Can't we invent something a bit more user-friendly
> for the purpose? No idea what it should look like though.

I've been that desperate.

For the majority of users I'm sure it's sufficient to just have a sample
rate.

Anything that's trying to match individual queries could be interested in
all sorts of different things. Queries that touch a particular table being
one of the more obvious things, or queries that mention a particular
literal. Rather than try to design something complicated in advance that
anticipates all needs, I'm thinking it makes sense to just throw a hook in
there. If some patterns start to emerge in terms of useful real world
filtering criteria then that'd better inform any more user accessible
design down the track.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#4Pavel Stehule
pavel.stehule@gmail.com
In reply to: Craig Ringer (#3)
Re: auto_explain sample rate

2015-06-02 9:07 GMT+02:00 Craig Ringer <craig@2ndquadrant.com>:

> Anything that's trying to match individual queries could be interested in
> all sorts of different things. Queries that touch a particular table being
> one of the more obvious things, or queries that mention a particular
> literal. Rather than try to design something complicated in advance that
> anticipates all needs, I'm thinking it makes sense to just throw a hook in
> there. If some patterns start to emerge in terms of useful real world
> filtering criteria then that'd better inform any more user accessible
> design down the track.

The same method could be interesting for interactive EXPLAIN ANALYZE too.
TIMING has about 20%-30% overhead, and usually we don't need perfectly
exact numbers.

Regards

Pavel


#5Craig Ringer
craig@2ndquadrant.com
In reply to: Pavel Stehule (#4)
Re: auto_explain sample rate

On 2 June 2015 at 15:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:

> The same method could be interesting for interactive EXPLAIN ANALYZE too.
> TIMING has about 20%-30% overhead, and usually we don't need perfectly
> exact numbers.

I don't understand what you are suggesting here.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#6Pavel Stehule
pavel.stehule@gmail.com
In reply to: Craig Ringer (#5)
Re: auto_explain sample rate

2015-06-03 9:17 GMT+02:00 Craig Ringer <craig@2ndquadrant.com>:

>> The same method could be interesting for interactive EXPLAIN ANALYZE too.
>> TIMING has about 20%-30% overhead, and usually we don't need perfectly
>> exact numbers.
>
> I don't understand what you are suggesting here.

Using some sampling for the EXPLAIN ANALYZE statement.

Pavel


#7Craig Ringer
craig@2ndquadrant.com
In reply to: Pavel Stehule (#6)
Re: auto_explain sample rate

On 3 June 2015 at 15:22, Pavel Stehule <pavel.stehule@gmail.com> wrote:

>>> The same method could be interesting for interactive EXPLAIN ANALYZE too.
>>> TIMING has about 20%-30% overhead, and usually we don't need perfectly
>>> exact numbers.
>>
>> I don't understand what you are suggesting here.
>
> Using some sampling for the EXPLAIN ANALYZE statement.

Do you mean that you'd like to be able to set a fraction of queries on
which auto_explain does ANALYZE, so most of the time it just outputs an
ordinary EXPLAIN?

Or maybe we're talking about different things re the original proposal? I
don't see how this would work. If you run EXPLAIN ANALYZE interactively
like you said above, you'd surely want it to report costs and timings, or
whatever it is you ask for, all the time, not just some of the time based
on some background setting.

Are you advocating a profiling-based approach for EXPLAIN ANALYZE timing
where we sample which executor node we're under at regular intervals,
instead of timing everything? Or suggesting a way to filter out sub-trees
so you only get timing data on some sub-portion of a query?

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#8Pavel Stehule
pavel.stehule@gmail.com
In reply to: Craig Ringer (#7)
Re: auto_explain sample rate

2015-06-03 9:46 GMT+02:00 Craig Ringer <craig@2ndquadrant.com>:

> Are you advocating a profiling-based approach for EXPLAIN ANALYZE timing
> where we sample which executor node we're under at regular intervals,
> instead of timing everything? Or suggesting a way to filter out sub-trees
> so you only get timing data on some sub-portion of a query?

Lots of variants. I would like to see costs and times for EXPLAIN ANALYZE
every time, but the precision of the timing could be reduced to 1ms. It is
a question whether we can significantly reduce the cost (or number of
calls) of getting the system time.

Pavel


#9Craig Ringer
craig@2ndquadrant.com
In reply to: Pavel Stehule (#8)
Re: auto_explain sample rate

> Lots of variants. I would like to see costs and times for EXPLAIN ANALYZE
> every time, but the precision of the timing could be reduced to 1ms. It is
> a question whether we can significantly reduce the cost (or number of
> calls) of getting the system time.

OK, so you're suggesting a sampling-based EXPLAIN.

That'd be interesting, but is totally unrelated to this work on
auto_explain.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#10Craig Ringer
craig@2ndquadrant.com
In reply to: Craig Ringer (#3)
Re: auto_explain sample rate

On 2 June 2015 at 15:07, Craig Ringer <craig@2ndquadrant.com> wrote:

> Cool, I'll cook that up then. Thanks for the sanity check.

OK, here we go.

To make sure it doesn't trigger on all backends at once, and to ensure it
doesn't rely on a shared point of contention in shmem, this sets up a
counter with a random value on each backend start.

Because it needs to either always run both the Start and End hooks, or run
neither, this doesn't count nested statements for sampling purposes. So if
you run my_huge_plpgsql_function() then either all its statements will be
explained or none of them will. This only applies if nested statement
explain is enabled. It's possible to get around this by adding a separate
nested statement counter that's reset at each top level End hook, but it
doesn't seem worthwhile.

The sample rate has no effect on ANALYZE, which remains enabled or disabled
for all queries. I don't see any point adding a separate sample rate
control to ANALYZE only some sub-proportion of EXPLAINed statements.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Allow-sampling-of-only-some-queries-by-auto_explain.patch (text/x-patch, +73/-2)
#11Andres Freund
andres@anarazel.de
In reply to: Craig Ringer (#10)
Re: auto_explain sample rate

On 2015-06-03 18:54:24 +0800, Craig Ringer wrote:

> OK, here we go.

Hm. Wouldn't random sampling be better than what you do? If your queries
have a pattern to them (e.g. you always issue the same 10 queries in
succession), this will possibly only show a subset of the queries.

I think a formulation in fraction (i.e. a float between 0 and 1) will
also be easier to understand.

Andres


#12Craig Ringer
craig@2ndquadrant.com
In reply to: Andres Freund (#11)
Re: auto_explain sample rate

On 3 June 2015 at 20:04, Andres Freund <andres@anarazel.de> wrote:

> On 2015-06-03 18:54:24 +0800, Craig Ringer wrote:
>
>> OK, here we go.
>
> Hm. Wouldn't random sampling be better than what you do? If your queries
> have a pattern to them (e.g. you always issue the same 10 queries in
> succession), this will possibly only show a subset of the queries.
>
> I think a formulation in fraction (i.e. a float between 0 and 1) will
> also be easier to understand.

Could be, yeah. I was thinking about the cost of generating a random number
each time, but it's going to vanish in the noise compared to the rest of
the costs in query execution.

---
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#13Julien Rouhaud
rjuju123@gmail.com
In reply to: Craig Ringer (#12)
Re: auto_explain sample rate

On 03/06/2015 15:00, Craig Ringer wrote:

> On 3 June 2015 at 20:04, Andres Freund <andres@anarazel.de> wrote:
>
>> Hm. Wouldn't random sampling be better than what you do? If your queries
>> have a pattern to them (e.g. you always issue the same 10 queries in
>> succession), this will possibly only show a subset of the queries.
>>
>> I think a formulation in fraction (i.e. a float between 0 and 1) will
>> also be easier to understand.
>
> Could be, yeah. I was thinking about the cost of generating a random number
> each time, but it's going to vanish in the noise compared to the rest of
> the costs in query execution.

Hello, I've just reviewed the patch.

I'm not sure if there's a consensus on the sample rate format. FWIW, I
also think a fraction would be easier to understand. Any news about
generating a random number at each call to avoid the query pattern problem?

The patch applies without error. I wonder if there's any reason for
using pg_lrand48() instead of random(), as there's a port for random()
if the system lacks it.

After some quick checks, I found that auto_explain_sample_counter is
always initialized with the same value. After some digging, it seems
that pg_lrand48() always returns the same values in the same order, at
least on my computer. Have I missed something?

Otherwise, after replacing the pg_lrand48() call with a random(), it
works just fine.


--
Julien Rouhaud
http://dalibo.com - http://dalibo.org


#14Julien Rouhaud
rjuju123@gmail.com
In reply to: Julien Rouhaud (#13)
Re: auto_explain sample rate

On 05/07/2015 18:22, Julien Rouhaud wrote:

> After some quick checks, I found that auto_explain_sample_counter is
> always initialized with the same value. After some digging, it seems
> that pg_lrand48() always returns the same values in the same order, at
> least on my computer. Have I missed something?

Well, I obviously missed that pg_srand48() is only used if the system
lacks random/srandom, sorry for the noise. So yes, random() must be
used instead of pg_lrand48().

I'm attaching a new version of the patch fixing this issue just in case.


--
Julien Rouhaud
http://dalibo.com - http://dalibo.org

Attachments:

auto_explain_sample_rate-v2.patch (text/x-patch, +74/-4)
#15Craig Ringer
craig@2ndquadrant.com
In reply to: Julien Rouhaud (#14)
Re: auto_explain sample rate

On 7 July 2015 at 21:37, Julien Rouhaud <julien.rouhaud@dalibo.com> wrote:

> Well, I obviously missed that pg_srand48() is only used if the system
> lacks random/srandom, sorry for the noise. So yes, random() must be
> used instead of pg_lrand48().
>
> I'm attaching a new version of the patch fixing this issue just in case.

Thanks for picking this up. I've been trying to find time to come back
to it but been swamped in priority work.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#16Michael Paquier
michael@paquier.xyz
In reply to: Craig Ringer (#15)
Re: auto_explain sample rate

On Fri, Jul 17, 2015 at 2:53 PM, Craig Ringer <craig@2ndquadrant.com> wrote:

> Thanks for picking this up. I've been trying to find time to come back
> to it but been swamped in priority work.

For now I am marking that as returned with feedback.
--
Michael


#17Julien Rouhaud
rjuju123@gmail.com
In reply to: Michael Paquier (#16)
Re: auto_explain sample rate

On 25/08/2015 14:45, Michael Paquier wrote:

> For now I am marking that as returned with feedback.

PFA v3 of the patch, rebased on current head. It fixes the last issue
(sample a percentage of queries).

I'm adding it to the next commitfest.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org

Attachments:

auto_explain_sample_rate-v3.patch (text/x-patch, +46/-1)
#18Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Julien Rouhaud (#17)
Re: auto_explain sample rate

Julien Rouhaud wrote:

Hijacking this macro is just too obscure:

#define auto_explain_enabled() \
(auto_explain_log_min_duration >= 0 && \
-	 (nesting_level == 0 || auto_explain_log_nested_statements))
+	 (nesting_level == 0 || auto_explain_log_nested_statements) && \
+	 current_query_sampled)

because it then becomes hard to figure out that assigning to _sampled is
what makes the enabled() check pass or not depending on sampling:

@@ -191,6 +211,14 @@ _PG_fini(void)
static void
explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
+	/*
+	 * For ratio sampling, randomly choose top-level statement. Either
+	 * all nested statements will be explained or none will.
+	 */
+	if (auto_explain_log_min_duration >= 0 && nesting_level == 0)
+		current_query_sampled = (random() < auto_explain_sample_ratio *
+				MAX_RANDOM_VALUE);
+
if (auto_explain_enabled())
{

I think it's better to keep the "enabled" macro unmodified, and just add
another conditional to the "if" test there.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#19Julien Rouhaud
rjuju123@gmail.com
In reply to: Alvaro Herrera (#18)
Re: auto_explain sample rate

On 16/02/2016 22:51, Alvaro Herrera wrote:

> Hijacking this macro is just too obscure:
> [...]
> I think it's better to keep the "enabled" macro unmodified, and just add
> another conditional to the "if" test there.

Thanks for looking at this!

Agreed, it's too obscure. Attached v4 fixes as you said.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org

Attachments:

auto_explain_sample_rate-v4.patch (text/x-patch, +47/-3)
#20Petr Jelinek
petr@2ndquadrant.com
In reply to: Julien Rouhaud (#19)
Re: auto_explain sample rate

On 17/02/16 01:17, Julien Rouhaud wrote:

> Agreed, it's too obscure. Attached v4 fixes as you said.

Seems to be a simple enough patch, and it works. However, I would like the
documentation to say that the range is 0 to 1 and represents the fraction
of queries sampled, because right now both the GUC description and the
documentation say it's a percent, but that's not really true, as a percent
runs from 0 to 100.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#21Julien Rouhaud
rjuju123@gmail.com
In reply to: Petr Jelinek (#20)
#22Petr Jelinek
petr@2ndquadrant.com
In reply to: Julien Rouhaud (#21)
#23Magnus Hagander
magnus@hagander.net
In reply to: Petr Jelinek (#22)
#24Julien Rouhaud
rjuju123@gmail.com
In reply to: Magnus Hagander (#23)
#25Magnus Hagander
magnus@hagander.net
In reply to: Julien Rouhaud (#24)
#26Petr Jelinek
petr@2ndquadrant.com
In reply to: Magnus Hagander (#23)
#27Magnus Hagander
magnus@hagander.net
In reply to: Magnus Hagander (#25)
#28Julien Rouhaud
rjuju123@gmail.com
In reply to: Magnus Hagander (#27)
#29Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Magnus Hagander (#27)
#30Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#29)
#31Julien Rouhaud
rjuju123@gmail.com
In reply to: Robert Haas (#30)
#32Magnus Hagander
magnus@hagander.net
In reply to: Julien Rouhaud (#31)