log_duration_sample config option patch

Started by Timo Savola · almost 17 years ago · 4 messages
#1Timo Savola
timo.savola@dynamoid.com
1 attachment(s)

Hello. The attached patch has made it more feasible for us to gather
profiling data on a production system for analysis with pgFouine. It
has been written against PostgreSQL 8.3.5 and tested on Linux. Comments
welcome.
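
To give a rough idea of how it is meant to be used (the exact value format is defined by the attached patch, so take this as an assumption), the option would sit in postgresql.conf next to the existing logging settings, e.g. assuming it takes a sampling fraction between 0 and 1:

    # hypothetical example only -- the real units may differ
    log_duration_sample = 0.01    # sample roughly 1% of statements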

timo

Attachments:

log_duration_sample.patch (text/x-patch)
#2Timo Savola
thain@irc-galleria.net
In reply to: Timo Savola (#1)

Sorry for the attachment; here's the patch inline.

timo

#3Euler Taveira de Oliveira
In reply to: Timo Savola (#1)
Re: log_duration_sample config option patch

Timo Savola wrote:

Hello. The attached patch has made it more feasible for us to gather
profiling data on a production system for analysis with pgFouine. It
has been written against PostgreSQL 8.3.5 and tested on Linux. Comments
welcome.

IIRC pgFouine shows the exact percentage of queries by type; this new GUC means
we can no longer rely on that. Also, it could cause us to skip logging
long-running (and bad) queries. Other statistics in pgFouine will suffer from
the same problem. I see your point about reducing the amount of log data, but I
think it will limit the tool's usability (you can always start/stop statement
collection at runtime).

--
Euler Taveira de Oliveira
http://www.timbira.com/

#4Timo Savola
timo.savola@dynamoid.com
In reply to: Euler Taveira de Oliveira (#3)
Re: log_duration_sample config option patch

Euler Taveira de Oliveira wrote:

IIRC pgFouine shows the exact percentage of queries by type; this new GUC means
we can no longer rely on that. Also, it could cause us to skip logging
long-running (and bad) queries. Other statistics in pgFouine will suffer from
the same problem. I see your point about reducing the amount of log data, but I
think it will limit the tool's usability (you can always start/stop statement
collection at runtime).

Yes, the accuracy of the numbers suffers. But this option basically gives
you the choice between logging all queries for a short time (leaving out
the queries before and after that period) and logging some queries for a
long time (leaving out randomly selected queries in between). If you're
interested in the bigger picture, the numbers will be more meaningful.
(Also, if the query volume is high enough for someone to want this option,
there will be plenty of samples anyway.)

For example, it might be more useful to log 1% of queries during the peak
hours of each day of the month than all queries during the peak hours of a
single day. Or you could always log 0.01% of all queries... Queries that
are executed many times in particular can be found just as well with this
approach, and chronically slow queries will eventually show up as well.
But if someone specifically wants to track any and all long queries,
there's still the option of leaving log_duration_sample disabled and
setting log_min_duration_statement to something appropriate instead.
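
Just to make the mechanism concrete, here is a rough, self-contained sketch
of the sampling idea. This is not the code from the patch; the variable
name, the 0.0-1.0 value range, and where the check would hook into the
backend are all assumptions here.

/*
 * Sketch of the sampling decision only -- not the actual patch code.
 * Assumes the GUC holds a fraction between 0.0 and 1.0.
 */
#include <stdio.h>
#include <stdlib.h>

static double log_duration_sample = 0.01;   /* hypothetical: sample ~1% */

/* Decide whether this statement's duration gets logged. */
static int
duration_should_be_logged(void)
{
    if (log_duration_sample <= 0.0)
        return 0;               /* sampling disabled */
    if (log_duration_sample >= 1.0)
        return 1;               /* log every statement */

    /* log with probability log_duration_sample */
    return (double) rand() / (double) RAND_MAX < log_duration_sample;
}

int
main(void)
{
    long logged = 0;
    long total = 1000000;
    long i;

    srand(42);
    for (i = 0; i < total; i++)
        if (duration_should_be_logged())
            logged++;

    /* With a 1% sample rate this prints roughly 10000. */
    printf("logged %ld of %ld statements\n", logged, total);
    return 0;
}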

Another point is that during the heaviest load (that is, the most
interesting time) full logging would increase the I/O load, bias the
results, and degrade service quality. By spreading the logging out, the
bias is smaller and it becomes feasible to have logging enabled during
peak hours at all.

Thanks for the feedback.

timo