Add min and max execute statement time in pg_stat_statement

Started by KONDO Mitsumasa · over 12 years ago · 170 messages · pgsql-hackers
#1KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp

I am submitting a patch for the next CF that adds the min and max execution
time of statements to pg_stat_statements.

pg_stat_statements already records execution time, but only the average can
be derived from it, which does not provide much detail. So I have added min
and max execution time columns to pg_stat_statements. Usage is almost the
same as before. However, I have also added a
pg_stat_statements_reset_time() function so that min_time and max_time can
be observed over a specific period; it resets (initializes) the stored min
and max execution times.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center

Attachments:

pg_stat_statements-min_max_exectime_v0.patch (text/x-diff, +128/-54)
#2Andrew Dunstan
andrew@dunslane.net
In reply to: KONDO Mitsumasa (#1)
Re: Add min and max execute statement time in pg_stat_statement

On 10/18/2013 04:02 AM, KONDO Mitsumasa wrote:

I am submitting a patch for the next CF that adds the min and max execution
time of statements to pg_stat_statements.

pg_stat_statements already records execution time, but only the average can
be derived from it, which does not provide much detail. So I have added min
and max execution time columns to pg_stat_statements. Usage is almost the
same as before. However, I have also added a
pg_stat_statements_reset_time() function so that min_time and max_time can
be observed over a specific period; it resets (initializes) the stored min
and max execution times.

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Andrew Dunstan (#2)
Re: Add min and max execute statement time in pg_stat_statement

(2013/10/18 22:21), Andrew Dunstan wrote:

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

OK, I will! I am working on some other patches as well, so please wait a bit longer!

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


#4Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: KONDO Mitsumasa (#3)
Re: Add min and max execute statement time in pg_stat_statement

On 22/10/13 00:17, KONDO Mitsumasa wrote:

(2013/10/18 22:21), Andrew Dunstan wrote:

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

OK, I will! I am working on some other patches as well, so please wait a bit longer!

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center

How about the 'median', often a lot more useful than the 'arithmetic
mean' (which most people call the 'average').

Cheers,
Gavin


#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Gavin Flower (#4)
Re: Add min and max execute statement time in pg_stat_statement

Gavin Flower <GavinFlower@archidevsys.co.nz> writes:

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

How about the 'median', often a lot more useful than the 'arithmetic
mean' (which most people call the 'average').

AFAIK, median is impossible to calculate cheaply (in particular, with
a fixed amount of workspace). So this apparently innocent request
is actually moving the goalposts a long way, because the space per
query table entry is a big concern for pg_stat_statements.

regards, tom lane


#6Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#5)
Re: Add min and max execute statement time in pg_stat_statement

On Mon, Oct 21, 2013 at 4:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Gavin Flower <GavinFlower@archidevsys.co.nz> writes:

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

How about the 'median', often a lot more useful than the 'arithmetic
mean' (which most people call the 'average').

AFAIK, median is impossible to calculate cheaply (in particular, with
a fixed amount of workspace). So this apparently innocent request
is actually moving the goalposts a long way, because the space per
query table entry is a big concern for pg_stat_statements.

Yeah, and I worry about min and max not being very usable - once they
get pushed out to extreme values, there's nothing to drag them back
toward normality except resetting the stats, and that's not something
we want to encourage people to do frequently. Of course, averages over
very long sampling intervals may not be too useful anyway, dunno.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#7Peter Geoghegan
In reply to: Robert Haas (#6)
Re: Add min and max execute statement time in pg_stat_statement

On Mon, Oct 21, 2013 at 1:36 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Yeah, and I worry about min and max not being very usable - once they
get pushed out to extreme values, there's nothing to drag them back
toward normality except resetting the stats, and that's not something
we want to encourage people to do frequently.

My thoughts exactly. Perhaps it'd be useful to separately invalidate
min/max times, without a full reset. But then you've introduced the
possibility of the average time (total_time/calls) exceeding the max
or being less than the min.

--
Peter Geoghegan


#8Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Tom Lane (#5)
Re: Add min and max execute statement time in pg_stat_statement

On 22/10/13 09:01, Tom Lane wrote:

Gavin Flower <GavinFlower@archidevsys.co.nz> writes:

If we're going to extend pg_stat_statements, even more than min and max
I'd like to see the standard deviation in execution time.

How about the 'median', often a lot more useful than the 'arithmetic
mean' (which most people call the 'average').

AFAIK, median is impossible to calculate cheaply (in particular, with
a fixed amount of workspace). So this apparently innocent request
is actually moving the goalposts a long way, because the space per
query table entry is a big concern for pg_stat_statements.

regards, tom lane

Yeah, obvious - in retrospect! :-)

One way it could be done, but even this would consume far too much
storage and processing power (hence totally impractical), would be to
'simply' store a counter for each value found and increment it for each
occurrence...

Cheers,
Gavin


#9Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Gavin Flower (#8)
Re: Add min and max execute statement time in pg_stat_statement

Gavin Flower wrote:

One way it could be done, but even this would consume far too much
storage and processing power (hence totally impractical), would be
to 'simply' store a counter for each value found and increment it
for each occurrence...

A histogram? Sounds like a whole lot of code complexity to me. Not
sure the gain is enough.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#10Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Geoghegan (#7)
Re: Add min and max execute statement time in pg_stat_statement

On 10/21/2013 04:43 PM, Peter Geoghegan wrote:

On Mon, Oct 21, 2013 at 1:36 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Yeah, and I worry about min and max not being very usable - once they
get pushed out to extreme values, there's nothing to drag them back
toward normality except resetting the stats, and that's not something
we want to encourage people to do frequently.

My thoughts exactly. Perhaps it'd be useful to separately invalidate
min/max times, without a full reset. But then you've introduced the
possibility of the average time (total_time/calls) exceeding the max
or being less than the min.

This is why I suggested the standard deviation, and why I find it would
be more useful than just min and max. A couple of outliers will set the
min and max to
possibly extreme values but hardly perturb the standard deviation over a
large number of observations.

cheers

andrew


#11Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#6)
Re: Add min and max execute statement time in pg_stat_statement

Robert Haas <robertmhaas@gmail.com> writes:

Yeah, and I worry about min and max not being very usable - once they
get pushed out to extreme values, there's nothing to drag them back
toward normality except resetting the stats, and that's not something
we want to encourage people to do frequently. Of course, averages over
very long sampling intervals may not be too useful anyway, dunno.

Good point, but that doesn't mean that the request is unreasonable.

For min/max, we could possibly address this concern by introducing an
exponential decay over time --- that is, every so often, you take some
small fraction of (max - min) and add that to the running min while
subtracting it from the max. Or some other variant on that theme. There
might be a way to progressively discount old observations for average too,
though I'm not sure exactly how at the moment.

regards, tom lane

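[Editor's note: Tom's decay scheme can be sketched in a few lines. This is an illustrative sketch only; the decay fraction and function names are invented, not taken from any patch.]

```python
# Every so often, pull the running min up and the running max down by a
# small fraction of the current spread, so one extreme outlier fades
# over time instead of sticking forever.
DECAY_FRACTION = 0.05  # hypothetical tuning knob

def decay_min_max(min_time, max_time, fraction=DECAY_FRACTION):
    """Move min and max toward each other by `fraction` of the spread."""
    step = (max_time - min_time) * fraction
    return min_time + step, max_time - step

# A single 1000 ms outlier decays back toward the typical range:
mn, mx = 2.0, 1000.0
for _ in range(10):
    mn, mx = decay_min_max(mn, mx)
# the spread shrinks by (1 - 2 * fraction) per step; the midpoint is preserved
```

One consequence of this particular variant is that the reported min can drift above times actually observed (and the max below them), which is the price of discounting old observations.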

#12Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#10)
Re: Add min and max execute statement time in pg_stat_statement

Andrew Dunstan <andrew@dunslane.net> writes:

This is why I suggested the standard deviation, and why I find it would
be more useful than just min and max. A couple of outliers will set the
min and max to possibly extreme values but hardly perturb the standard
deviation over a large number of observations.

Hm. It's been a long time since college statistics, but doesn't the
entire concept of standard deviation depend on the assumption that the
underlying distribution is more-or-less normal (Gaussian)? Is there a
good reason to suppose that query runtime is Gaussian? (I'd bet not;
in particular, multimodal behavior seems very likely due to things like
plan changes.) If not, how much does that affect the usefulness of
a standard-deviation calculation?

regards, tom lane


#13Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#12)
Re: Add min and max execute statement time in pg_stat_statement

On 10/21/2013 07:29 PM, Tom Lane wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

This is why I suggested the standard deviation, and why I find it would
be more useful than just min and max. A couple of outliers will set the
min and max to possibly extreme values but hardly perturb the standard
deviation over a large number of observations.

Hm. It's been a long time since college statistics, but doesn't the
entire concept of standard deviation depend on the assumption that the
underlying distribution is more-or-less normal (Gaussian)? Is there a
good reason to suppose that query runtime is Gaussian? (I'd bet not;
in particular, multimodal behavior seems very likely due to things like
plan changes.) If not, how much does that affect the usefulness of
a standard-deviation calculation?

IANA statistician, but the article at
<https://en.wikipedia.org/wiki/Standard_deviation> appears to have a
diagram with one sample that's multi-modal.

cheers

andrew


#14Peter Geoghegan
In reply to: Tom Lane (#12)
Re: Add min and max execute statement time in pg_stat_statement

On Mon, Oct 21, 2013 at 4:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Hm. It's been a long time since college statistics, but doesn't the
entire concept of standard deviation depend on the assumption that the
underlying distribution is more-or-less normal (Gaussian)?

I don't see how. The standard deviation here would be expressed in
units of milliseconds. Now, that could be misleading, in that like a
mean average, it might "mischaracterize" the distribution. But it's
still got to be a big improvement.

I like the idea of a decay, but can't think of a principled scheme offhand.

--
Peter Geoghegan


#15Ants Aasma
ants.aasma@cybertec.at
In reply to: Alvaro Herrera (#9)
Re: Add min and max execute statement time in pg_stat_statement

On Tue, Oct 22, 2013 at 1:09 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Gavin Flower wrote:

One way it could be done, but even this would consume far too much
storage and processing power (hence totally impractical), would be
to 'simply' store a counter for each value found and increment it
for each occurrence...

A histogram? Sounds like a whole lot of code complexity to me. Not
sure the gain is enough.

I have a proof of concept patch somewhere that does exactly this. I
used logarithmic bin widths. With 8 log10 bins you can tell the
fraction of queries running at each order of magnitude from less than
1ms to more than 1000s. Or with 31 bins you can cover factor of 2
increments from 100us to over 27h. And the code is almost trivial:
just take the log of the duration, calculate the bin number from it,
and increment the value in the corresponding bin.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de

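[Editor's note: the binning Ants describes really is nearly trivial. A sketch, assuming factor-of-2 bins starting at 100us; the constants and names are illustrative, not taken from his proof-of-concept patch.]

```python
import math

# Factor-of-2 bins starting at 100 us.  Anything faster lands in bin 0;
# anything beyond the top bound lands in the last bin (100us * 2^30,
# roughly 29 hours).
MIN_BOUND_US = 100.0
NUM_BINS = 31

def bin_for(duration_us):
    """Map a duration in microseconds to a histogram bin number."""
    if duration_us <= MIN_BOUND_US:
        return 0
    b = int(math.log2(duration_us / MIN_BOUND_US)) + 1
    return min(b, NUM_BINS - 1)

hist = [0] * NUM_BINS
for d in (50, 150, 450, 1_000_000):    # durations in microseconds
    hist[bin_for(d)] += 1              # one increment per query
```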

#16Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Ants Aasma (#15)
Re: Add min and max execute statement time in pg_stat_statement

On 22/10/13 13:26, Ants Aasma wrote:

On Tue, Oct 22, 2013 at 1:09 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Gavin Flower wrote:

One way it could be done, but even this would consume far too much
storage and processing power (hence totally impractical), would be
to 'simply' store a counter for each value found and increment it
for each occurrence...

A histogram? Sounds like a whole lot of code complexity to me. Not
sure the gain is enough.

I have a proof of concept patch somewhere that does exactly this. I
used logarithmic bin widths. With 8 log10 bins you can tell the
fraction of queries running at each order of magnitude from less than
1ms to more than 1000s. Or with 31 bins you can cover factor of 2
increments from 100us to over 27h. And the code is almost trivial:
just take the log of the duration, calculate the bin number from it,
and increment the value in the corresponding bin.

Regards,
Ants Aasma

That might be useful in determining if things are sufficiently bad to be
worth investigating in more detail. No point in tuning stuff that is
behaving acceptably.

Also good enough to say 95% execute within 5 seconds (or whatever).

Cheers,
Gavin


#17Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Ants Aasma (#15)
Re: Add min and max execute statement time in pg_stat_statement

On 22/10/13 13:26, Ants Aasma wrote:

On Tue, Oct 22, 2013 at 1:09 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Gavin Flower wrote:

One way it could be done, but even this would consume far too much
storage and processing power (hence totally impractical), would be
to 'simply' store a counter for each value found and increment it
for each occurrence...

A histogram? Sounds like a whole lot of code complexity to me. Not
sure the gain is enough.

I have a proof of concept patch somewhere that does exactly this. I
used logarithmic bin widths. With 8 log10 bins you can tell the
fraction of queries running at each order of magnitude from less than
1ms to more than 1000s. Or with 31 bins you can cover factor of 2
increments from 100us to over 27h. And the code is almost trivial:
just take the log of the duration, calculate the bin number from it,
and increment the value in the corresponding bin.

Regards,
Ants Aasma

I suppose this has to be decided at compile time to keep the code both
simple and efficient - if so, I like the binary approach.

Curious, why start at 100us? I suppose this might be of interest if
everything of note is in RAM and/or stuff is on SSD's.

Cheers,
Gavin


#18Ants Aasma
ants.aasma@cybertec.at
In reply to: Gavin Flower (#17)
Re: Add min and max execute statement time in pg_stat_statement

On Tue, Oct 22, 2013 at 4:00 AM, Gavin Flower
<GavinFlower@archidevsys.co.nz> wrote:

I have a proof of concept patch somewhere that does exactly this. I
used logarithmic bin widths. With 8 log10 bins you can tell the
fraction of queries running at each order of magnitude from less than
1ms to more than 1000s. Or with 31 bins you can cover factor of 2
increments from 100us to over 27h. And the code is almost trivial:
just take the log of the duration, calculate the bin number from it,
and increment the value in the corresponding bin.

I suppose this has to be decided at compile time to keep the code both
simple and efficient - if so, I like the binary approach.

For efficiency's sake it can easily be done at run time, one extra
logarithm calculation per query will not be noticeable. Having a
proper user interface to make it configurable and changeable is where
the complexity is. We might just decide to go with something good
enough as even the 31 bin solution would bloat the pg_stat_statements
data structure only by about 10%.

Curious, why start at 100us? I suppose this might be of interest if
everything of note is in RAM and/or stuff is on SSD's.

Selecting a single row takes about 20us on my computer, I picked 100us
as a reasonable limit below where the exact speed doesn't matter
anymore.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de


#19Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Tom Lane (#12)
Re: Add min and max execute statement time in pg_stat_statement

Tom Lane <tgl@sss.pgh.pa.us> writes:

Hm. It's been a long time since college statistics, but doesn't the
entire concept of standard deviation depend on the assumption that the
underlying distribution is more-or-less normal (Gaussian)? Is there a

I just had a quick chat with a statistician friend of mine on that
topic, and it seems that the only way to make sense of an average is if
you already know the distribution.

In our case, what I keep experiencing with tuning queries is that we
have like 99% of them running under acceptable threshold and 1% of them
taking more and more time.

In a normal (Gaussian) distribution, query times that far from the
average would be vanishingly rare, so my experience tells me that the
query time distribution is anything BUT normal (Gaussian).

good reason to suppose that query runtime is Gaussian? (I'd bet not;
in particular, multimodal behavior seems very likely due to things like
plan changes.) If not, how much does that affect the usefulness of
a standard-deviation calculation?

I don't know what multi-modal is.

What I've been gathering from my quick chat this morning is that either
you know how to characterize the distribution, and then the min, max and
average are useful on their own, or you need to keep track of a
histogram where all the bins are of the same size to be able to learn
what the distribution actually is.

We didn't get to the point where I could understand whether storing a
histogram with a constant bin size on log10 of the data rather than the
data itself is
going to allow us to properly characterize the distribution.

The main question I want to answer here would be the percentiles one, I
want to get the query max execution timing for 95% of the executions,
then 99%, then 99.9% etc. There's no way to answer that without knowing
the distribution shape, so we need enough stats to learn what the
distribution shape is (hence, histograms).

Of course keeping enough stats seems to always begin with keeping the
min, max and average, so we can just begin there. We would just be
unable to answer interesting questions with just that.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


#20Daniel Farina
daniel@heroku.com
In reply to: Dimitri Fontaine (#19)
Re: Add min and max execute statement time in pg_stat_statement

On Tue, Oct 22, 2013 at 2:56 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:

Tom Lane <tgl@sss.pgh.pa.us> writes:

Hm. It's been a long time since college statistics, but doesn't the
entire concept of standard deviation depend on the assumption that the
underlying distribution is more-or-less normal (Gaussian)? Is there a

I just had a quick chat with a statistician friend of mine on that
topic, and it seems that the only way to make sense of an average is if
you already know the distribution.

In our case, what I keep experiencing with tuning queries is that we
have like 99% of them running under acceptable threshold and 1% of them
taking more and more time.

Agreed.

In a lot of Heroku's performance work, the Perc99 and Perc95 have
provided a lot more value than stddev, although stddev is a lot better
than nothing and probably easier to implement.

There are apparently high-quality statistical approximations of these
that are not expensive to compute and have a small in-memory
representation.

That said, I'd take stddev over nothing for sure.

Handily for stddev, I think that by taking snapshots of count(x), sum(x),
and sum(x**2) (which I understand to be the components of stddev), one
can compute stddevs across different time spans using auxiliary tools
that sample this triplet on occasion. That's kind of a handy property
that I'm not sure the percN approximations can get too easily.

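[Editor's note: the snapshot trick Daniel describes works because count, sum, and sum of squares are each additive, so the difference of two snapshots yields the moments for exactly the interval between them. A sketch with hypothetical names, not the pg_stat_statements code; it uses the population-stddev formula.]

```python
import math

def stddev_between(snap_a, snap_b):
    """Stddev of executions between two snapshots.

    Each snapshot is a (count, sum_x, sum_x_squared) triple.
    """
    n = snap_b[0] - snap_a[0]
    if n == 0:
        return None                     # no calls in the interval
    s = snap_b[1] - snap_a[1]
    sq = snap_b[2] - snap_a[2]
    var = max(sq / n - (s / n) ** 2, 0.0)  # clamp rounding noise
    return math.sqrt(var)

# Two snapshots; the 4 calls in between took 4, 5, 5 and 6 ms:
before = (100, 500.0, 2600.0)
after = (104, 520.0, 2702.0)
```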

#21Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Dimitri Fontaine (#19)
#22Stephen Frost
sfrost@snowman.net
In reply to: Dimitri Fontaine (#19)
#23Jeff Janes
jeff.janes@gmail.com
In reply to: Tom Lane (#12)
#24Jeff Janes
jeff.janes@gmail.com
In reply to: Robert Haas (#6)
#25Josh Berkus
josh@agliodbs.com
In reply to: KONDO Mitsumasa (#1)
#26KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Stephen Frost (#22)
#27Stephen Frost
sfrost@snowman.net
In reply to: Josh Berkus (#25)
#28Jeff Janes
jeff.janes@gmail.com
In reply to: Stephen Frost (#27)
#29Stephen Frost
sfrost@snowman.net
In reply to: Jeff Janes (#28)
In reply to: Jeff Janes (#28)
#31Marc Mamin
M.Mamin@intershop.de
In reply to: KONDO Mitsumasa (#26)
#32Martijn van Oosterhout
kleptog@svana.org
In reply to: Jeff Janes (#23)
In reply to: Martijn van Oosterhout (#32)
#34Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Marc Mamin (#31)
In reply to: Gavin Flower (#34)
#36Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Peter Geoghegan (#35)
#37Jeff Janes
jeff.janes@gmail.com
In reply to: Peter Geoghegan (#35)
#38Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Gavin Flower (#36)
#39Jeff Janes
jeff.janes@gmail.com
In reply to: Gavin Flower (#36)
In reply to: Jeff Janes (#39)
#41Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Jeff Janes (#39)
#42Stephen Frost
sfrost@snowman.net
In reply to: Martijn van Oosterhout (#32)
#43Stephen Frost
sfrost@snowman.net
In reply to: Peter Geoghegan (#40)
#44Josh Berkus
josh@agliodbs.com
In reply to: KONDO Mitsumasa (#3)
#45Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Peter Geoghegan (#40)
In reply to: Stephen Frost (#43)
In reply to: Josh Berkus (#44)
#48Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Josh Berkus (#44)
In reply to: Gavin Flower (#45)
#50Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Peter Geoghegan (#49)
In reply to: Peter Geoghegan (#47)
In reply to: Gavin Flower (#50)
#53Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Geoghegan (#51)
#54Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Geoghegan (#47)
In reply to: Andrew Dunstan (#54)
#56Josh Berkus
josh@agliodbs.com
In reply to: Jeff Janes (#23)
In reply to: Alvaro Herrera (#53)
#58KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: KONDO Mitsumasa (#3)
#59KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: KONDO Mitsumasa (#58)
#60Fujii Masao
masao.fujii@gmail.com
In reply to: Peter Geoghegan (#57)
In reply to: Fujii Masao (#60)
#62KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Peter Geoghegan (#57)
In reply to: KONDO Mitsumasa (#62)
#64KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Peter Geoghegan (#63)
#65KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#60)
#66Rajeev rastogi
rajeev.rastogi@huawei.com
In reply to: KONDO Mitsumasa (#59)
#67KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: KONDO Mitsumasa (#64)
#68Simon Riggs
simon@2ndQuadrant.com
In reply to: KONDO Mitsumasa (#67)
#69Andrew Dunstan
andrew@dunslane.net
In reply to: Simon Riggs (#68)
In reply to: Simon Riggs (#68)
#71Simon Riggs
simon@2ndQuadrant.com
In reply to: Peter Geoghegan (#70)
#72KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Simon Riggs (#71)
#73Robert Haas
robertmhaas@gmail.com
In reply to: KONDO Mitsumasa (#72)
#74KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Robert Haas (#73)
In reply to: KONDO Mitsumasa (#74)
#76Andrew Dunstan
andrew@dunslane.net
In reply to: KONDO Mitsumasa (#74)
#77KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Andrew Dunstan (#76)
#78KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Peter Geoghegan (#75)
#79Andrew Dunstan
andrew@dunslane.net
In reply to: KONDO Mitsumasa (#77)
#80Simon Riggs
simon@2ndQuadrant.com
In reply to: Simon Riggs (#68)
#81Mitsumasa KONDO
kondo.mitsumasa@gmail.com
In reply to: Simon Riggs (#80)
#82Simon Riggs
simon@2ndQuadrant.com
In reply to: KONDO Mitsumasa (#78)
#83KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Andrew Dunstan (#79)
#84KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Mitsumasa KONDO (#81)
#85Andrew Dunstan
andrew@dunslane.net
In reply to: KONDO Mitsumasa (#83)
#86Mitsumasa KONDO
kondo.mitsumasa@gmail.com
In reply to: Andrew Dunstan (#85)
#87Andrew Dunstan
andrew@dunslane.net
In reply to: Mitsumasa KONDO (#86)
In reply to: Andrew Dunstan (#85)
#89Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Geoghegan (#88)
In reply to: Andrew Dunstan (#89)
#91Rajeev rastogi
rajeev.rastogi@huawei.com
In reply to: KONDO Mitsumasa (#84)
#92KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Rajeev rastogi (#91)
#93Tom Lane
tgl@sss.pgh.pa.us
In reply to: KONDO Mitsumasa (#92)
In reply to: Tom Lane (#93)
#95KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Tom Lane (#93)
#96Rajeev rastogi
rajeev.rastogi@huawei.com
In reply to: KONDO Mitsumasa (#92)
#97Magnus Hagander
magnus@hagander.net
In reply to: KONDO Mitsumasa (#95)
#98Magnus Hagander
magnus@hagander.net
In reply to: Peter Geoghegan (#94)
#99Simon Riggs
simon@2ndQuadrant.com
In reply to: Magnus Hagander (#98)
#100Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Geoghegan (#94)
#101Josh Berkus
josh@agliodbs.com
In reply to: Jeff Janes (#23)
#102Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#100)
#103Josh Berkus
josh@agliodbs.com
In reply to: Jeff Janes (#23)
In reply to: Andrew Dunstan (#100)
#105Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Geoghegan (#104)
#106Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#105)
#107KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Peter Geoghegan (#94)
#108KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Rajeev rastogi (#96)
#109KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Tom Lane (#106)
In reply to: KONDO Mitsumasa (#107)
#111Tom Lane
tgl@sss.pgh.pa.us
In reply to: KONDO Mitsumasa (#109)
#112Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#110)
#113Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#112)
#114Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#113)
#115Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#114)
#116Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#115)
In reply to: Tom Lane (#116)
#118Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#117)
In reply to: Tom Lane (#118)
#120Mitsumasa KONDO
kondo.mitsumasa@gmail.com
In reply to: Peter Geoghegan (#119)
In reply to: Mitsumasa KONDO (#120)
#122Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#121)
#123KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: KONDO Mitsumasa (#108)
#124Rajeev rastogi
rajeev.rastogi@huawei.com
In reply to: KONDO Mitsumasa (#123)
#125KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Rajeev rastogi (#124)
#126KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: KONDO Mitsumasa (#123)
#127Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: KONDO Mitsumasa (#126)
#128Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#127)
#129Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#128)
#130Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#129)
#131Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#128)
#132Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andrew Dunstan (#131)
#133Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#132)
#134Andrew Dunstan
andrew@dunslane.net
In reply to: Alvaro Herrera (#132)
#135Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#134)
#136Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#135)
#137Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#136)
#138Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: Andrew Dunstan (#137)
#139Andrew Dunstan
andrew@dunslane.net
In reply to: Arne Scheffer (#138)
#140David G. Johnston
david.g.johnston@gmail.com
In reply to: Andrew Dunstan (#139)
#141Andrew Dunstan
andrew@dunslane.net
In reply to: David G. Johnston (#140)
#142Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: Andrew Dunstan (#139)
#143Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: David G. Johnston (#140)
#144Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: Andrew Dunstan (#141)
#145Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: Arne Scheffer (#144)
#146Andrew Dunstan
andrew@dunslane.net
In reply to: Arne Scheffer (#145)
#147Arne Scheffer
scheffa@uni-muenster.de
In reply to: Andrew Dunstan (#146)
#148Andrew Dunstan
andrew@dunslane.net
In reply to: Arne Scheffer (#147)
#149Arne Scheffer
arne.scheffer@uni-muenster.de
In reply to: Andrew Dunstan (#148)
#150Petr Jelinek
petr@2ndquadrant.com
In reply to: Andrew Dunstan (#148)
In reply to: Petr Jelinek (#150)
#152Petr Jelinek
petr@2ndquadrant.com
In reply to: Peter Geoghegan (#151)
#153Andrew Dunstan
andrew@dunslane.net
In reply to: Petr Jelinek (#152)
#154Andrew Dunstan
andrew@dunslane.net
In reply to: Andrew Dunstan (#153)
#155Petr Jelinek
petr@2ndquadrant.com
In reply to: Andrew Dunstan (#153)
#156Petr Jelinek
petr@2ndquadrant.com
In reply to: Andrew Dunstan (#154)
#157Petr Jelinek
petr@2ndquadrant.com
In reply to: Petr Jelinek (#156)
#158Andres Freund
andres@anarazel.de
In reply to: Petr Jelinek (#157)
#159Petr Jelinek
petr@2ndquadrant.com
In reply to: Andres Freund (#158)
#160Peter Eisentraut
peter_e@gmx.net
In reply to: David G. Johnston (#140)
#161Andrew Dunstan
andrew@dunslane.net
In reply to: Peter Eisentraut (#160)
#162David Fetter
david@fetter.org
In reply to: Peter Eisentraut (#160)
#163Andrew Dunstan
andrew@dunslane.net
In reply to: David Fetter (#162)
#164David G. Johnston
david.g.johnston@gmail.com
In reply to: Andrew Dunstan (#163)
#165David Fetter
david@fetter.org
In reply to: David G. Johnston (#164)
#166David G. Johnston
david.g.johnston@gmail.com
In reply to: David Fetter (#165)
#167Andrew Dunstan
andrew@dunslane.net
In reply to: Petr Jelinek (#155)
#168Petr Jelinek
petr@2ndquadrant.com
In reply to: Andrew Dunstan (#167)
#169Petr Jelinek
petr@2ndquadrant.com
In reply to: Petr Jelinek (#168)
#170Andres Freund
andres@anarazel.de
In reply to: Andrew Dunstan (#167)