*_collapse_limit, geqo_threshold

Started by Robert Haas · almost 17 years ago · 75 messages · pgsql-hackers
#1Robert Haas
robertmhaas@gmail.com

I think we should try to do something about join_collapse_limit,
from_collapse_limit, and geqo_threshold for 8.5.

http://archives.postgresql.org/message-id/9134.1243289706@sss.pgh.pa.us
http://archives.postgresql.org/message-id/603c8f070905251800g5b86d2dav26eca7f417d15dbf@mail.gmail.com

I'm still of the opinion that join_collapse_limit is a loaded
foot-gun, because I don't think that users will expect that a join
specified this way:

SELECT ... FROM a JOIN b ON Pab JOIN c ON Pac JOIN d ON Pad ...

will behave differently than one specified this way:

SELECT ... FROM a, b, c, d WHERE Pab AND Pac AND Pad ...

The whole purpose of join_collapse_limit in the first instance is to
prevent planning time from getting out of control, but I don't see how
we can view it as a very effective safety valve when it depends so
heavily on which syntax is used. If the planning time for an N-way
join is excessive, then we're going to have a problem with excessive
planning time whenever the second syntax is selected, and I don't see
any reason to believe that users see the second syntax as "dangerous"
in terms of planning time but the first syntax as "safer".

One possibility would be to remove join_collapse_limit entirely, but
that would eliminate one possibly-useful piece of functionality that
it currently enables: namely, the ability to exactly specify the join
order by setting join_collapse_limit to 1. So one possibility would
be to rename the variable to something like explicit_join_order and make
it a Boolean; another possibility would be to change the default value
to INT_MAX.
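
To make that last use-case concrete, here is a minimal sketch (hypothetical
column names, reusing the a, b, c, d tables from the example above):

```sql
-- Sketch: with join_collapse_limit = 1 the planner keeps explicit JOIN
-- syntax in the order written, so this pins the join order to
-- ((a JOIN b) JOIN c) JOIN d. Column names are hypothetical.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT *
FROM a
JOIN b ON b.a_id = a.id
JOIN c ON c.a_id = a.id
JOIN d ON d.a_id = a.id;
COMMIT;
```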

The approach I've taken in the attached patch is to make 0 mean
"unlimited" and make that the default value. I don't have a strong
feeling about whether that's better than the other two options,
although it seems cleaner to me or I'd not have written the patch that
way. We could also consider adopting this same approach for
from_collapse_limit, though for some reason that behavior seems marginally
less pathological to me.

At any rate, regardless of whether this patch (or one of the other
approaches mentioned above) is adopted for 8.5, I think we should
raise the default values for whatever is left. The defaults basically
haven't been modified since they were put in, and my experience is
that even queries with 10 to 15 joins perform acceptably for OLTP
workloads, which are exactly the workloads where query planning time
is most likely to be an issue. So I would propose raising each of the
limits by 4 (to 12 for from_collapse_limit and join_collapse_limit if
we don't unlimit them entirely, and to 16 for geqo_threshold). I'm
interested in hearing from anyone who has practical experience with
tuning these variables, or any ideas on what we should test to get a
better idea as to how to set them.
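
Spelled out as session-level settings, the proposal amounts to something
like this (a sketch; 8 and 12 are the long-standing defaults implied above):

```sql
-- Current defaults for the three GUCs under discussion:
SHOW from_collapse_limit;   -- 8
SHOW join_collapse_limit;   -- 8
SHOW geqo_threshold;        -- 12
-- Raising each by 4, as proposed (assuming the limits are kept at all):
SET from_collapse_limit = 12;
SET join_collapse_limit = 12;
SET geqo_threshold = 16;
```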

Thanks,

...Robert

Attachments:

unlimit_join_collapse.patch (text/x-diff; charset=US-ASCII) +19 −19
#2Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#1)
Re: *_collapse_limit, geqo_threshold

Robert Haas <robertmhaas@gmail.com> wrote:

I'm interested in hearing from anyone who has practical experience
with tuning these variables, or any ideas on what we should test to
get a better idea as to how to set them.

I don't remember any clear resolution to the wild variations in plan
time mentioned here:

http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php

I think it would be prudent to try to figure out why small changes in
the query caused the large changes in the plan times Andres was
seeing. Has anyone else ever seen such behavior? Can we get
examples? (It should be enough to get the statistics and the schema,
since this is about planning time, not run time.)

My own experience is that when we investigate a complaint about a
query not performing to user or application programmer expectations,
we have sometimes found that boosting these values has helped. We
boost them overall (in postgresql.conf) without ever having seen a
downside. We currently have geqo disabled and set both collapse
limits to 20. We should probably just set them both to several
hundred and not wait until some query with more than 20 tables
performs badly, but I'm not sure we have any of those yet.

In short, my experience is that when setting these higher has made any
difference at all, it has always generated a plan that saved more time
than the extra planning required. Well, I'd bet that there has been
an increase in the plan time of some queries which wound up with the
same plan anyway, but the difference has never been noticeable; the
net effect has been a plus for us.

I guess the question is whether there is anyone who has had a contrary
experience. (There must have been some benchmarks to justify adding
geqo at some point?)

-Kevin

#3Robert Haas
robertmhaas@gmail.com
In reply to: Kevin Grittner (#2)
Re: *_collapse_limit, geqo_threshold

On Jul 7, 2009, at 9:31 AM, "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

Robert Haas <robertmhaas@gmail.com> wrote:

I'm interested in hearing from anyone who has practical experience
with tuning these variables, or any ideas on what we should test to
get a better idea as to how to set them.

I don't remember any clear resolution to the wild variations in plan
time mentioned here:

http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php

I think it would be prudent to try to figure out why small changes in
the query caused the large changes in the plan times Andres was
seeing. Has anyone else ever seen such behavior? Can we get
examples? (It should be enough to get the statistics and the schema,
since this is about planning time, not run time.)

Well, there's not really enough information there to figure out
specifically what was happening, but from 10,000 feet,
join_collapse_limit and from_collapse_limit constrain the join order.
If the estimates are all accurate, setting them to a value < infinity
will either leave the plans unchanged or make them worse. If it's
making them better, then the estimates are off and the join order
constraint happens to be preventing the planner from considering the
case that really hurts you. But that's mostly luck.

My own experience is that when we investigate a complaint about a
query not performing to user or application programmer expectations,
we have sometimes found that boosting these values has helped. We
boost them overall (in postgresql.conf) without ever having seen a
downside. We currently have geqo disabled and set both collapse
limits to 20. We should probably just set them both to several
hundred and not wait until some query with more than 20 tables
performs badly, but I'm not sure we have any of those yet.

In short, my experience is that when setting these higher has made any
difference at all, it has always generated a plan that saved more time
than the extra planning required. Well, I'd bet that there has been
an increase in the plan time of some queries which wound up with the
same plan anyway, but the difference has never been noticeable; the
net effect has been a plus for us.

You have a big dataset AIUI so the right values for you might be too
high for some people with, say, OLTP workloads.

I guess the question is whether there is anyone who has had a contrary
experience. (There must have been some benchmarks to justify adding
geqo at some point?)

GEQO or something like it is certainly needed for very large planning
problems. The non-GEQO planner takes exponential time in the size of
the problem, so at some point that's going to get ugly. But
triggering it at the level we do now seems unnecessarily pessimistic
about what constitutes too much planning.
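
As an aside (my arithmetic, not from the thread): the number of distinct
bushy join trees over n base relations grows super-exponentially, which is
the blow-up referred to above:

```latex
% n! orderings of the leaves times the (n-1)-th Catalan number of tree
% shapes; already about 1.8 x 10^10 possible trees at n = 10.
T(n) \;=\; n! \cdot \frac{1}{n}\binom{2(n-1)}{n-1} \;=\; \frac{(2n-2)!}{(n-1)!}
```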

...Robert

#4Andres Freund
andres@anarazel.de
In reply to: Kevin Grittner (#2)
Re: *_collapse_limit, geqo_threshold

Hi Kevin, Hi all,

On Tuesday 07 July 2009 16:31:14 Kevin Grittner wrote:

Robert Haas <robertmhaas@gmail.com> wrote:

I'm interested in hearing from anyone who has practical experience
with tuning these variables, or any ideas on what we should test to
get a better idea as to how to set them.

I don't remember any clear resolution to the wild variations in plan
time mentioned here:

http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php

I think it would be prudent to try to figure out why small changes in
the query caused the large changes in the plan times Andres was
seeing. Has anyone else ever seen such behavior? Can we get
examples? (It should be enough to get the statistics and the schema,
since this is about planning time, not run time.)

I don't think it is surprising that small changes to those variables change
the plan time widely on a complex query.
E.g. an increase of one in from_collapse_limit can completely change the
query tree the optimizer starts from, because more subqueries get inlined.

I don't know the exact behaviour in the case where more joins exist than
join_collapse_limit, but it is not hard to imagine that this can also
dramatically change the plan complexity. As there were quite a few different
views involved, all the changes to the *_limit variables could have triggered
plan changes in different parts of the query.

I plan to revisit the issue you referenced, btw. At first it was release
phase, and then I could not motivate myself to investigate a bit more...

The mail you referenced contains a completely bogus and ugly query that shows
similar symptoms, by the way. I guess the variations would be even bigger if
differently sized views/subqueries were used.

My own experience is that when we investigate a complaint about a
query not performing to user or application programmer expectations,
we have sometimes found that boosting these values has helped. We
boost them overall (in postgresql.conf) without ever having seen a
downside. We currently have geqo disabled and set both collapse
limits to 20. We should probably just set them both to several
hundred and not wait until some query with more than 20 tables
performs badly, but I'm not sure we have any of those yet.

I have not found consistently better results with geqo enabled. Some queries
are better, others worse. Often the comparison is not reliably reproducible.
(The possibility to set geqo to some "known" starting value would be nice for
such comparisons.)

I cannot reasonably plan some queries with join_collapse_limit set to 20. At
least not without setting the geqo limit very low and geqo_effort to a low
value.
So I would definitely not agree that removing j_c_l is a good idea.

Andres

#5Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#4)
Re: *_collapse_limit, geqo_threshold

Andres Freund <andres@anarazel.de> writes:

I cannot reasonably plan some queries with join_collapse_limit set to 20. At
least not without setting the geqo limit very low and geqo_effort to a low
value.
So I would definitely not agree that removing j_c_l is a good idea.

Can you show some specific examples? All of this discussion seems like
speculation in a vacuum ...

regards, tom lane

#6Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#5)
Re: *_collapse_limit, geqo_threshold

On Tuesday 07 July 2009 17:40:50 Tom Lane wrote:

Andres Freund <andres@anarazel.de> writes:

I cannot reasonably plan some queries with join_collapse_limit set to 20.
At least not without setting the geqo limit very low and geqo_effort to
a low value.
So I would definitely not agree that removing j_c_l is a good idea.

Can you show some specific examples? All of this discussion seems like
speculation in a vacuum ...

I still may not publish the original schema (and I still have not heard any
reasonable reason for that) - the crazy query in the referenced email shows
similar problems and has a somewhat similar structure.

If that is not enough, I will try to design a schema that is similar to, yet
different enough from, the original schema. That will take a day or two though.

Andres

#7Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#2)
Re: *_collapse_limit, geqo_threshold

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

I guess the question is whether there is anyone who has had a contrary
experience. (There must have been some benchmarks to justify adding
geqo at some point?)

The CVS history shows that geqo was integrated on 1997-02-19, which
I think means that it must have been developed against Postgres95
(or even earlier Berkeley releases?). That was certainly before any
of the current community's work on the optimizer began. A quick look
at the code as it stood on that date suggests that the regular
optimizer's behavior for large numbers of rels was a lot worse than it
is today --- notably, it looks like it would consider a whole lot more
Cartesian-product joins than we do now; especially if you had "bushy"
mode turned on, which you'd probably have to do to find good plans in
complicated cases. There were also a bunch of enormous inefficiencies
that we've whittled down over time, such as the mechanisms for comparing
pathkeys or the use of integer Lists to represent relid sets.

So while I don't doubt that geqo was absolutely essential when it was
written, it's fair to question whether it still provides a real win.
And we could definitely stand to take another look at the default
thresholds.

regards, tom lane

#8Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#1)
Re: *_collapse_limit, geqo_threshold

Robert Haas <robertmhaas@gmail.com> writes:

One possibility would be to remove join_collapse_limit entirely, but
that would eliminate one possibly-useful piece of functionality that
it currently enables: namely, the ability to exactly specify the join
order by setting join_collapse_limit to 1. So one possibility would
be to rename the variable to something like explicit_join_order and make
it a Boolean; another possibility would be to change the default value
to INT_MAX.

As the person who put in those thresholds, I kind of prefer going over
to the boolean definition. That was the alternative that we considered;
the numeric thresholds were used instead because they were easy to
implement and seemed to possibly offer more control. But I'm not
convinced that anyone has really used them profitably. I agree that
the ability to use JOIN syntax to specify the join order exactly (with
join_collapse_limit=1) is the only really solid use-case anyone has
proposed for either threshold. I'm interested in Andreas' comment that
he has use-cases where using the collapse_limit is better than allowing
geqo to take over for very large problems ... but I think we need to see
those use-cases and see if there's a better fix.

regards, tom lane

#9Greg Stark
gsstark@mit.edu
In reply to: Tom Lane (#7)
Re: *_collapse_limit, geqo_threshold

On Tue, Jul 7, 2009 at 5:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

So while I don't doubt that geqo was absolutely essential when it was
written, it's fair to question whether it still provides a real win.
And we could definitely stand to take another look at the default
thresholds

The whole point of these parameters is to save time planning large
complex queries -- which are rarely going to be the kind of short,
simple, fast-to-execute oltp queries where planning time makes a big
difference. The larger and more complex the query, the more likely it is to
be a long-running dss or olap style query where shaving one percent
off the runtime would be worth spending many seconds planning.

I propose that there's a maximum reasonable planning time within which a
programmer would normally expect the database to be able to come up
with a plan for virtually any query. Personally I would be
surprised if a plain EXPLAIN took more than, say, 30s, perhaps even
something more like 10s.

We should benchmark the planner on increasingly large sets of
relations on a typical developer machine and set geqo to whatever
value the planner can handle in that length of time. I suspect even at
10s you're talking about substantially larger values than the current
default.
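
One way such a benchmark could be run, sketched with hypothetical generated
tables: a plain EXPLAIN plans the query without executing it, so the elapsed
time is dominated by planning.

```sql
-- Sketch of a planning-time probe (hypothetical tables; extend the
-- pattern to more tables and watch the timing grow).
CREATE TABLE t1 (id int PRIMARY KEY, ref int);
CREATE TABLE t2 (id int PRIMARY KEY, ref int);
CREATE TABLE t3 (id int PRIMARY KEY, ref int);
CREATE TABLE t4 (id int PRIMARY KEY, ref int);
SET geqo = off;                  -- exercise the exhaustive planner
SET join_collapse_limit = 100;   -- let it consider all join orders
\timing on                       -- psql meta-command: report elapsed time
EXPLAIN SELECT *
FROM t1
JOIN t2 ON t2.ref = t1.id
JOIN t3 ON t3.ref = t2.id
JOIN t4 ON t4.ref = t3.id;
```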

--
greg
http://mit.edu/~gsstark/resume.pdf

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Stark (#9)
Re: *_collapse_limit, geqo_threshold

Greg Stark <gsstark@mit.edu> writes:

We should benchmark the planner on increasingly large sets of
relations on a typical developer machine and set geqo to whatever
value the planner can handle in that length of time. I suspect even at
10s you're talking about substantially larger values than the current
default.

The problem is to find some realistic "benchmark" cases. That's one
reason why I was pestering Andreas to see his actual use cases ...

regards, tom lane

#11Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#10)
Re: *_collapse_limit, geqo_threshold

On Tuesday 07 July 2009 19:45:44 Tom Lane wrote:

Greg Stark <gsstark@mit.edu> writes:

We should benchmark the planner on increasingly large sets of
relations on a typical developer machine and set geqo to whatever
value the planner can handle in that length of time. I suspect even at
10s you're talking about substantially larger values than the current
default.

The problem is to find some realistic "benchmark" cases. That's one
reason why I was pestering Andreas to see his actual use cases ...

I will start writing a reduced/altered schema tomorrow then...

Andres

PS: It's "Andres" btw ;-)

#12Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#8)
Re: *_collapse_limit, geqo_threshold

On Jul 7, 2009, at 12:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

One possibility would be to remove join_collapse_limit entirely, but
that would eliminate one possibly-useful piece of functionality that
it currently enables: namely, the ability to exactly specify the join
order by setting join_collapse_limit to 1. So one possibility would
be to rename the variable to something like explicit_join_order and make
it a Boolean; another possibility would be to change the default value
to INT_MAX.

As the person who put in those thresholds, I kind of prefer going over
to the boolean definition.

I'm OK with that, but out of conservatism I suggested changing the
default to unlimited in this release. If by chance there is something
we're missing and these parameters are doing someone any good, we can
suggest that they set them back to the old values rather than telling
them to use a private build. If on the other hand we don't get any
complaints, we can remove them with greater confidence in a future
release. But maybe that's too conservative.

Now, here's another thought: if we think it's reasonable for people to
want to explicitly specify the join order, a GUC isn't really the best
fit, because it's all or nothing. Maybe we'd be better off dropping
the GUCs entirely and adding some other bit of syntax that forces the
join order, but only for that particular join.

That was the alternative that we considered;
the numeric thresholds were used instead because they were easy to
implement and seemed to possibly offer more control. But I'm not
convinced that anyone has really used them profitably. I agree that
the ability to use JOIN syntax to specify the join order exactly (with
join_collapse_limit=1) is the only really solid use-case anyone has
proposed for either threshold. I'm interested in Andreas' comment that
he has use-cases where using the collapse_limit is better than allowing
geqo to take over for very large problems ... but I think we need to see
those use-cases and see if there's a better fix.

regards, tom lane

Agreed.

...Robert

#13Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Greg Stark (#9)
Re: *_collapse_limit, geqo_threshold

Le 7 juil. 09 à 19:37, Greg Stark a écrit :

I propose that there's a maximum reasonable planning time

It sounds so much like the planner_effort GUC that has been talked
about in the past...
http://archives.postgresql.org/pgsql-performance/2009-05/msg00137.php

...except this time you want to measure it in seconds. The problem
with measuring it in seconds is that when the time has elapsed, it's
not easy to switch from the classic planner to geqo without beginning
from scratch again.
Would it be possible to start geqo from the current planner state?

Another idea would be to have more complex metrics for deciding when
to run geqo, that is guesstimate the query planning difficulty very
quickly, based on more than just the number of relations in the from:
presence of subqueries, UNION, EXISTS, IN, or OR branches in the WHERE
clause, number of operators and index support for them, maybe some
information from the stats too... The idea would be to
- set an effort threshold from where we'd better run geqo (GUC,
disabling possible)
- if threshold enabled, compute metrics
- if metric >= threshold, use geqo, if not, classic planner
- maybe default to disabling the threshold

It seems it'd be easier to set the new GUC on a per query basis...

The obvious problem with this approach is that computing the metric will
take some time better spent at planning queries, but maybe we could
have a fast path for easy queries, which will look a lot like $subject.

Regards,
--
dim

I hope this will give readers better ideas than its bare content...

#14Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Robert Haas (#12)
Re: *_collapse_limit, geqo_threshold

Le 7 juil. 09 à 21:16, Robert Haas a écrit :

Now, here's another thought: if we think it's reasonable for people
to want to explicitly specify the join order, a GUC isn't really the
best fit, because it's all or nothing. Maybe we'd be better off
dropping the GUCs entirely and adding some other bit of syntax that
forces the join order, but only for that particular join.

MySQL calls them Straight Joins:
http://www.mysqlperformanceblog.com/2006/12/28/mysql-session-variables-and-hints/
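
For reference, the MySQL hint looks like this (MySQL syntax, not PostgreSQL;
table and column names hypothetical):

```sql
-- MySQL: STRAIGHT_JOIN forces the left table to be read before the
-- right one, i.e. a per-join ordering hint.
SELECT *
FROM a STRAIGHT_JOIN b ON b.a_id = a.id;
```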

I'm not sure our best move here would be in this direction :)
--
dim

#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Dimitri Fontaine (#13)
Re: *_collapse_limit, geqo_threshold

Dimitri Fontaine <dfontaine@hi-media.com> writes:

Another idea would be to have more complex metrics for deciding when
to run geqo, that is guesstimate the query planning difficulty very
quickly, based on more than just the number of relations in the from:
presence of subqueries, UNION, EXISTS, IN, or branches in where
clause, number of operators and index support for them, maybe some
information from the stats too...

Pointless, since GEQO is only concerned with examining alternative join
orderings. I see no reason whatever to think that number-of-relations
isn't the correct variable to test to decide whether to use it.

regards, tom lane

#16Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Dimitri Fontaine (#14)
Re: *_collapse_limit, geqo_threshold

Robert Haas <robertmhaas@gmail.com> wrote:

if we think it's reasonable for people to want to explicitly specify
the join order

Regardless of the syntax (GUC or otherwise), that is an optimizer
hint. I thought we were trying to avoid those.

Although -- we do have all those enable_* GUC values which are also
optimizer hints; perhaps this should be another of those?
enable_join_reorder?

-Kevin

#17Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Tom Lane (#15)
Re: *_collapse_limit, geqo_threshold

Le 7 juil. 09 à 21:45, Tom Lane a écrit :

Dimitri Fontaine <dfontaine@hi-media.com> writes:

Another idea would be to have more complex metrics for deciding when
to run geqo

Pointless, since GEQO is only concerned with examining alternative join
orderings. I see no reason whatever to think that number-of-relations
isn't the correct variable to test to decide whether to use it.

Oh. It seems I prefer showing my ignorance rather than learning enough
first. Writing mails is so much easier...

Sorry for the noise,
--
dim

#18Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#16)
Re: *_collapse_limit, geqo_threshold

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:

Although -- we do have all those enable_* GUC values which are also
optimizer hints; perhaps this should be another of those?
enable_join_reorder?

Not a bad suggestion, especially since turning it off would usually be
considered just about as bad an idea as turning off the other ones.

regards, tom lane

#19Robert Haas
robertmhaas@gmail.com
In reply to: Kevin Grittner (#16)
Re: *_collapse_limit, geqo_threshold

On Jul 7, 2009, at 3:03 PM, "Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

Robert Haas <robertmhaas@gmail.com> wrote:

if we think it's reasonable for people to want to explicitly specify
the join order

Regardless of the syntax (GUC or otherwise), that is an optimizer
hint. I thought we were trying to avoid those.

I guess my point is that there's not a lot of obvious benefit in
allowing the functionality to exist but handicapping it so that it's
useful in as few cases as possible. If the consensus is that we want
half a feature (but not more or less than half), that's OK with me,
but it's not obvious to me why we should choose to want that.

...Robert

#20Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#19)
Re: *_collapse_limit, geqo_threshold

Robert Haas <robertmhaas@gmail.com> writes:

I guess my point is that there's not a lot of obvious benefit in
allowing the functionality to exist but handicapping it so that it's
useful in as few cases as possible. If the consensus is that we want
half a feature (but not more or less than half), that's OK with me,
but it's not obvious to me why we should choose to want that.

Well, the question to my mind is whether the collapse_threshold GUCs in
their current form actually represent a feature ;-). They were put
in pretty much entirely on speculation that someone might find them
useful. Your argument is that they are not only useless but a foot-gun,
and so far we haven't got any clear contrary evidence. If we accept
that argument then we should take them out, not just change the default.

My own thought is that from_collapse_limit has more justification,
since it basically acts to stop a subquery from being flattened when
that would make the parent query too complex, and that seems like a
more understandable and justifiable behavior than treating JOIN
syntax specially. But I'm fine with removing join_collapse_limit
or reducing it to a boolean.
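
That flattening behavior can be sketched as follows (hypothetical tables):

```sql
-- With a low from_collapse_limit the sub-select is planned as a unit;
-- with a higher limit its FROM list is pulled up into the parent query,
-- enlarging the parent's join search space.
SET from_collapse_limit = 1;   -- keep the subquery separate
SELECT *
FROM a,
     (SELECT b.id, c.val FROM b, c WHERE c.b_id = b.id) AS sub
WHERE sub.id = a.b_ref;
```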

regards, tom lane

#21Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#19)
#22Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#20)
#23Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#22)
#24Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#20)
#25Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#24)
#26Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#25)
#27Jan Urbański
wulczer@wulczer.org
In reply to: Tom Lane (#7)
#28Noah Misch
noah@leadboat.com
In reply to: Kevin Grittner (#2)
In reply to: Noah Misch (#28)
#30Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#26)
#31Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#30)
In reply to: Kevin Grittner (#31)
#33Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#28)
#34Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#31)
In reply to: Tom Lane (#34)
#36Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#30)
#37Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#36)
#38Joshua Tolley
eggyknap@gmail.com
In reply to: Tom Lane (#37)
#39Tom Lane
tgl@sss.pgh.pa.us
In reply to: Joshua Tolley (#38)
#40Noah Misch
noah@leadboat.com
In reply to: Tom Lane (#33)
#41Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#40)
#42Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#37)
#43Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Robert Haas (#42)
#44Noah Misch
noah@leadboat.com
In reply to: Robert Haas (#1)
#45Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#36)
#46Peter Hunsberger
peter.hunsberger@gmail.com
In reply to: Tom Lane (#37)
#47Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#5)
#48Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Joshua Tolley (#38)
#49Andres Freund
andres@anarazel.de
In reply to: Kevin Grittner (#48)
#50Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#30)
#51Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Robert Haas (#50)
#52Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#50)
#53Robert Haas
robertmhaas@gmail.com
In reply to: Dimitri Fontaine (#51)
#54Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#53)
#55Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#54)
#56Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#55)
#57Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#52)
#58Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Robert Haas (#50)
#59Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#56)
#60Ron Mayer
rm_pg@cheapcomplexdevices.com
In reply to: Tom Lane (#59)
#61Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#58)
#62Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#59)
#63Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#34)
#64Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#63)
#65Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#64)
#66Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#65)
In reply to: Tom Lane (#64)
#68Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#64)
#69Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#68)
#70Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#69)
#71Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#70)
#72Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#71)
#73Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#72)
#74Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#63)
#75marcin mank
marcin.mank@gmail.com
In reply to: Noah Misch (#40)