add_path optimization

Started by Robert Haas on 31 January 2009 - 59 messages - pgsql-hackers
#1 Robert Haas
robertmhaas@gmail.com

I've been doing some benchmarking and profiling on the PostgreSQL
query planner, and it seems that (at least for the sorts of queries
that I typically run) the dominant cost is add_path(). I've been able
to find two optimizations that seem to help significantly:

1. add_path() often calls compare_fuzzy_path_costs() twice on the same
pair of paths, and when the paths compare equal on one criterion, some
comparisons are duplicated. I've refactored this function to return
the results of both calculations without repeating any floating-point
arithmetic.
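
Read on its own, the shape of that refactoring is roughly the following (a standalone sketch, not the actual patch: SamplePath, FUZZ_FACTOR, and the function names are invented here; the `(s == 0) ? t : s` fallback mirrors the snippet quoted later in the thread):

```c
#include <assert.h>

/* Costs within 1% of one another compare as fuzzily equal (the 1% is an
 * invented stand-in for the planner's real fuzz factor). */
#define FUZZ_FACTOR 1.01

typedef struct SamplePath
{
    double startup_cost;
    double total_cost;
} SamplePath;

static int
fuzzy_cmp(double a, double b)
{
    if (a > b * FUZZ_FACTOR)
        return 1;
    if (b > a * FUZZ_FACTOR)
        return -1;
    return 0;
}

/*
 * Compare two paths on both cost criteria in one call.  The total-cost
 * result is returned; the startup-cost result comes back via *startup_cost,
 * falling back to the total-cost result when the startup costs are fuzzily
 * equal.  One call replaces two, repeating no floating-point arithmetic.
 */
static int
compare_both_fuzzy_costs(const SamplePath *path1, const SamplePath *path2,
                         int *startup_cost)
{
    int s = fuzzy_cmp(path1->startup_cost, path2->startup_cost);
    int t = fuzzy_cmp(path1->total_cost, path2->total_cost);

    *startup_cost = (s == 0) ? t : s;
    return t;
}
```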

2. match_unsorted_outer() adds as many as 5 nested loop joins at a
time with the same set of pathkeys. In my tests, it tended to be ~3 -
cheapest inner, cheapest inner materialized, and cheapest inner index.
Since these all have the same pathkeys, clearly only the one with the
cheapest total cost is in the running for cheapest total cost for that
set of pathkeys, and likewise for startup cost (and the two may be the
same). Yet we compare all of them against the whole pathlist, one
after the other, including (for the most part) the rather expensive
pathkey comparison. I've added a function add_similar_paths() and
refactored match_unsorted_outer() to use it.
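
The dominance argument can be sketched in standalone C (SamplePath and pick_contenders() are invented stand-ins here, not the patch's actual Path or add_similar_paths()):

```c
#include <assert.h>

typedef struct SamplePath
{
    double startup_cost;
    double total_cost;
} SamplePath;

/*
 * Among candidate paths that all share the same pathkeys, only two can
 * possibly survive: the one with the cheapest total cost and the one with
 * the cheapest startup cost (which may be the same path).  Every other
 * candidate is dominated by one of those two, so only the contenders need
 * to be compared against the existing pathlist.
 */
static void
pick_contenders(SamplePath **paths, int npaths,
                int *cheapest_total, int *cheapest_startup)
{
    *cheapest_total = 0;
    *cheapest_startup = 0;
    for (int i = 1; i < npaths; i++)
    {
        if (paths[i]->total_cost < paths[*cheapest_total]->total_cost)
            *cheapest_total = i;
        if (paths[i]->startup_cost < paths[*cheapest_startup]->startup_cost)
            *cheapest_startup = i;
    }
}
```

With candidates whose {startup, total} costs are {10, 100}, {5, 150}, and {20, 90}, the contenders are the third (cheapest total) and the second (cheapest startup); the first never needs a pathkey comparison against the pathlist at all.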

On a couple of complex (and proprietary) queries with 12+ joins each,
I measure a planning time improvement of 8-12% with the attached patch
applied. It would be interesting to try to replicate this on a
publicly available data set, but I don't know of a good one to use.
Suggestions welcome - results of performance testing on your own
favorite big queries even more welcome. Simple test harness also
attached. I took the approach of dropping caches, starting the
server, and then running this 5 times each on several queries,
dropping top and bottom results.
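
For reference, the "drop top and bottom, average the rest" step is just a trimmed mean (a standalone sketch; the actual harness is the attached explain_loop.pl):

```c
#include <assert.h>

/*
 * Average n timings after discarding the single highest and the single
 * lowest value; requires n >= 3.
 */
static double
trimmed_mean(const double *timings, int n)
{
    double sum = 0.0;
    double lo = timings[0];
    double hi = timings[0];

    assert(n >= 3);
    for (int i = 0; i < n; i++)
    {
        sum += timings[i];
        if (timings[i] < lo)
            lo = timings[i];
        if (timings[i] > hi)
            hi = timings[i];
    }
    return (sum - lo - hi) / (n - 2);
}
```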

...Robert

Attachments:

fast_add_path.patch (text/x-patch; charset=US-ASCII; +130 -82)
explain_loop.pl (application/x-perl)
#2 David Fetter
david@fetter.org
In reply to: Robert Haas (#1)
Re: add_path optimization

On Sat, Jan 31, 2009 at 11:37:39PM -0500, Robert Haas wrote:

I've been doing some benchmarking and profiling on the PostgreSQL
query planner, and it seems that (at least for the sorts of queries
that I typically run) the dominant cost is add_path(). I've been
able to find two optimizations that seem to help significantly:

Are there any cases you've found where this change significantly
impairs performance, and if so, how did you find them? If not, would
you be up for trying to find some?

Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

#3 Robert Haas
robertmhaas@gmail.com
In reply to: David Fetter (#2)
Re: add_path optimization

On Sun, Feb 1, 2009 at 12:03 PM, David Fetter <david@fetter.org> wrote:

On Sat, Jan 31, 2009 at 11:37:39PM -0500, Robert Haas wrote:

I've been doing some benchmarking and profiling on the PostgreSQL
query planner, and it seems that (at least for the sorts of queries
that I typically run) the dominant cost is add_path(). I've been
able to find two optimizations that seem to help significantly:

Are there any cases you've found where this change significantly
impairs performance, and if so, how did you find them? If not, would
you be up for trying to find some?

Basically, the patch is just performing the same operations with less
overhead. For example, add_similar_path() is pretty much the same
thing as repeated calls to add_path(), but you save the cost of
unnecessary pathkey comparisons and maybe some ListCell alloc/free
cycles. So I'm not really sure how it could make things worse, but
I'd be interested in knowing if there's a case that you're worried
about. It's pretty low-level code, so I don't think there's room for
a lot of surprises.

...Robert

#4 Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Robert Haas (#1)
Re: add_path optimization

On Sat, Jan 31, 2009 at 11:37 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I've been doing some benchmarking and profiling on the PostgreSQL
query planner, and it seems that (at least for the sorts of queries
that I typically run) the dominant cost is add_path(). I've been able
to find two optimizations that seem to help significantly:

1. add_path() often calls compare_fuzzy_path_costs() twice on the same

2. match_unsorted_outer() adds as many as 5 nested loop joins at a

If there are two optimizations, maybe two separate patches would be
better, so they can be tested individually.

--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

#5 Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#4)
Re: add_path optimization

On Sun, Feb 1, 2009 at 1:34 PM, Jaime Casanova
<jcasanov@systemguards.com.ec> wrote:

On Sat, Jan 31, 2009 at 11:37 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I've been doing some benchmarking and profiling on the PostgreSQL
query planner, and it seems that (at least for the sorts of queries
that I typically run) the dominant cost is add_path(). I've been able
to find two optimizations that seem to help significantly:

1. add_path() often calls compare_fuzzy_path_costs() twice on the same

2. match_unsorted_outer() adds as many as 5 nested loop joins at a

if there are two optimizations maybe two different patches are better
to test them individually

I did test the changes independently and either one alone has a
measurable benefit (with sufficiently careful measuring), but they're
closely related changes so I think it makes more sense as one patch.
It's only 84 insertions and 46 deletions, so we're not talking about
some massive patch that will be difficult to review. There's also
some synergy between the two changes: I don't think either works as
well without the other. But please feel free to test it for yourself
and let me know what you find. The changes are all very simple - the
hard part was figuring out which changes would actually produce a
benefit.

...Robert

#6 Grzegorz Jaskiewicz
gj@pointblue.com.pl
In reply to: Robert Haas (#3)
Re: add_path optimization

Disclaimer: I don't know that bit of PostgreSQL code; in fact, this
is the first time I've seen it.

*** a/src/backend/optimizer/path/joinpath.c
--- b/src/backend/optimizer/path/joinpath.c
***************
*** 473,478 **** match_unsorted_outer(PlannerInfo *root,
--- 473,481 ----

if (nestjoinOK)
{
+ Path *paths[5];

I don't like the fact that you hardcoded that here. I know that you
are trying to fold a few calls into one here, but still... ugly.

static int
compare_fuzzy_path_costs(Path *path1, Path *path2, int *startup_cost)
{
....
*startup_cost = (s == 0) ? t : s;

Why not *startup_cost = s, and let the caller decide which value it
wants to use? Or just return both from a single call?
...

return t;
}

To be fair, I don't see the compare_fuzzy_path_costs change saving
much time in the planner.
I would myself probably convert that function into two defines/inline
funcs, but that's just me.

#7 Robert Haas
robertmhaas@gmail.com
In reply to: Grzegorz Jaskiewicz (#6)
Re: add_path optimization

On Sun, Feb 1, 2009 at 3:25 PM, Grzegorz Jaskiewicz <gj@pointblue.com.pl> wrote:

I don't like the fact that you hardcoded that here. I know that you are
trying to fold a few calls into one here, but still... ugly.

Well, I think you'll find that using a dynamically sized data
structure destroys the possibility of squeezing any additional
performance out of this part of the code. The nice thing about
fixed-size data structures is that they cost essentially nothing to
stack-allocate; you just move the stack pointer and away you go. We
should in fact be looking for MORE places where we can avoid the use
of constructs like List, since the second-highest CPU hog in my tests
was AllocSetAlloc(), beaten out only by add_path(). With this patch
applied, AllocSetAlloc() moves up to first.

(It would really be rather better to include all the paths generated
in each pass of the loop in the call to add_similar_path(), but that
looked rather more complicated because we can't predict how many of
them there will be, and so adding a fixed-size data structure is not
so easy. Plus, the code would all have to be rewritten not to assume
that "continue" was the right way to move on to the next iteration of
the loop. What would potentially be better still is to try to figure
out which nested loop will be the winner without allocating all of the
NestPath nodes in the first place, but that didn't seem possible
without much more invasive changes, and it's not clear that you would
actually still have a winner by the time you got done beating on it.)

I am also somewhat mystified as to why using an array of size 5 to
hold up to 5 data structures allocated in nearly-consecutive lines of C
code would qualify as ugly (completely apart from the fact that it's a
clear performance win).

static int
compare_fuzzy_path_costs(Path *path1, Path *path2, int *startup_cost)
{
....
*startup_cost = (s == 0) ? t : s;

Why not *startup_cost = s, and let the caller decide which value it
wants to use? Or just return both from a single call?
...

return t;
}

You're essentially suggesting removing logic from
compare_fuzzy_path_costs() and duplicating it at the two call sites of
that function.
If there's a point to that, I don't see it. You might also take a
look at the widely used function compare_path_costs().

To be fair, I don't see the compare_fuzzy_path_costs change saving
much time in the planner.

Hmm, well I didn't either, but there's this handy tool called gprof
that you might want to try out. I wouldn't be wasting my time
patching this part of the code if it didn't make a difference, and in
fact if you do 10% of the amount of benchmarking that I did in the
process of creating this patch, you will find that it in fact does
make a difference.

I would myself probably convert that function into two defines/inline funcs,
but that's just me.

It's already static to that .c file, so the compiler likely will
inline it. In fact, I suspect you will find that removing the static
keyword from the implementation of that function in CVS HEAD is itself
sufficient to produce a small but measurable slowdown in planning of
large join trees, exactly because it will defeat inlining.

...Robert

#8 Grzegorz Jaskiewicz
gj@pointblue.com.pl
In reply to: Robert Haas (#7)
Re: add_path optimization

On 1 Feb 2009, at 21:35, Robert Haas wrote:

On Sun, Feb 1, 2009 at 3:25 PM, Grzegorz Jaskiewicz <gj@pointblue.com.pl> wrote:

I don't like the fact that you hardcoded that here. I know that you
are trying to fold a few calls into one here, but still... ugly.

Well, I think you'll find that using a dynamically sized data
structure destroys the possibility of squeezing any additional
performance out of this part of the code. The nice thing about
fixed-size data structures is that they cost essentially nothing to
stack-allocate; you just move the stack pointer and away you go. We
should in fact be looking for MORE places where we can avoid the use
of constructs like List, since the second-highest CPU hog in my tests
was AllocSetAlloc(), beaten out only by add_path(). With this patch
applied, AllocSetAlloc() moves up to first.

Well, true - but also, a statically allocated table without any named
size (a #define) and with no boundary check is bad as well.
I suppose this code is easy enough to let it be with your changes,
but I would still call it not pretty.
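
The guard being asked for here - a named constant plus a bounds check - is cheap to add; a minimal standalone sketch (all names invented, not taken from the patch):

```c
#include <assert.h>

#define MAX_SIMILAR_PATHS 5     /* hypothetical name for the batch size */

typedef struct SamplePath
{
    double startup_cost;
    double total_cost;
} SamplePath;

/* A stack-allocated batch of candidate paths: no heap traffic at all. */
typedef struct PathBatch
{
    SamplePath *paths[MAX_SIMILAR_PATHS];
    int         npaths;
} PathBatch;

static void
batch_add(PathBatch *batch, SamplePath *path)
{
    /* The producer site adds a small, statically known number of paths;
     * the assert documents and enforces that assumption. */
    assert(batch->npaths < MAX_SIMILAR_PATHS);
    batch->paths[batch->npaths++] = path;
}
```

In an optimized build the assert compiles away, so the bounds check costs nothing in production while still catching a future caller that outgrows the array.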

Hmm, well I didn't either, but there's this handy tool called gprof
that you might want to try out. I wouldn't be wasting my time
patching this part of the code if it didn't make a difference, and in
fact if you do 10% of the amount of benchmarking that I did in the
process of creating this patch, you will find that it in fact does
make a difference.

To be honest, I really haven't had time to run it down with your
patch and gprof. I believe you did that already, hence your
suggestion, right?
Actually, if you did profile PostgreSQL with a bunch of queries, I
wouldn't mind seeing the results. I don't know whether it makes
sense to send that to the list (I would think it does), but if it is
too big or something, you could send it to me in private.

It's already static to that .c file, so the compiler likely will
inline it. In fact, I suspect you will find that removing the static
keyword from the implementation of that function in CVS HEAD is itself
sufficient to produce a small but measurable slowdown in planning of
large join trees, exactly because it will defeat inlining.

That depends on many things, including whether optimizations are on or
not.
Since that function essentially consists of two ifs, it could easily
be turned into two separate inlines/macros, which would remove any
function-call overhead (stack setup, etc.).

#9 Robert Haas
robertmhaas@gmail.com
In reply to: Grzegorz Jaskiewicz (#8)
Re: add_path optimization

Well, true - but also, a statically allocated table without any named
size (a #define) and with no boundary check is bad as well.
I suppose this code is easy enough to let it be with your changes, but I
would still call it not pretty.

Well, it might merit a comment.

Actually, if you did profile PostgreSQL with a bunch of queries, I
wouldn't mind seeing the results. I don't know whether it makes sense to
send that to the list (I would think it does), but if it is too big or
something, you could send it to me in private.

What I'd really like to do is develop some tests based on a publicly
available dataset. Any suggestions?

...Robert

#10 Grzegorz Jaskiewicz
gj@pointblue.com.pl
In reply to: Robert Haas (#9)
Re: add_path optimization

On 2 Feb 2009, at 14:50, Robert Haas wrote:

Well, true - but also, a statically allocated table without any named
size (a #define) and with no boundary check is bad as well.
I suppose this code is easy enough to let it be with your changes, but I
would still call it not pretty.

Well, it might merit a comment.

:)

What I'd really like to do is develop some tests based on a publicly
available dataset. Any suggestions?

I would say it wouldn't hurt to do benchmarking/profiling regression
tests on real hardware - but someone will have to generate a quite
substantial amount of data, so we could test everything from small
queries up to 20+ join/sort/window-function/aggregation queries, with
various indexes and data types. The more real the data, the better.
I could make some of my stuff public - but without the lookup tables
(id -> some real data, like names, surnames, MAC addresses, etc.).

#11 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#1)
Re: add_path optimization

Robert Haas <robertmhaas@gmail.com> wrote:

running this 5 times each on several queries,
dropping top and bottom results.

Running a complex query (posted in previous threads; it runs about
300,000 times per day in a production web application), I got these
timings on a production-quality machine (4 quad-core CPU chips, that is
16 CPUs like this: Intel(R) Xeon(R) CPU X7350 @ 2.93GHz, 128 GB RAM, big
RAID with BBU). I ran EXPLAIN in each environment 5 times, tossed the
high and low, and averaged. The 8.4devel was from today's
(2009-02-02) snapshot, built the same way we did 8.3.5.

8.3.5, statistics target 10: 36.188 ms
8.4devel without patch, statistics target 100: 109.862 ms
8.4devel with patch, statistics target 100: 104.015 ms

After seeing that, I re-analyzed to eliminate the statistics target as
the cause of the 8.4 increase.

8.4devel with patch, statistics target 10: 99.421 ms

-Kevin

#12 Robert Haas
robertmhaas@gmail.com
In reply to: Kevin Grittner (#11)
Re: add_path optimization

On Mon, Feb 2, 2009 at 8:10 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:

Robert Haas <robertmhaas@gmail.com> wrote:

running this 5 times each on several queries,
dropping top and bottom results.

Running a complex query (posted in previous threads; it runs about
300,000 times per day in a production web application), I got these
timings on a production-quality machine (4 quad-core CPU chips, that is
16 CPUs like this: Intel(R) Xeon(R) CPU X7350 @ 2.93GHz, 128 GB RAM, big
RAID with BBU). I ran EXPLAIN in each environment 5 times, tossed the
high and low, and averaged. The 8.4devel was from today's
(2009-02-02) snapshot, built the same way we did 8.3.5.

8.3.5, statistics target 10: 36.188 ms
8.4devel without patch, statistics target 100: 109.862 ms
8.4devel with patch, statistics target 100: 104.015 ms

After seeing that, I re-analyzed to eliminate the statistics target as
the cause of the 8.4 increase.

8.4devel with patch, statistics target 10: 99.421 ms

Yikes! The impact of the patch is about what I'd expect, but the fact
that planning time has nearly tripled is... way poor. Can you repost
the query and the EXPLAIN output for 8.3.5 and CVS HEAD?

...Robert

#13 Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#12)
Re: add_path optimization

Running a complex query (posted in previous threads; it runs about
300,000 times per day in a production web application), I got these
timings on a production-quality machine (4 quad-core CPU chips, that is
16 CPUs like this: Intel(R) Xeon(R) CPU X7350 @ 2.93GHz, 128 GB RAM, big
RAID with BBU). I ran EXPLAIN in each environment 5 times, tossed the
high and low, and averaged. The 8.4devel was from today's
(2009-02-02) snapshot, built the same way we did 8.3.5.

8.3.5, statistics target 10: 36.188 ms
8.4devel without patch, statistics target 100: 109.862 ms
8.4devel with patch, statistics target 100: 104.015 ms

After seeing that, I re-analyzed to eliminate the statistics target as
the cause of the 8.4 increase.

8.4devel with patch, statistics target 10: 99.421 ms

Yikes! The impact of the patch is about what I'd expect, but the fact
that planning time has nearly tripled is... way poor. Can you repost
the query and the EXPLAIN output for 8.3.5 and CVS HEAD?

FYI, I retested my queries on REL8_3_STABLE and the results were not
all that different from CVS HEAD. So the problem is apparently
specific to something your query is doing that mine isn't, rather
than a general slowdown in planning (or else one of us goofed up the
testing).

...Robert

#14 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#12)
Re: add_path optimization

Robert Haas <robertmhaas@gmail.com> writes:

Yikes! The impact of the patch is about what I'd expect, but the fact
that planning time has nearly tripled is... way poor.

We're going to need to see the test case, because I don't see that in
some simple tests here.

regards, tom lane

#15 Stephen Frost
sfrost@snowman.net
In reply to: Tom Lane (#14)
Re: add_path optimization

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Yikes! The impact of the patch is about what I'd expect, but the fact
that planning time has nearly tripled is... way poor.

We're going to need to see the test case, because I don't see that in
some simple tests here.

A good data set, plus complex queries against it, might be the data from
the US Census, specifically the TIGER data and the TIGER geocoder. I've
been following this thread with the intention of putting together a
large-data test set, but I just haven't found the time yet. Right now
there are a lot of dependencies on PostGIS (which aren't really required
just to run the queries that pull out the street segments), which I
figure people would want ripped out. It'd also be nice to include the other
Census data besides just the road data.

If people really are interested, I'll see what I can put together. It's
*a lot* of data (around 23 GB total in PG), though perhaps just doing one
state would be enough for a good test; I keep the states split up
anyway using CHECK constraints. I don't think that would change this
case, though there might be cases where it does affect things.

Thanks,

Stephen

#16 Robert Haas
robertmhaas@gmail.com
In reply to: Stephen Frost (#15)
Re: add_path optimization

A good data set, plus complex queries against it, might be the data from
the US Census, specifically the TIGER data and the TIGER geocoder. I've
been following this thread with the intention of putting together a
large-data test set, but I just haven't found the time yet. Right now
there are a lot of dependencies on PostGIS (which aren't really required
just to run the queries that pull out the street segments), which I
figure people would want ripped out. It'd also be nice to include the other
Census data besides just the road data.

If people really are interested, I'll see what I can put together. It's
*a lot* of data (around 23 GB total in PG), though perhaps just doing one
state would be enough for a good test; I keep the states split up
anyway using CHECK constraints. I don't think that would change this
case, though there might be cases where it does affect things.

I'm interested, but I need maybe a 1GB data set, or smaller. The
thing that we are benchmarking is the planner, and planning times are
related to the complexity of the database and the accompanying
queries, not the raw volume of data. (It's not size that matters,
it's how you use it?) In fact, in a large database, one could argue
that there is less reason to care about the planner, because the
execution time will dominate anyway. I'm interested in complex
queries in web/OLTP type applications, where you need the query to be
planned and executed in 400 ms at the outside (and preferably less
than half of that).

...Robert

#17 Stephen Frost
sfrost@snowman.net
In reply to: Robert Haas (#16)
Re: add_path optimization

* Robert Haas (robertmhaas@gmail.com) wrote:

I'm interested, but I need maybe a 1GB data set, or smaller. The
thing that we are benchmarking is the planner, and planning times are
related to the complexity of the database and the accompanying
queries, not the raw volume of data. (It's not size that matters,
it's how you use it?) In fact, in a large database, one could argue
that there is less reason to care about the planner, because the
execution time will dominate anyway. I'm interested in complex
queries in web/OLTP type applications, where you need the query to be
planned and executed in 400 ms at the outside (and preferably less
than half of that).

We prefer that our geocoding be fast... :) Doing 1 state should give
you about the right size (half to 1G of data). I'll try to put together
a good test set this week.

Stephen

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#16)
Re: add_path optimization

Robert Haas <robertmhaas@gmail.com> writes:

I'm interested, but I need maybe a 1GB data set, or smaller. The
thing that we are benchmarking is the planner, and planning times are
related to the complexity of the database and the accompanying
queries, not the raw volume of data.

In fact, the only reason to care whether there is any data in the DB
*at all* is that you need some realistic content in pg_statistic.
So it should be possible to set up a planner test DB with very little
data bulk, which would surely make testing a lot less painful.

regards, tom lane

#19 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#18)
Re: add_path optimization

Tom Lane <tgl@sss.pgh.pa.us> wrote:

In fact, the only reason to care whether there is any data in the DB
*at all* is that you need some realistic content in pg_statistic.
So it should be possible to set up a planner test DB with very little
data bulk, which would surely make testing a lot less painful.

Can you suggest a query (or queries) which, together with a schema
dump, would give you enough to duplicate my results?

-Kevin

#20 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#13)
Re: add_path optimization

Robert Haas <robertmhaas@gmail.com> wrote:

FYI, I retested my queries on REL8_3_STABLE and the results were not
all that different from CVS HEAD. So the problem is apparently
specific to something your query is doing that mine isn't, rather
than a general slowdown in planning (or else one of us goofed up the
testing).

I know you said size doesn't matter, but just for the record, the ten
tables I loaded for this test put the database at 56G. I'm pulling
information together to share on this, but I was wondering: is there
any possibility that the tendency to use index scans in nested loops
(given the table sizes and the availability of useful indexes)
contributes to the difference?

Other possible factors:

Most keys are multi-column and include varchar-based data types.

Most columns are defined via domains.

(More info to follow.)

-Kevin

#21 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#14)
#22 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#21)
#23 Robert Haas
robertmhaas@gmail.com
In reply to: Kevin Grittner (#21)
#24 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#21)
#25 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#23)
#26 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#23)
#27 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#26)
#28 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#26)
#29 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#28)
#30 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#29)
#31 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#30)
#32 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#31)
#33 Jonah H. Harris
jonah.harris@gmail.com
In reply to: Bruce Momjian (#32)
#34 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#31)
#35 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#34)
#36 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#35)
#37 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#36)
#38 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#37)
#39 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#38)
#40 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#39)
#41 Robert Haas
robertmhaas@gmail.com
In reply to: Kevin Grittner (#39)
#42 David E. Wheeler
david@kineticode.com
In reply to: Robert Haas (#41)
#43 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#40)
#44 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#41)
#45 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#43)
#46 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#44)
#47 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#45)
#48 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#47)
#49 Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#12)
#50 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#49)
#51 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#49)
#52 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#46)
#53 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#1)
#54 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#53)
#55 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#54)
#56 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#55)
#57 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#52)
#58 Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#1)
#59 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#58)