Re: How much expensive are row level statistics?

Started by Merlin Moncure in December 2005 · 48 messages · pgsql-hackers
#1 Merlin Moncure
merlin.moncure@rcsonline.com

On Sun, Dec 11, 2005 at 11:53:36AM +0000, Carlos Benkendorf wrote:

I would like to use autovacuum, but isn't collecting row-level
statistics too expensive?

The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when block- and row-level statistics were enabled
versus when the statistics collector was disabled.

That approximately confirms my results, except that the penalty may even
be a little bit higher in the worst-case scenario. Row level stats hit
the hardest if you are doing 1 row at a time operations over a
persistent connection. Since my apps inherited this behavior from their
COBOL legacy, I keep them off. If your app follows the monolithic query
approach to problem solving (pull lots of rows in, edit them on the
client, and send them back), the penalty is basically zero.

Merlin

#2 Michael Fuhr
mike@fuhr.org
In reply to: Merlin Moncure (#1)

On Mon, Dec 12, 2005 at 01:33:27PM -0500, Merlin Moncure wrote:

The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when block- and row-level statistics were enabled
versus when the statistics collector was disabled.

That approximately confirms my results, except that the penalty may even
be a little bit higher in the worst-case scenario. Row level stats hit
the hardest if you are doing 1 row at a time operations over a
persistent connection.

That's basically how the application I tested works: it receives
data from a stream and performs whatever insert/update/delete
statements are necessary to update the database for each chunk of
data. Repeat a few thousand times.

--
Michael Fuhr

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Merlin Moncure (#1)

"Merlin Moncure" <merlin.moncure@rcsonline.com> writes:

The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when block- and row-level statistics were enabled
versus when the statistics collector was disabled.

That approximately confirms my results, except that the penalty may even
be a little bit higher in the worst-case scenario. Row level stats hit
the hardest if you are doing 1 row at a time operations over a
persistent connection.

IIRC, the only significant cost from enabling stats is the cost of
transmitting the counts to the stats collector, which is a cost
basically paid once at each transaction commit. So short transactions
will definitely have more overhead than longer ones. Even for a really
simple transaction, though, 30% seems high --- the stats code is
designed deliberately to minimize the penalty.

regards, tom lane

#4 Michael Fuhr
mike@fuhr.org
In reply to: Tom Lane (#3)

On Mon, Dec 12, 2005 at 06:01:01PM -0500, Tom Lane wrote:

IIRC, the only significant cost from enabling stats is the cost of
transmitting the counts to the stats collector, which is a cost
basically paid once at each transaction commit. So short transactions
will definitely have more overhead than longer ones. Even for a really
simple transaction, though, 30% seems high --- the stats code is
designed deliberately to minimize the penalty.

Now there goes Tom with his skeptical eye again, and here comes me
saying "oops" again. Further tests show that for this application
the killer is stats_command_string, not stats_block_level or
stats_row_level. Here are timings for the same set of operations
(thousands of insert, update, and delete statements in one transaction)
run under various settings:

stats_command_string = off
stats_block_level = off
stats_row_level = off
time: 2:09.46

stats_command_string = off
stats_block_level = on
stats_row_level = off
time: 2:12.28

stats_command_string = off
stats_block_level = on
stats_row_level = on
time: 2:14.38

stats_command_string = on
stats_block_level = off
stats_row_level = off
time: 2:50.58

stats_command_string = on
stats_block_level = on
stats_row_level = on
time: 2:53.76
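For reference, the toggles above are ordinary postgresql.conf settings (names as of the 8.x series; later releases folded them into track_activities and track_counts):

```
# current-command reporting, shown in pg_stat_activity
stats_command_string = off

# block- and row-level access counters (row-level is what autovacuum needs)
stats_block_level = on
stats_row_level = on
```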

[Wanders off, swearing that he ran these tests before and saw higher
penalties for block- and row-level statistics.]

--
Michael Fuhr

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Michael Fuhr (#4)

Michael Fuhr <mike@fuhr.org> writes:

Further tests show that for this application
the killer is stats_command_string, not stats_block_level or
stats_row_level.

I tried it with pgbench -c 10, and got these results:
41% reduction in TPS rate for stats_command_string
9% reduction in TPS rate for stats_block/row_level (any combination)

strace'ing a backend confirms my belief that stats_block/row_level send
just one stats message per transaction (at least for the relatively
small number of tables touched per transaction by pgbench). However
stats_command_string sends 14(!) --- there are seven commands per
pgbench transaction and each results in sending a <command> message and
later an <IDLE> message.

Given the rather lackadaisical way in which the stats collector makes
the data available, it seems like the backends are being much too
enthusiastic about posting their stats_command_string status
immediately. Might be worth thinking about how to cut back the
overhead by suppressing some of these messages.

regards, tom lane

#6 Michael Fuhr
mike@fuhr.org
In reply to: Tom Lane (#5)

On Mon, Dec 12, 2005 at 10:20:45PM -0500, Tom Lane wrote:

Given the rather lackadaisical way in which the stats collector makes
the data available, it seems like the backends are being much too
enthusiastic about posting their stats_command_string status
immediately. Might be worth thinking about how to cut back the
overhead by suppressing some of these messages.

Would a GUC setting akin to log_min_duration_statement be feasible?
Does the backend support, or could it be easily modified to support,
a mechanism that would post the command string after a configurable
amount of time had expired, and then continue processing the query?
That way admins could avoid the overhead of posting messages for
short-lived queries that nobody's likely to see in pg_stat_activity
anyway.

--
Michael Fuhr

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Michael Fuhr (#6)

Michael Fuhr <mike@fuhr.org> writes:

Does the backend support, or could it be easily modified to support,
a mechanism that would post the command string after a configurable
amount of time had expired, and then continue processing the query?

Not really, unless you want to add the overhead of setting a timer
interrupt for every query. Which is sort of counterproductive when
the motivation is to reduce overhead ...

(It might be more or less free if you have statement_timeout set, since
there would be a setitimer call anyway. But I don't think that's the
norm.)

regards, tom lane

#8 Kevin Brown
kevin@sysexperts.com
In reply to: Tom Lane (#7)

Tom Lane wrote:

Michael Fuhr <mike@fuhr.org> writes:

Does the backend support, or could it be easily modified to support,
a mechanism that would post the command string after a configurable
amount of time had expired, and then continue processing the query?

Not really, unless you want to add the overhead of setting a timer
interrupt for every query. Which is sort of counterproductive when
the motivation is to reduce overhead ...

(It might be more or less free if you have statement_timeout set, since
there would be a setitimer call anyway. But I don't think that's the
norm.)

Actually, it's probably not necessary to set the timer at the
beginning of every query. It's probably sufficient to just have it go
off periodically, e.g. once every second, and thus set it when the
timer goes off. And the running command wouldn't need to be re-posted
if it's the same as last time around. Turn off the timer if the
connection is idle now and was idle last time around (or not, if
there's no harm in having the timer running all the time), turn it on
again at the start of the next transaction.

In essence, the backend would be "polling" itself every second or so
and recording its state at that time, rather than on every
transaction.

Assuming that doing all that wouldn't screw something else up...

--
Kevin Brown kevin@sysexperts.com

#9 Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#7)

On Thu, 2005-12-15 at 19:06 -0500, Tom Lane wrote:

Michael Fuhr <mike@fuhr.org> writes:

Does the backend support, or could it be easily modified to support,
a mechanism that would post the command string after a configurable
amount of time had expired, and then continue processing the query?

Not really, unless you want to add the overhead of setting a timer
interrupt for every query. Which is sort of counterproductive when
the motivation is to reduce overhead ...

(It might be more or less free if you have statement_timeout set, since
there would be a setitimer call anyway. But I don't think that's the
norm.)

We could do the deferred send fairly easily. You need only set a timer
when stats_command_string = on, so we'd only do that when requested by
the admin. Overall, that would be a cheaper way of doing it than now.

However, I'm more inclined to the idea of a set of functions that allow
an administrator to retrieve the full SQL text executing in a backend,
with an option to return an EXPLAIN of the currently executing plan.
Right now, stats only gives you the first 1000 chars, so you're always
stuck if it's a big query. Plus we don't yet have a way of getting the
exact EXPLAIN of a running query (you can get close, but it could
differ).

Pull is better than push. Asking specific backends what they're doing
when you need to know will be efficient; asking them to send their
command strings, all of the time, deferred or not, will always be more
wasteful. Plus if you forgot to turn on stats_command_string before
execution, then you've no way of knowing anyhow.

Best Regards, Simon Riggs

#10 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#5)

Tom Lane wrote:

Michael Fuhr <mike@fuhr.org> writes:

Further tests show that for this application
the killer is stats_command_string, not stats_block_level or
stats_row_level.

I tried it with pgbench -c 10, and got these results:
41% reduction in TPS rate for stats_command_string

Whoa, 41%. That's just off the charts! What are we doing internally
that would cause that?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#11 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#5)
Stats collector performance improvement

Tom Lane wrote:

Michael Fuhr <mike@fuhr.org> writes:

Further tests show that for this application
the killer is stats_command_string, not stats_block_level or
stats_row_level.

I tried it with pgbench -c 10, and got these results:
41% reduction in TPS rate for stats_command_string
9% reduction in TPS rate for stats_block/row_level (any combination)

strace'ing a backend confirms my belief that stats_block/row_level send
just one stats message per transaction (at least for the relatively
small number of tables touched per transaction by pgbench). However
stats_command_string sends 14(!) --- there are seven commands per
pgbench transaction and each results in sending a <command> message and
later an <IDLE> message.

Given the rather lackadaisical way in which the stats collector makes
the data available, it seems like the backends are being much too
enthusiastic about posting their stats_command_string status
immediately. Might be worth thinking about how to cut back the
overhead by suppressing some of these messages.

I did some research on this because the numbers Tom quotes indicate there
is something wrong in the way we process stats_command_string
statistics.

I made a small test script:

if [ ! -f /tmp/pgstat.sql ]
then
	i=0
	while [ $i -lt 10000 ]
	do
		i=`expr $i + 1`
		echo "SELECT 1;"
	done > /tmp/pgstat.sql
fi

time sql test </tmp/pgstat.sql >/dev/null

This sends 10,000 "SELECT 1" queries to the backend and reports the
execution time. I found that without stats_command_string enabled, it
ran in 3.5 seconds. With stats_command_string enabled, it took 5.5
seconds, meaning the command string is causing a 57% slowdown. That is
way too much, considering that the SELECT 1 has to be sent from psql to
the backend, parsed, optimized, and executed, and the result returned to
psql, while stats_command_string only has to send a string to the stats
collector. There is _no_ way that collector should take 57% of the time
it takes to run the actual query.

With the test program, I tried various options. The basic code we have
sends a UDP packet to a statistics buffer process, which recv()'s the
packet, puts it into a memory queue buffer, and writes it to a pipe()
that is read by the statistics collector process which processes the
packet.

I tried various ways of speeding up the buffer and collector processes.
I found if I put a pg_usleep(100) in the buffer process the backend
speed was good, but packets were lost. What I found worked well was to
do multiple recv() calls in a loop. The previous code did a select(),
then perhaps a recv() and pipe write() based on the results of the
select(). This caused many small packets to be written to the pipe and
the pipe write overhead seems fairly large. The best fix I found was to
loop over the recv() call at most 25 times, collecting a group of
packets that can then be sent to the collector in one pipe write. The
recv() socket is non-blocking, so a zero return indicates there are no
more packets available. Patch attached.

This change reduced the stats_command_string time from 5.5 to 3.9
seconds, which is closer to the 3.5 seconds with stats_command_string
off.

A second improvement I discovered is that the statistics collector is
calling gettimeofday() for every packet received, so it can determine
the timeout for the select() call to write the flat file. I removed
that behavior and instead used setitimer() to issue a SIGINT every
500ms, which was the original behavior. This eliminates the
gettimeofday() call and makes the code cleaner. Second patch attached.


Attachments:

/pgpatches/stat (text/plain, +93 -91)
/pgpatches/stat2 (text/plain, +65 -74)
#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#11)
Re: Stats collector performance improvement

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I found if I put a pg_usleep(100) in the buffer process the backend
speed was good, but packets were lost. What I found worked well was to
do multiple recv() calls in a loop. The previous code did a select(),
then perhaps a recv() and pipe write() based on the results of the
select(). This caused many small packets to be written to the pipe and
the pipe write overhead seems fairly large. The best fix I found was to
loop over the recv() call at most 25 times, collecting a group of
packets that can then be sent to the collector in one pipe write. The
recv() socket is non-blocking, so a zero return indicates there are no
more packets available. Patch attached.

This seems incredibly OS-specific. How many platforms did you test it
on?

A more serious objection is that it will cause the stats machinery to
work very poorly if there isn't a steady stream of incoming messages.
You can't just sit on 24 messages until the 25th one arrives next week.

regards, tom lane

#13 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#12)
Re: Stats collector performance improvement

Tom Lane wrote:

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I found if I put a pg_usleep(100) in the buffer process the backend
speed was good, but packets were lost. What I found worked well was to
do multiple recv() calls in a loop. The previous code did a select(),
then perhaps a recv() and pipe write() based on the results of the
select(). This caused many small packets to be written to the pipe and
the pipe write overhead seems fairly large. The best fix I found was to
loop over the recv() call at most 25 times, collecting a group of
packets that can then be sent to the collector in one pipe write. The
recv() socket is non-blocking, so a zero return indicates there are no
more packets available. Patch attached.

This seems incredibly OS-specific. How many platforms did you test it
on?

Only mine. I am posting the patch so others can test it, of course.

A more serious objection is that it will cause the stats machinery to
work very poorly if there isn't a steady stream of incoming messages.
You can't just sit on 24 messages until the 25th one arrives next week.

You wouldn't. It exits the loop as soon as no packet is found, checks
the pipe write descriptor, and writes to it.

#14 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#11)
Re: Stats collector performance improvement

[ moving to -hackers ]

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I did some research on this because the numbers Tom quotes indicate there
is something wrong in the way we process stats_command_string
statistics.
[ ... proposed patch that seems pretty klugy to me ... ]

I wonder whether we shouldn't consider something more drastic, like
getting rid of the intermediate stats buffer process entirely.

The original design for the stats communication code was based on the
premise that it's better to drop data than to make backends wait on
the stats collector. However, as things have turned out I think this
notion is a flop: the people who are using stats at all want the stats
to be reliable. We've certainly seen plenty of gripes from people who
are unhappy that backend-exit messages got dropped, and anyone who's
using autovacuum would really like the tuple update counts to be pretty
solid too.

If we abandoned the unreliable-communication approach, could we build
something with less overhead?

regards, tom lane

#15 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Bruce Momjian (#11)
Re: Stats collector performance improvement

"Tom Lane" <tgl@sss.pgh.pa.us> wrote

I wonder whether we shouldn't consider something more drastic, like
getting rid of the intermediate stats buffer process entirely.

The original design for the stats communication code was based on the
premise that it's better to drop data than to make backends wait on
the stats collector. However, as things have turned out I think this
notion is a flop: the people who are using stats at all want the stats
to be reliable. We've certainly seen plenty of gripes from people who
are unhappy that backend-exit messages got dropped, and anyone who's
using autovacuum would really like the tuple update counts to be pretty
solid too.

AFAICS, if we make the stats counts fully reliable, it may hurt
performance dramatically. Consider: if we maintained
pgstat_count_heap_insert()/pgstat_count_heap_delete() exactly, we would
effectively get a replacement for count(*). To do so would, I believe,
add another point of lock contention on the target table's stats.

Regards,
Qingqing

#16 Hannu Krosing
hannu@tm.ee
In reply to: Tom Lane (#14)
Re: Stats collector performance improvement

On Mon, 2006-01-02 at 15:20, Tom Lane wrote:

[ moving to -hackers ]

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I did some research on this because the numbers Tom quotes indicate there
is something wrong in the way we process stats_command_string
statistics.
[ ... proposed patch that seems pretty klugy to me ... ]

I wonder whether we shouldn't consider something more drastic, like
getting rid of the intermediate stats buffer process entirely.

The original design for the stats communication code was based on the
premise that it's better to drop data than to make backends wait on
the stats collector. However, as things have turned out I think this
notion is a flop: the people who are using stats at all want the stats
to be reliable. We've certainly seen plenty of gripes from people who
are unhappy that backend-exit messages got dropped, and anyone who's
using autovacuum would really like the tuple update counts to be pretty
solid too.

If we abandoned the unreliable-communication approach, could we build
something with less overhead?

Well, at least it should be non-WAL, and probably non-fsync, at least
optionally. Maybe also use inserts plus an offline aggregator (instead
of updates) to avoid lock contention. Something that collects data in
blocks of local or per-backend shared memory in each backend and then
hands complete blocks to an aggregator process. Maybe use 2 alternating
blocks per backend: one for ongoing stats collection and another given
to the aggregator. This has a little time shift, but will deliver
accurate stats in the end. Things that need up-to-date stats (like
pg_stat_activity) should also look at (and lock) the ongoing
stats-collection blocks if needed (how do we know the *if*?) and
momentarily delay each backend process by doing so.

-----------------
Hannu

#17 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Qingqing Zhou (#15)
Re: Stats collector performance improvement

"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:

AFAICS, if we make the stats counts fully reliable, it may hurt
performance dramatically. Consider: if we maintained
pgstat_count_heap_insert()/pgstat_count_heap_delete() exactly, we would
effectively get a replacement for count(*).

Not at all. For one thing, the stats don't attempt to maintain
per-transaction state, so they don't have the MVCC issues of count(*).
I'm not suggesting any fundamental changes in what is counted or when.

The two compromises that were made in the original stats design to make
it fast were (1) stats updates lag behind reality, and (2) some updates
may be missed entirely. Now that we have a couple of years' field
experience with the code, it seems that (1) is acceptable for real usage
but (2) not so much. And it's not even clear that we are buying any
performance gain from (2), considering that it's adding the overhead of
passing the data through an extra process.

regards, tom lane

#18 Jan Wieck
JanWieck@Yahoo.com
In reply to: Tom Lane (#14)
Re: Stats collector performance improvement

On 1/2/2006 3:20 PM, Tom Lane wrote:

[ moving to -hackers ]

Bruce Momjian <pgman@candle.pha.pa.us> writes:

I did some research on this because the numbers Tom quotes indicate there
is something wrong in the way we process stats_command_string
statistics.
[ ... proposed patch that seems pretty klugy to me ... ]

I wonder whether we shouldn't consider something more drastic, like
getting rid of the intermediate stats buffer process entirely.

The original design for the stats communication code was based on the
premise that it's better to drop data than to make backends wait on

The original design was geared towards searching for useless/missing
indexes and tuning activity like that. This never happened, but instead
people tried to use it as a reliable debugging or access statistics aid
... which is fine but not what it originally was intended for.

So yes, I think looking at what it usually is used for, a message
passing system like SysV message queues (puke) or similar would do a
better job.

Jan

the stats collector. However, as things have turned out I think this
notion is a flop: the people who are using stats at all want the stats
to be reliable. We've certainly seen plenty of gripes from people who
are unhappy that backend-exit messages got dropped, and anyone who's
using autovacuum would really like the tuple update counts to be pretty
solid too.

If we abandoned the unreliable-communication approach, could we build
something with less overhead?

regards, tom lane

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #

#19 Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#17)
Re: Stats collector performance improvement

On Mon, 2006-01-02 at 16:48 -0500, Tom Lane wrote:

The two compromises that were made in the original stats design to make
it fast were (1) stats updates lag behind reality, and (2) some updates
may be missed entirely. Now that we have a couple of years' field
experience with the code, it seems that (1) is acceptable for real usage
but (2) not so much.

We decided that the stats update had to occur during execution, in case
the statement aborted and row versions were not notified. That means we
must notify things as they happen, yet could use a reliable queuing
system that could suffer a delay in the stats becoming available.

But how often do we lose a backend? Could we simply buffer that a little
better? I.e., don't send a message to stats unless we have altered at
least 10 rows. So we would buffer based upon the importance of the
message, not the actual size of the message. That way singleton
statements won't generate the same stats traffic, but we risk losing a
buffer's worth of row changes should we crash; everything would still
work if we lost a few small row change notifications.

We can also save lots of cycles on the current statement overhead, which
is currently the worst part of the stats, performance-wise. That
definitely needs redesign. AFAICS we only ever need to know the SQL
statement via the stats system if the statement has been running for
more than a few minutes - the main use case is for an admin to be able
to diagnose a rogue or hung statement. Pushing the statement to stats
every time is just a big overhead. That suggests we should either have a
pull or a deferred push (longer-than-X-secs) approach.

Best Regards, Simon Riggs

#20 Simon Riggs
simon@2ndQuadrant.com
In reply to: Bruce Momjian (#11)
Re: Stats collector performance improvement

On Mon, 2006-01-02 at 13:40 -0500, Bruce Momjian wrote:

This change reduced the stats_command_string time from 5.5 to 3.9, which
is closer to the 3.5 seconds with stats_command_string off.

Excellent work, port specific or not.

Best Regards, Simon Riggs

#21 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Simon Riggs (#19)
#22 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#11)
#23 Hannu Krosing
hannu@tm.ee
In reply to: Simon Riggs (#19)
#24 Bruce Momjian
bruce@momjian.us
In reply to: Jim Nasby (#21)
#25 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#11)
#26 Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#23)
#27 Hannu Krosing
hannu@tm.ee
In reply to: Bruce Momjian (#26)
#28 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#11)
#29 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Bruce Momjian (#11)
#30 Larry Rosenman
ler@lerctr.org
In reply to: Bruce Momjian (#28)
#31 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Qingqing Zhou (#29)
#32 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#31)
#33 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Bruce Momjian (#28)
#34 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Bruce Momjian (#28)
#35 Josh Berkus
josh@agliodbs.com
In reply to: Bruce Momjian (#32)
#36 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#35)
#37 Bruce Momjian
bruce@momjian.us
In reply to: Josh Berkus (#35)
#38 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Bruce Momjian (#11)
#39 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Qingqing Zhou (#38)
#40 Bruce Momjian
bruce@momjian.us
In reply to: Qingqing Zhou (#38)
#41 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Qingqing Zhou (#38)
#42 Bruce Momjian
bruce@momjian.us
In reply to: Qingqing Zhou (#41)
#43 Qingqing Zhou
zhouqq@cs.toronto.edu
In reply to: Qingqing Zhou (#41)
#44 Bruce Momjian
bruce@momjian.us
In reply to: Qingqing Zhou (#43)
#45 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Bruce Momjian (#44)
#46 Bruce Momjian
bruce@momjian.us
In reply to: Stefan Kaltenbrunner (#45)
#47 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Bruce Momjian (#46)
#48 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#44)