Less than ideal error reporting in pg_stat_statements

Started by Jim Nasby, over 10 years ago, 49 messages, pgsql-hackers
#1 Jim Nasby
Jim.Nasby@BlueTreble.com

A client was getting some hard to diagnose out of memory errors. What
made this especially confusing was that there was no context reported at
all, other than the (enormous) statement that triggered the error.

At first I thought the lack of context indicated a palloc had failed
during ereport() (since we apparently just toss the previous error when
that happens), but it turns out there's some error reporting in
pg_stat_statements that's less than ideal. Attached patch fixes, though
I'm not sure if %lld is portable or not.

I'll also argue that this is a bug and should be backpatched, but I'm
not hell-bent on that.

At the same time I looked for other messages that don't explicitly
reference pg_stat_statements; the only others are in
pg_stat_statements_internal() complaining about being called in an
inappropriate function context. Presumably at that point there's a
reasonable error context stack so I didn't bother with them.

This still seems a bit fragile to me though. Anyone working in here has
to notice that most every errmsg mentions pg_stat_statements and decide
there's a good reason for that. ISTM it'd be better to push a new
ErrorContextCallback onto the stack any time we enter the module. If
folks think that's a good idea I'll pursue it as a separate patch.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments:

patch.diff (text/plain, +15/-5)
#2 David Rowley
dgrowleyml@gmail.com
In reply to: Jim Nasby (#1)
Re: Less than ideal error reporting in pg_stat_statements

On 23 September 2015 at 10:16, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Attached patch fixes, though I'm not sure if %lld is portable or not.

I think you could probably use INT64_FORMAT, and cast the stat.st_size to
int64 too.

There's an example in FileRead() in fd.c

Regards

David Rowley

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#3 Peter Geoghegan
pg@heroku.com
In reply to: Jim Nasby (#1)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 3:16 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

At first I thought the lack of context indicated a palloc had failed during
ereport() (since we apparently just toss the previous error when that
happens), but it turns out there's some error reporting in
pg_stat_statements that's less than ideal. Attached patch fixes, though I'm
not sure if %lld is portable or not.

+ ereport(LOG,
+              (errcode(ERRCODE_OUT_OF_MEMORY),
+               errmsg("out of memory attempting to pg_stat_statement file"),
+               errdetail("file \"%s\": size %lld", PGSS_TEXT_FILE,
stat.st_size)));

Uh, what?

I'm not opposed to this basic idea, but I think the message should be
reworded, and that the presence of two separate ereport() call sites
like the above is totally unnecessary. The existing MaxAllocSize check
is just defensive; no user-visible distinction needs to be made.

--
Peter Geoghegan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Peter Geoghegan (#3)
Re: Less than ideal error reporting in pg_stat_statements

On 9/22/15 5:58 PM, Peter Geoghegan wrote:

On Tue, Sep 22, 2015 at 3:16 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

At first I thought the lack of context indicated a palloc had failed during
ereport() (since we apparently just toss the previous error when that
happens), but it turns out there's some error reporting in
pg_stat_statements that's less than ideal. Attached patch fixes, though I'm
not sure if %lld is portable or not.

+ ereport(LOG,
+              (errcode(ERRCODE_OUT_OF_MEMORY),
+               errmsg("out of memory attempting to pg_stat_statement file"),
+               errdetail("file \"%s\": size %lld", PGSS_TEXT_FILE,
stat.st_size)));

Uh, what?

Oops. I'll fix that and address David's concern tomorrow.

I'm not opposed to this basic idea, but I think the message should be
reworded, and that the presence of two separate ereport() call sites
like the above is totally unnecessary. The existing MaxAllocSize check
is just defensive; no user-visible distinction needs to be made.

I disagree. If you're running this on a 200+GB machine with plenty of
free memory and get that error you're going to be scratching your head.
I can see an argument for using the OOM SQLSTATE, but treating an
artificial limit the same as a system error seems pretty bogus.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: David Rowley (#2)
Re: Less than ideal error reporting in pg_stat_statements

David Rowley <david.rowley@2ndquadrant.com> writes:

On 23 September 2015 at 10:16, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Attached patch fixes, though I'm not sure if %lld is portable or not.

It is not.

I think you could probably use INT64_FORMAT,

Not in a message you expect to be translatable.

There are ways around that, but TBH I do not think that including the file
size in the errdetail is valuable enough to be worth the trouble. I'd
just leave it out. "insufficient memory to load statement file" seems
quite enough.

regards, tom lane


#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#3)
Re: Less than ideal error reporting in pg_stat_statements

Peter Geoghegan <pg@heroku.com> writes:

I'm not opposed to this basic idea, but I think the message should be
reworded, and that the presence of two separate ereport() call sites
like the above is totally unnecessary. The existing MaxAllocSize check
is just defensive; no user-visible distinction needs to be made.

I wonder whether the real problem here is failure to truncate statement
texts to something sane. Do we really need to record the whole text of
multi-megabyte statements? Especially if doing so could render the entire
feature nonfunctional?

regards, tom lane


#7 Peter Geoghegan
pg@heroku.com
In reply to: Tom Lane (#6)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 4:40 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

I wonder whether the real problem here is failure to truncate statement
texts to something sane. Do we really need to record the whole text of
multi-megabyte statements? Especially if doing so could render the entire
feature nonfunctional?

I recently encountered a 9.4 customer database that had an insanely
large query text stored by pg_stat_statements, apparently created as
part of a process of kicking the tires of their new installation. I
don't know how large it actually was, but it caused psql to stall for
over 10 seconds. Insane queries happen, so truncating query text could
conceal the extent of how unreasonable a query is.

I think that the real problem here is that garbage collection needs to
deal with OOM more appropriately. That's the only way there could be a
problem with an in-flight query as opposed to a query that looks at
pg_stat_statements, which seems to be Nasby's complaint.

My guess is that this very large query involved a very large number of
constants, possibly contained inside an " IN ( )". Slight variants of
the same query, which a human would probably consider to be equivalent,
have caused artificial pressure on garbage collection.

--
Peter Geoghegan


#8 Peter Geoghegan
pg@heroku.com
In reply to: Peter Geoghegan (#7)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 5:01 PM, Peter Geoghegan <pg@heroku.com> wrote:

My guess is that this very large query involved a very large number of
constants, possibly contained inside an " IN ( )". Slight variants of
the same query, that a human would probably consider to be equivalent
have caused artificial pressure on garbage collection.

I could write a patch to do compaction in-place. The basic idea is
that there'd be a slow path in the event of an OOM-like condition
(i.e. an actual OOM, or when the MaxAllocSize limitation is violated)
that first scans through entries, and determines the exact required
buffer size for every non-garbage query text. As this
iteration/scanning occurs, the entries' offsets in shared memory are
rewritten assuming that the first entry starts at 0, the second at 0 +
length of first + 1 (for NUL sentinel byte), and so on. We then
allocate a minimal buffer, lseek() and copy into the buffer, so that
the expectation of finding query texts at those offsets is actually
met. Finally, unlink() old file, create new one, and write new buffer
out. I think I wanted to do things that way originally.

If even that exact, minimal buffer size cannot be allocated, then ISTM
that the user is out of luck. That will be very rare in practice, but
should it occur we log the issue and give up on storing query texts
entirely, so as to avoid thrashing while still giving the user
something to go on. This new code path is never hit until a garbage
collection is required, so hopefully the garbage created was not a
pathological issue with a weird workload, but rather something that
will not recur for a very long time.

That seems to me to be better than getting into the business of
deciding how long of a query text is too long.

I'm doubtful that this had anything to do with MaxAllocSize. You'd
certainly need a lot of bloat to be affected by that in any way. I
wonder how high pg_stat_statements.max was set to on this system, and
how long each query text was on average.

--
Peter Geoghegan


#9 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Geoghegan (#7)
Re: Less than ideal error reporting in pg_stat_statements

Peter Geoghegan wrote:

My guess is that this very large query involved a very large number of
constants, possibly contained inside an " IN ( )". Slight variants of
the same query, that a human would probably consider to be equivalent
have caused artificial pressure on garbage collection.

So if I have multiple queries like

SELECT foo FROM bar WHERE baz IN (a, b)
SELECT foo FROM bar WHERE baz IN (a, b, c)

they are not normalized down to the same? That seems odd.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#10 Peter Geoghegan
pg@heroku.com
In reply to: Alvaro Herrera (#9)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 6:55 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

So if I have multiple queries like

SELECT foo FROM bar WHERE baz IN (a, b)
SELECT foo FROM bar WHERE baz IN (a, b, c)

they are not normalized down to the same? That seems odd.

Yes, although in practice it's usually down to a variable number of
constants appearing within the "IN ( )", which is more odd IMV.

We discussed changing this before. I don't have strong feelings either way.

--
Peter Geoghegan


#11 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Peter Geoghegan (#8)
Re: Less than ideal error reporting in pg_stat_statements

On 9/22/15 8:01 PM, Peter Geoghegan wrote:

I'm doubtful that this had anything to do with MaxAllocSize. You'd
certainly need a lot of bloat to be affected by that in any way. I
wonder how high pg_stat_statements.max was set to on this system, and
how long each query text was on average.

max was set to 10000. I don't know about average query text size, but
the command that was causing the error was a very large number of
individual INSERT ... VALUES statements all in one command.

The machine had plenty of free memory and no ulimit, so I don't see how
this could have been anything but MaxAllocSize, unless there's some
other failure mode in malloc I don't know about.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


#12 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Jim Nasby (#4)
Re: Less than ideal error reporting in pg_stat_statements

On 9/22/15 6:27 PM, Jim Nasby wrote:

+ ereport(LOG,
+              (errcode(ERRCODE_OUT_OF_MEMORY),
+               errmsg("out of memory attempting to pg_stat_statement
file"),
+               errdetail("file \"%s\": size %lld", PGSS_TEXT_FILE,
stat.st_size)));

Uh, what?

Oops. I'll fix that and address David's concern tomorrow.

New patch attached. I stripped the size reporting out and simplified the
conditionals a bit as well.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments:

patch.diff (text/plain, +14/-5)
#13 Marti Raudsepp
marti@juffo.org
In reply to: Peter Geoghegan (#7)
Re: Less than ideal error reporting in pg_stat_statements

On Wed, Sep 23, 2015 at 3:01 AM, Peter Geoghegan <pg@heroku.com> wrote:

I think that the real problem here is that garbage collection needs to
deal with OOM more appropriately.

+1

I've also been seeing lots of log messages saying "LOG: out of
memory" on a server that's hosting development databases. I put off
debugging this until now because it didn't seem to have any adverse
effects on the system.

The file on my system is currently 5.1GB (!). I don't know how it got
there -- under normal circumstances we don't have any enormous
queries, but perhaps our application bugs during development triggered
that.

The configuration on this system is pg_stat_statements.max = 10000 and
pg_stat_statements.track = all.

----
The comment near gc_qtexts says:
* This won't be called often in the typical case, since it's likely that
* there won't be too much churn, and besides, a similar compaction process
* occurs when serializing to disk at shutdown or as part of resetting.
* Despite this, it seems prudent to plan for the edge case where the file
* becomes unreasonably large, with no other method of compaction likely to
* occur in the foreseeable future.
[...]
* Load the old texts file. If we fail (out of memory, for instance) just
* skip the garbage collection.

So, as I understand it: if the system runs low on memory for an
extended period, and/or the file grows beyond 1GB (MaxAlloc), garbage
collection stops entirely, meaning it starts leaking disk space until
a manual intervention.

It's very frustrating when debugging aids cause further problems on a
system. If the in-line compaction doesn't materialize (or it's decided
not to backport it), I would propose instead to add a test to
pgss_store() to avoid growing the file beyond MaxAlloc (or perhaps
even a lower limit). Surely dropping some statistics is better than
this pathology.

Regards,
Marti


#14 Peter Geoghegan
pg@heroku.com
In reply to: Marti Raudsepp (#13)
Re: Less than ideal error reporting in pg_stat_statements

On Fri, Sep 25, 2015 at 8:51 AM, Marti Raudsepp <marti@juffo.org> wrote:

I've also been seeing lots of log messages saying "LOG: out of
memory" on a server that's hosting development databases. I put off
debugging this until now because it didn't seem to have any adverse
effects on the system.

The file on my system is currently 5.1GB (!). I don't know how it got
there -- under normal circumstances we don't have any enormous
queries, but perhaps our application bugs during development triggered
that.

It could be explained by legitimate errors during planning, for
example. The query text is still stored.

So, as I understand it: if the system runs low on memory for an
extended period, and/or the file grows beyond 1GB (MaxAlloc), garbage
collection stops entirely, meaning it starts leaking disk space until
a manual intervention.

I don't think that there is much more to discuss here: this is a bug.
I will try and write a patch to fix it shortly. It will be
non-trivial, and I'm quite busy right now, so it might take a while. A
short-term remediation is to call pg_stat_statements_reset() on
systems affected like this.

--
Peter Geoghegan


#15 Peter Geoghegan
pg@heroku.com
In reply to: Peter Geoghegan (#14)
Re: Less than ideal error reporting in pg_stat_statements

On Fri, Sep 25, 2015 at 11:37 AM, Peter Geoghegan <pg@heroku.com> wrote:

So, as I understand it: if the system runs low on memory for an
extended period, and/or the file grows beyond 1GB (MaxAlloc), garbage
collection stops entirely, meaning it starts leaking disk space until
a manual intervention.

I don't think that there is much more to discuss here: this is a bug.
I will try and write a patch to fix it shortly.

I should add that it only leaks disk space at the rate at which new
queries are observed that are not stored within pg_stat_statements
(due to an error originating in the planner or something -- they
remain "sticky" entries). The reason we've not heard far more problem
reports is that it usually never gets out of hand in the first place.

Come to think of it, you'd have to repeatedly have new queries that
are never "unstickied"; if you have substantively the same query as an
error-during-planning "sticky" entry, it will still probably be able
to use that existing entry (it will become "unstickied" by this second
execution of what the fingerprinting logic considers to be the same
query).

In short, you have to have just the right workload to hit the bug.

--
Peter Geoghegan


#16 Peter Geoghegan
pg@heroku.com
In reply to: Peter Geoghegan (#8)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 6:01 PM, Peter Geoghegan <pg@heroku.com> wrote:

I'm doubtful that this had anything to do with MaxAllocSize. You'd
certainly need a lot of bloat to be affected by that in any way. I
wonder how high pg_stat_statements.max was set to on this system, and
how long each query text was on average.

To clarify: I think it probably starts off not having much to do with
MaxAllocSize. However, it might well be the case that transient memory
pressure results in the problematic code path hitting the MaxAllocSize
limitation. So it starts with malloc() returning NULL, which
temporarily blocks garbage collection, but in bad cases the
MaxAllocSize limitation becomes a permanent barrier to performing a
garbage collection (without a manual intervention).

--
Peter Geoghegan


#17 Peter Geoghegan
pg@heroku.com
In reply to: Marti Raudsepp (#13)
Re: Less than ideal error reporting in pg_stat_statements

On Fri, Sep 25, 2015 at 8:51 AM, Marti Raudsepp <marti@juffo.org> wrote:

I've also been seeing lots of log messages saying "LOG: out of
memory" on a server that's hosting development databases. I put off
debugging this until now because it didn't seem to have any adverse
effects on the system.

That's unfortunate.

It's very frustrating when debugging aids cause further problems on a
system. If the in-line compaction doesn't materialize (or it's decided
not to backport it), I would propose instead to add a test to
pgss_store() to avoid growing the file beyond MaxAlloc (or perhaps
even a lower limit). Surely dropping some statistics is better than
this pathology.

I heard a customer complaint today that seems similar. A Heroku
customer attempted a migration from MySQL to PostgreSQL in this
manner:

mysqldump | psql

This at least worked well enough to cause problems for
pg_stat_statements (some queries were not rejected by the parser, I
suppose).

While I'm opposed to arbitrary limits for tools like
pg_stat_statements, I think the following defensive measure might be
useful on top of better OOM defenses:

Test for query text length within pgss_store() where a pgssJumbleState
is passed by caller (the post-parse-analysis hook caller -- not
executor hook caller that has query costs to store). If it is greater
than, say, 10 * Max(ASSUMED_MEDIAN_INIT, pgss->cur_median_usage), do
not bother to normalize the query text, or store anything at all.
Simply return.

Any entry we create at that point will be a sticky entry (pending
actually completing execution without the entry being evicted), and it
doesn't seem worth worrying about normalizing very large query texts,
which tend to be qualitatively similar to utility statements from the
user's perspective anyway. Besides, query text normalization always
occurred on a best-effort basis. It's not very uncommon for
pg_stat_statements to fail to normalize query texts today for obscure
reasons.

This would avoid storing very large query texts again and again when a
very large DML statement repeatedly fails (e.g. due to a data
integration task that can run into duplicate violations) and is
repeatedly rerun with tweaks. Maybe there is a substantively distinct
table in each case, because a CREATE TABLE is performed as part of the
same transaction, so each failed query gets a new pg_stat_statements
entry, and a new, large query text.

I should probably also assume that sticky entries have a generic
length (existing pgss->mean_query_len) for the purposes of
accumulating query text length within entry_dealloc(). They should not
get to contribute to median query length (which throttles garbage
collection to prevent thrashing).

Anyone have an opinion on that?

--
Peter Geoghegan


#18 Peter Geoghegan
pg@heroku.com
In reply to: Peter Geoghegan (#8)
Re: Less than ideal error reporting in pg_stat_statements

On Tue, Sep 22, 2015 at 6:01 PM, Peter Geoghegan <pg@heroku.com> wrote:

On Tue, Sep 22, 2015 at 5:01 PM, Peter Geoghegan <pg@heroku.com> wrote:

My guess is that this very large query involved a very large number of
constants, possibly contained inside an " IN ( )". Slight variants of
the same query, that a human would probably consider to be equivalent
have caused artificial pressure on garbage collection.

I could write a patch to do compaction in-place.

In the end, I decided on a simpler approach to fixing this general
sort of problem with the attached patch. See commit message for
details.

I went this way not because compaction in-place was necessarily a bad
idea, but because I think that a minimal approach will work just as
well in real world cases.

It would be nice to get this committed before the next point releases
are tagged, since I've now heard a handful of complaints like this.

--
Peter Geoghegan

Attachments:

0001-Fix-pg_stat_statements-garbage-collection-bugs.patch (text/x-patch, +46/-8)
#19 Peter Geoghegan
pg@heroku.com
In reply to: Peter Geoghegan (#18)
Re: Less than ideal error reporting in pg_stat_statements

On Fri, Oct 2, 2015 at 2:04 PM, Peter Geoghegan <pg@heroku.com> wrote:

It would be nice to get this committed before the next point releases
are tagged, since I've now heard a handful of complaints like this.

I grepped for SIZE_MAX to make sure it was something available on all
supported platforms, since it's C99. What I originally took to be code
turns out to actually be a code-like comment within aset.c.

I think that SIZE_MAX should be replaced by MaxAllocHugeSize before
the patch is committed. That should be perfectly portable.

--
Peter Geoghegan


#20 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#18)
Re: Less than ideal error reporting in pg_stat_statements

Peter Geoghegan <pg@heroku.com> writes:

It would be nice to get this committed before the next point releases
are tagged, since I've now heard a handful of complaints like this.

I'm not too impressed with this bit:

 	/* Allocate buffer; beware that off_t might be wider than size_t */
-	if (stat.st_size <= MaxAllocSize)
+	if (stat.st_size <= SIZE_MAX)
 		buf = (char *) malloc(stat.st_size);

because there are no, zero, not one uses of SIZE_MAX in our code today,
and I do not see such a symbol required by the POSIX v2 spec either.
Perhaps this will work, but you're asking us to introduce a brand new
portability hazard just hours before a wrap deadline. That is not
happening.

Other than that, this seems roughly sane, though I've not read it in
detail or tested it. Does anyone have an objection to trying to squeeze
in something along this line?

regards, tom lane


In reply to: Tom Lane (#20)
#22 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#19)
In reply to: Tom Lane (#22)
In reply to: Peter Geoghegan (#23)
In reply to: Peter Geoghegan (#23)
In reply to: Peter Geoghegan (#25)
#27 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#26)
In reply to: Tom Lane (#27)
In reply to: Tom Lane (#27)
#30 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#29)
In reply to: Tom Lane (#30)
#32 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#31)
In reply to: Tom Lane (#32)
#34 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#33)
In reply to: Peter Geoghegan (#33)
#36 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#33)
#37 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#35)
#38 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#37)
#39 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#38)
In reply to: Tom Lane (#36)
In reply to: Tom Lane (#37)
In reply to: Jim Nasby (#11)
#43 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#42)
#44 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#43)
#45 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andrew Dunstan (#44)
#46 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#45)
In reply to: Tom Lane (#46)
#48 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Tom Lane (#46)
#49 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Tom Lane (#39)