Read Uncommitted

Started by Simon Riggs · over 6 years ago · 34 messages · pgsql-hackers
#1Simon Riggs
simon@2ndQuadrant.com

I present a patch to allow READ UNCOMMITTED that is simple, useful and
efficient. This was previously thought to have no useful definition within
PostgreSQL, though I have identified a use case for diagnostics and
recovery that merits adding a short patch to implement it.

My docs for this are copied here:

In <productname>PostgreSQL</productname>'s <acronym>MVCC</acronym>
architecture, readers are not blocked by writers, so in general
you should have no need for this transaction isolation level.

In general, read uncommitted will return inconsistent results and
wrong answers. If you look at the changes made by a transaction
while it continues to make changes then you may get partial results
from queries, or you may miss index entries that haven't yet been
written. However, if you are reading transactions that are paused
at the end of their execution for whatever reason then you can
see a consistent result.

The main use case for this transaction isolation level is for
investigating or recovering data. Examples of this would be when
inspecting the writes made by a locked or hanging transaction, when
you are running queries on a standby node that is currently paused,
such as when a standby node has halted at a recovery target with
<literal>recovery_target_inclusive = false</literal> or when you
need to inspect changes made by an in-doubt prepared transaction to
decide whether to commit or abort that transaction.

In <productname>PostgreSQL</productname> read uncommitted mode gives
a consistent snapshot of the currently running transactions at the
time the snapshot was taken. Transactions starting after that time
will not be visible, even though they are not yet committed.
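The snapshot rule described above can be sketched as a toy visibility check. This is a hypothetical illustration in Python, not the patch's actual C code; the names `visible_read_committed` and `visible_read_uncommitted` are invented for the sketch.

```python
# Toy model of the visibility rule described above: under the proposed
# READ UNCOMMITTED, transactions that were already running when the
# snapshot was taken become visible, but transactions that started
# after the snapshot remain invisible even though they are uncommitted.

def visible_read_committed(xid, snapshot):
    """Normal MVCC: only transactions committed before the snapshot."""
    return xid in snapshot["committed"]

def visible_read_uncommitted(xid, snapshot):
    """Sketched rule: committed transactions plus those in progress
    at snapshot time; transactions starting later stay invisible."""
    return xid in snapshot["committed"] or xid in snapshot["in_progress"]

snapshot = {
    "committed": {100, 101},    # committed before the snapshot
    "in_progress": {102, 103},  # running when the snapshot was taken
}

print(visible_read_uncommitted(102, snapshot))  # True: was running
print(visible_read_uncommitted(104, snapshot))  # False: started later
```

The point of the second rule is that the result is still a *consistent* snapshot: the set of visible transactions is fixed at snapshot time rather than drifting as new writers arrive.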

This is a new and surprising thought, so please review the attached patch.

Please notice that almost all of the infrastructure already exists to
support this, so this patch does very little. It avoids additional locking
on main execution paths and as far as I am aware, does not break anything.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

Attachments:

read_uncommitted.v1.patch (application/octet-stream, +98 −11)
#2Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Simon Riggs (#1)
Re: Read Uncommitted

On 18.12.2019 13:01, Simon Riggs wrote:

I present a patch to allow READ UNCOMMITTED that is simple, useful and
efficient.  This was previously thought to have no useful definition
within PostgreSQL, though I have identified a use case for diagnostics
and recovery that merits adding a short patch to implement it.

My docs for this are copied here:

    In <productname>PostgreSQL</productname>'s <acronym>MVCC</acronym>

    architecture, readers are not blocked by writers, so in general
    you should have no need for this transaction isolation level.

    In general, read uncommitted will return inconsistent results and
    wrong answers. If you look at the changes made by a transaction
    while it continues to make changes then you may get partial results
    from queries, or you may miss index entries that haven't yet been
    written. However, if you are reading transactions that are paused
    at the end of their execution for whatever reason then you can
    see a consistent result.

    The main use case for this transaction isolation level is for
    investigating or recovering data. Examples of this would be when
    inspecting the writes made by a locked or hanging transaction, when
    you are running queries on a standby node that is currently paused,
    such as when a standby node has halted at a recovery target with
    <literal>recovery_target_inclusive = false</literal> or when you
    need to inspect changes made by an in-doubt prepared transaction to
    decide whether to commit or abort that transaction.

    In <productname>PostgreSQL</productname> read uncommitted mode gives
    a consistent snapshot of the currently running transactions at the
    time the snapshot was taken. Transactions starting after that time
    will not be visible, even though they are not yet committed.

This is a new and surprising thought, so please review the attached patch.

Please notice that almost all of the infrastructure already exists to
support this, so this patch does very little. It avoids additional
locking on main execution paths and as far as I am aware, does not
break anything.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

As far as I understand, with the "read uncommitted" policy we can see two
versions of the same tuple if it was updated by two transactions, both of
which started before us and committed during our table traversal under
the "read uncommitted" policy. Certainly "read uncommitted" means we are
prepared to get inconsistent results, but is it really acceptable to see
multiple versions of the same tuple?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#1)
Re: Read Uncommitted

Simon Riggs <simon@2ndquadrant.com> writes:

I present a patch to allow READ UNCOMMITTED that is simple, useful and
efficient.

Won't this break entirely the moment you try to read a tuple containing
toasted-out-of-line values? There's no guarantee that the toast-table
entries haven't been vacuumed away.

I suspect it can also be broken by cases involving, eg, dropped columns.
There are a lot of assumptions in the system that no one will ever try
to read dead tuples.

The fact that you can construct a use-case in which it's good for
something doesn't make it safe in general :-(

regards, tom lane

#4Simon Riggs
simon@2ndQuadrant.com
In reply to: Konstantin Knizhnik (#2)
Re: Read Uncommitted

On Wed, 18 Dec 2019 at 12:11, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>
wrote:

As far as I understand with "read uncommitted" policy we can see two

versions of the same tuple if it was updated by two transactions both of
which were started before us and committed during table traversal by
transaction with "read uncommitted" policy. Certainly "read uncommitted"
means that we are ready to get inconsistent results, but is it really
acceptable to multiple versions of the same tuple?

"In general, read uncommitted will return inconsistent results and
wrong answers. If you look at the changes made by a transaction
while it continues to make changes then you may get partial results
from queries, or you may miss index entries that haven't yet been
written. However, if you are reading transactions that are paused
at the end of their execution for whatever reason then you can
see a consistent result."

I think I already covered your concerns in my suggested docs for this
feature.

I'm not suggesting it for general use.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

#5Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#3)
Re: Read Uncommitted

On Wed, 18 Dec 2019 at 14:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

I present a patch to allow READ UNCOMMITTED that is simple, useful and
efficient.

Won't this break entirely the moment you try to read a tuple containing
toasted-out-of-line values? There's no guarantee that the toast-table
entries haven't been vacuumed away.

I suspect it can also be broken by cases involving, eg, dropped columns.
There are a lot of assumptions in the system that no one will ever try
to read dead tuples.

This was my first concern when I thought about it, but I realised that by
taking a snapshot and then calculating xmin normally, this problem would go
away.

So this won't happen with the proposed patch.

The fact that you can construct a use-case in which it's good for
something doesn't make it safe in general :-(

I agree that safety is a concern, but I don't see any safety issues in the
patch as proposed.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

#6Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#5)
Re: Read Uncommitted

On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com> wrote:

This was my first concern when I thought about it, but I realised that by taking a snapshot and then calculating xmin normally, this problem would go away.

Why? As soon as a transaction aborts, the TOAST rows can be vacuumed
away, but the READ UNCOMMITTED transaction might've already seen the
main tuple. This is not even a particularly tight race, necessarily,
since for example the table might be scanned, feeding tuples into a
tuplesort, and then the detoasting might happen further up in the query
tree after the sort has completed.
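The race described here can be sketched with a toy model (hypothetical names; the real code paths are heap scans, tuplesort, and TOAST detoasting in C): the scan copies a main tuple that still points to an out-of-line value, the aborted transaction's TOAST rows are vacuumed away, and only then does the query try to detoast.

```python
# Toy model of the TOAST race: the main tuple carries only a pointer to
# the out-of-line value, and by the time the query dereferences it, the
# aborting transaction's TOAST rows have been vacuumed away.

heap = {1: {"id": 1, "payload": ("toast_ptr", 9001)}}  # main tuple
toast = {9001: "x" * 10_000}                           # out-of-line value

scanned = list(heap.values())  # scan feeds tuples into a sort: pointers only

toast.pop(9001)                # aborted xact's TOAST rows vacuumed away

def detoast(tup):
    kind, chunk_id = tup["payload"]
    return toast[chunk_id]     # raises KeyError: the value is gone

try:
    detoast(scanned[0])
except KeyError:
    print("dangling TOAST pointer")
```

In the toy model the failure is a clean exception; the concern in the real system is that nothing so tidy is guaranteed when dead data is dereferenced.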

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#7Simon Riggs
simon@2ndQuadrant.com
In reply to: Robert Haas (#6)
Re: Read Uncommitted

On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com>
wrote:

This was my first concern when I thought about it, but I realised that

by taking a snapshot and then calculating xmin normally, this problem would
go away.

Why? As soon as a transaction aborts...

So this is the same discussion as elsewhere about potentially aborted
transactions...
AFAIK, the worst that happens in that case is that the reading transaction
will end with an ERROR, similar to a serializable error.

And that won't happen in the use cases I've explicitly described this as
being useful for, which is where the writing transactions have completed
executing.

I'm not claiming any useful behavior outside of those specific use cases;
this is not some magic discovery that goes faster - the user has absolutely
no reason to run this whatsoever unless they want to see uncommitted data.
There is a very explicit warning advising against using it for anything
else.

Just consider this part of the recovery toolkit.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

#8Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#7)
Re: Read Uncommitted

On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:

So this is the same discussion as elsewhere about potentially aborted transactions...

Yep.

AFAIK, the worst that happens in that case is that the reading transaction will end with an ERROR, similar to a serializable error.

I'm not convinced of that. There's a big difference between a
serializable error, which is an error that is expected to be
user-facing and was designed with that in mind, and just failing a
bunch of random sanity checks all over the backend. If those sanity
checks happen to be less than comprehensive, which I suspect is
likely, there will probably be scenarios where you can crash a backend
and cause a system-wide restart. And you can probably also return just
plain wrong answers to queries in some scenarios.

Just consider this part of the recovery toolkit.

I agree that it would be useful to have a recovery toolkit for reading
uncommitted data, but I think a lot more thought needs to be given to
how such a thing should be designed. If you just add something called
READ UNCOMMITTED, people are going to expect it to have *way* saner
semantics than this will. They'll use it routinely, not just as a
last-ditch mechanism to recover otherwise-lost data. And I'm
reasonably confident that will not work out well.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#9Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#7)
Re: Read Uncommitted

Simon Riggs <simon@2ndquadrant.com> writes:

So this is the same discussion as elsewhere about potentially aborted
transactions...
AFAIK, the worst that happens in that case is that the reading transaction
will end with an ERROR, similar to a serializable error.

No, the worst case is transactions trying to read invalid data, resulting
in either crashes or exploitable security breaches (in the usual vein of
what can go wrong if you can get the C code to follow an invalid pointer).

This seems possible, for example, if you can get a transaction to read
uncommitted data that was written according to some other rowtype than
what the reading transaction thinks the table rowtype is. Casting my eyes
through AlterTableGetLockLevel(), it looks like all the easy ways to break
it like that are safe (for now) because they require AccessExclusiveLock.
But I am quite afraid that we'd introduce security holes by future
reductions of required lock levels --- or else that this feature would be
the sole reason why we couldn't reduce the lock level for some DDL
operation. I'm doubtful that its use-case is worth that.

And that won't happen in the use cases I've explicitly described this as
being useful for, which is where the writing transactions have completed
executing.

My concerns, at least, are not about whether this has any interesting
use-cases. They're about whether the feature can be abused to cause
security problems. I think the odds are fair that that'd be true
even today, and higher that it'd become true sometime in the future.

regards, tom lane

#10Mark Dilger
mark.dilger@enterprisedb.com
In reply to: Simon Riggs (#7)
Re: Read Uncommitted

On 12/18/19 10:06 AM, Simon Riggs wrote:

On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com
<mailto:robertmhaas@gmail.com>> wrote:

On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com
<mailto:simon@2ndquadrant.com>> wrote:

This was my first concern when I thought about it, but I realised

that by taking a snapshot and then calculating xmin normally, this
problem would go away.

Why? As soon as a transaction aborts...

So this is the same discussion as elsewhere about potentially aborted
transactions... AFAIK, the worst that happens in that case is that
the reading transaction will end with an ERROR, similar to a
serializable error.

And that won't happen in the use cases I've explicitly described this
as being useful for, which is where the writing transactions have
completed executing.

I'm not claiming any useful behavior outside of those specific use
cases; this is not some magic discovery that goes faster - the user
has absolutely no reason to run this whatsoever unless they want to
see uncommitted data. There is a very explicit warning advising
against using it for anything else.

Just consider this part of the recovery toolkit.

In that case, don't call it "read uncommitted". Call it some other
thing entirely. Users coming from other databases may request
"read uncommitted" isolation expecting something that works.
Currently, that gets promoted to "read committed" and works. After
your change, that simply breaks and gives them an error.

I was about to write something about security and stability problems,
but Robert and Tom did elsewhere, so I'll just echo their concerns.

Looking at the regression tests, I'm surprised read uncommitted gets
so little test coverage. There's a test in src/test/isolation but
nothing at all in src/test/regression covering this isolation level.

The one in src/test/isolation doesn't look very comprehensive. I'd
at least expect a test that verifies you don't get a syntax error
when you request READ UNCOMMITTED isolation from SQL.

--
Mark Dilger

#11Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#9)
Re: Read Uncommitted

On Wed, 18 Dec 2019 at 18:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Simon Riggs <simon@2ndquadrant.com> writes:

So this is the same discussion as elsewhere about potentially aborted
transactions...
AFAIK, the worst that happens in that case is that the reading

transaction

will end with an ERROR, similar to a serializable error.

No, the worst case is transactions trying to read invalid data, resulting
in either crashes or exploitable security breaches (in the usual vein of
what can go wrong if you can get the C code to follow an invalid pointer).

Yes, but we're not following any pointers as a result of this. The output
is just rows.

This seems possible, for example, if you can get a transaction to read
uncommitted data that was written according to some other rowtype than
what the reading transaction thinks the table rowtype is. Casting my eyes
through AlterTableGetLockLevel(), it looks like all the easy ways to break
it like that are safe (for now) because they require AccessExclusiveLock.
But I am quite afraid that we'd introduce security holes by future
reductions of required lock levels --- or else that this feature would be
the sole reason why we couldn't reduce the lock level for some DDL
operation. I'm doubtful that its use-case is worth that.

I think we can limit it to Read Only transactions without any limitation on
the proposed use cases.

But I'll think some more about that, just in case.

And that won't happen in the use cases I've explicitly described this as
being useful for, which is where the writing transactions have completed
executing.

My concerns, at least, are not about whether this has any interesting
use-cases. They're about whether the feature can be abused to cause
security problems. I think the odds are fair that that'd be true
even today, and higher that it'd become true sometime in the future.

I share your concerns. We have no need or reason to make a quick decision
on this patch.

I submit this patch only as a useful tool for recovering data.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

#12Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Mark Dilger (#10)
Re: Read Uncommitted

On 18/12/2019 20:46, Mark Dilger wrote:

On 12/18/19 10:06 AM, Simon Riggs wrote:

Just consider this part of the recovery toolkit.

In that case, don't call it "read uncommitted". Call it some other
thing entirely. Users coming from other databases may request
"read uncommitted" isolation expecting something that works.
Currently, that gets promoted to "read committed" and works. After
your change, that simply breaks and gives them an error.

I agree that if we have a user-exposed READ UNCOMMITTED isolation level,
it shouldn't be just a recovery tool. For a recovery tool, I think a
set-returning function as part of contrib/pageinspect, for example,
would be more appropriate. Then it could also try to be more defensive
against corrupt pages, and be superuser-only.

- Heikki

#13Stephen Frost
sfrost@snowman.net
In reply to: Robert Haas (#8)
Re: Read Uncommitted

Greetings,

* Robert Haas (robertmhaas@gmail.com) wrote:

On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:

Just consider this part of the recovery toolkit.

I agree that it would be useful to have a recovery toolkit for reading
uncommitted data, but I think a lot more thought needs to be given to
how such a thing should be designed. If you just add something called
READ UNCOMMITTED, people are going to expect it to have *way* saner
semantics than this will. They'll use it routinely, not just as a
last-ditch mechanism to recover otherwise-lost data. And I'm
reasonably confident that will not work out well.

+1.

Thanks,

Stephen

#14Jim Finnerty
jfinnert@amazon.com
In reply to: Stephen Frost (#13)
Re: Read Uncommitted

Many will want to use it to do aggregation, e.g. a much more efficient COUNT(*), because they want performance and don't care very much about transaction consistency. E.g. they want to compute SUM(sales) by salesperson, region for the past 5 years, and don't care very much if some concurrent transaction aborted in the middle of computing this result.

On 12/18/19, 2:35 PM, "Stephen Frost" <sfrost@snowman.net> wrote:

Greetings,

* Robert Haas (robertmhaas@gmail.com) wrote:

On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:

Just consider this part of the recovery toolkit.

I agree that it would be useful to have a recovery toolkit for reading
uncommitted data, but I think a lot more thought needs to be given to
how such a thing should be designed. If you just add something called
READ UNCOMMITTED, people are going to expect it to have *way* saner
semantics than this will. They'll use it routinely, not just as a
last-ditch mechanism to recover otherwise-lost data. And I'm
reasonably confident that will not work out well.

+1.

Thanks,

Stephen

#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jim Finnerty (#14)
Re: Read Uncommitted

"Finnerty, Jim" <jfinnert@amazon.com> writes:

Many will want to use it to do aggregation, e.g. a much more efficient COUNT(*), because they want performance and don't care very much about transaction consistency. E.g. they want to compute SUM(sales) by salesperson, region for the past 5 years, and don't care very much if some concurrent transaction aborted in the middle of computing this result.

It's fairly questionable whether there's any real advantage to be gained
by READ UNCOMMITTED in that sort of scenario --- almost all the tuples
you'd be looking at would be hinted as committed-good, ordinarily, so that
bypassing the relevant checks isn't going to save much. But I take your
point that people would *think* that READ UNCOMMITTED could be used that
way, if they come from some other DBMS. So this reinforces Mark's point
that if we provide something like this, it shouldn't be called READ
UNCOMMITTED. That should be reserved for something that has reasonably
consistent, standards-compliant behavior.

regards, tom lane

#16Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#12)
Re: Read Uncommitted

On Wed, Dec 18, 2019 at 2:29 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:

I agree that if we have a user-exposed READ UNCOMMITTED isolation level,
it shouldn't be just a recovery tool. For a recovery tool, I think a
set-returning function as part of contrib/pageinspect, for example,
would be more appropriate. Then it could also try to be more defensive
against corrupt pages, and be superuser-only.

+1.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#17Mark Dilger
mark.dilger@enterprisedb.com
In reply to: Mark Dilger (#10)
Read Uncommitted regression test coverage

Over in [1], I became concerned that, although postgres supports
Read Uncommitted transaction isolation (by way of Read Committed
mode), there was very little test coverage for it:

On 12/18/19 10:46 AM, Mark Dilger wrote:

Looking at the regression tests, I'm surprised read uncommitted gets
so little test coverage. There's a test in src/test/isolation but
nothing at all in src/test/regression covering this isolation level.

The one in src/test/isolation doesn't look very comprehensive.  I'd
at least expect a test that verifies you don't get a syntax error
when you request READ UNCOMMITTED isolation from SQL.

The attached patch set adds a modicum of test coverage for this.
Do others feel these tests are worth the small run time overhead
they add?

--
Mark Dilger

[1]: /messages/by-id/CANP8+j+mgWfcX9cTPsk7t+1kQCxgyGqHTR5R7suht7mCm_x_hA@mail.gmail.com

Attachments:

0001-regress.patch (text/x-patch, charset=UTF-8, +16 −0)
0002-isolation.patch (text/x-patch, charset=UTF-8, +84 −0)
#18Tom Lane
tgl@sss.pgh.pa.us
In reply to: Mark Dilger (#17)
Re: Read Uncommitted regression test coverage

Mark Dilger <hornschnorter@gmail.com> writes:

The one in src/test/isolation doesn't look very comprehensive.  I'd
at least expect a test that verifies you don't get a syntax error
when you request READ UNCOMMITTED isolation from SQL.

The attached patch set adds a modicum of test coverage for this.
Do others feel these tests are worth the small run time overhead
they add?

No. As you pointed out yourself, READ UNCOMMITTED is the same as READ
COMMITTED, so there's hardly any point in testing its semantic behavior.
One or two tests that check that it is accepted by the grammar seem
like plenty (and even there, what's there to break? If bison starts
failing us to that extent, we've got bigger problems.)

Obviously, if we made it behave differently from READ COMMITTED, then
it would need testing ... but the nature and extent of such testing
would depend a lot on what we did to it, so I'm not eager to try to
predict the need in advance.

regards, tom lane

#19David Steele
david@pgmasters.net
In reply to: Heikki Linnakangas (#12)
Re: Read Uncommitted

On 12/18/19 2:29 PM, Heikki Linnakangas wrote:

On 18/12/2019 20:46, Mark Dilger wrote:

On 12/18/19 10:06 AM, Simon Riggs wrote:

Just consider this part of the recovery toolkit.

In that case, don't call it "read uncommitted".  Call it some other
thing entirely.  Users coming from other databases may request
"read uncommitted" isolation expecting something that works.
Currently, that gets promoted to "read committed" and works.  After
your change, that simply breaks and gives them an error.

I agree that if we have a user-exposed READ UNCOMMITTED isolation level,
it shouldn't be just a recovery tool. For a recovery tool, I think a
set-returning function as part of contrib/pageinspect, for example,
would be more appropriate. Then it could also try to be more defensive
against corrupt pages, and be superuser-only.

+1.

--
-David
david@pgmasters.net

#20Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#15)
Re: Read Uncommitted

On Wed, 18 Dec 2019 at 20:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:

"Finnerty, Jim" <jfinnert@amazon.com> writes:

Many will want to use it to do aggregation, e.g. a much more efficient

COUNT(*), because they want performance and don't care very much about
transaction consistency. E.g. they want to compute SUM(sales) by
salesperson, region for the past 5 years, and don't care very much if some
concurrent transaction aborted in the middle of computing this result.

It's fairly questionable whether there's any real advantage to be gained
by READ UNCOMMITTED in that sort of scenario --- almost all the tuples
you'd be looking at would be hinted as committed-good, ordinarily, so that
bypassing the relevant checks isn't going to save much.

Agreed; this was not intended to give any kind of backdoor benefit and I
don't see any, just tears.

But I take your
point that people would *think* that READ UNCOMMITTED could be used that
way, if they come from some other DBMS. So this reinforces Mark's point
that if we provide something like this, it shouldn't be called READ
UNCOMMITTED.

Seems like general agreement on that point from others on this thread.

That should be reserved for something that has reasonably
consistent, standards-compliant behavior.

Since we're discussing it, exactly what standard are we talking about here?
I'm not saying I care about that, just to complete the discussion.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise

#21Simon Riggs
simon@2ndQuadrant.com
In reply to: Heikki Linnakangas (#12)
#22Andres Freund
andres@anarazel.de
In reply to: Simon Riggs (#7)
#23Mark Dilger
mark.dilger@enterprisedb.com
In reply to: Tom Lane (#18)
#24Simon Riggs
simon@2ndQuadrant.com
In reply to: Andres Freund (#22)
#25Peter Eisentraut
peter_e@gmx.net
In reply to: Simon Riggs (#4)
#26Bernd Helmle
mailings@oopsware.de
In reply to: Simon Riggs (#21)
#27Simon Riggs
simon@2ndQuadrant.com
In reply to: Bernd Helmle (#26)
#28Mark Dilger
mark.dilger@enterprisedb.com
In reply to: Simon Riggs (#24)
#29Mark Dilger
mark.dilger@enterprisedb.com
In reply to: Mark Dilger (#28)
#30Andres Freund
andres@anarazel.de
In reply to: Simon Riggs (#24)
#31Andres Freund
andres@anarazel.de
In reply to: Mark Dilger (#28)
#32Craig Ringer
craig@2ndquadrant.com
In reply to: Andres Freund (#30)
#33Tom Lane
tgl@sss.pgh.pa.us
In reply to: Craig Ringer (#32)
#34Craig Ringer
craig@2ndquadrant.com
In reply to: Tom Lane (#33)