Why copy_relation_data only use wal when WAL archiving is enabled

Started by Jacky Leng over 18 years ago · 38 messages · pgsql-hackers
#1 Jacky Leng
lengjianquan@163.com

If I run the database under non-archiving mode, and execute the following
command:
alter table t set tablespace tblspc1;
Isn't it possible that the "new t" can't be recovered?

#2 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jacky Leng (#1)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Jacky Leng wrote:

If I run the database under non-archiving mode, and execute the following
command:
alter table t set tablespace tblspc1;
Isn't it possible that the "new t" can't be recovered?

No. At the end of copy_relation_data we call smgrimmedsync, which fsyncs
the new relation file.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#3 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Jacky Leng wrote:

If I run the database under non-archiving mode, and execute the following
command:
alter table t set tablespace tblspc1;
Isn't it possible that the "new t" can't be recovered?

No. At the end of copy_relation_data we call smgrimmedsync, which fsyncs
the new relation file.

Usually that's true, but how about this situation:
* First, do the following sequence:
* Create two tablespaces SPC1, SPC2;
* Create table T1 in SPC1 and insert some values into it; suppose T1's
oid/relfilenode is OID1;
* Drop table T1;----------OID1 is released in pg_class and can be
reused.
* Do anything that will make the next oid allocated from
pg_class be OID1, e.g. insert
many, many tuples into a relation with oids;
* Create table T2 in SPC2, and insert some values into it; its
oid/relfilenode is OID1;
* Alter table T2 set tablespace SPC1;---------T2 goes to SPC1 and uses
the same file name as the old T1;
* Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;
* Kill the database abnormally;
* Restart the database;

Let's analyze what will happen during the recovery process:
* When T1's creation is replayed, recovery finds that its file is already
there--actually this file is T2's;
* "T1"'s file (actually T2's) is re-dropped;
* ....
* T2's creation is replayed, and its file has disappeared, so recovery
re-creates an empty one;
* As copy_relation_data didn't record any xlog for T2's ALTER TABLE SET
TABLESPACE op,
after recovery, we'll find that T2 is empty!!!


---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq

#4 Simon Riggs
simon@2ndQuadrant.com
In reply to: Jacky Leng (#3)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#5 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

Yes, it's really forced out.
But if there's no checkpoint, recovery will begin from a point
before T1 was created, and as T1 was dropped, replaying the drop will
remove T2's file!



#6 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Simon Riggs (#4)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Simon Riggs wrote:

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

No, Jacky is right. The same problem exists at least with CLUSTER, and I
think there's other commands that rely on immediate fsync as well.

Attached is a shell script that demonstrates the problem on CVS HEAD
with CLUSTER. It creates two tables, T1 and T2, both with one row. Then
T1 is dropped, and T2 is CLUSTERed, so that the new T2 relation file
happens to get the same relfilenode that T1 had. Then we crash the
server, forcing a WAL replay. After that, T2 is empty. Oops.

Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#7 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Heikki Linnakangas (#6)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Forgot to attach the script I promised...

You need to set $PGDATA before running the script, and psql, pg_ctl and
pg_resetxlog need to be in $PATH. After running the script, restart the
postmaster and run "SELECT * FROM t2". There should be one row in the
table, but it's empty.

Heikki Linnakangas wrote:

Simon Riggs wrote:

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

No, Jacky is right. The same problem exists at least with CLUSTER, and I
think there's other commands that rely on immediate fsync as well.

Attached is a shell script that demonstrates the problem on CVS HEAD
with CLUSTER. It creates two tables, T1 and T2, both with one row. Then
T1 is dropped, and T2 is CLUSTERed, so that the new T2 relation file
happens to get the same relfilenode that T1 had. Then we crash the
server, forcing a WAL replay. After that, T2 is empty. Oops.

Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

Attachments:

cluster-relfilenode-clash.sh.gz (application/x-gzip)
#8 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Heikki Linnakangas (#6)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

I wrote:

Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.

Ok, here's one idea:

Instead of deleting the file immediately on commit of DROP TABLE, the
file is truncated to release the space, but not unlink()ed, to avoid
reusing that relfilenode. The truncated file can be deleted after next
checkpoint.

Now, how does a checkpoint know what to delete? We can use the fsync
request mechanism for that. When a file is truncated, a new kind of
fsync request, a "deletion request", is sent to the bgwriter, which
collects all such requests into a list. Before the checkpoint calculates
the new RedoRecPtr, the list is swapped with an empty one, and after
writing the new checkpoint record, all the files that were on the list
are deleted.

We would leak empty files on crashes, but we leak files on crashes
anyway, so that shouldn't be an issue. This scheme wouldn't require
catalog changes, so it would be suitable for backpatching.

Any better ideas?

Do we care enough about this to fix it? Enough to backpatch? The
probability of this happening is pretty small, but the consequences are
really bad, so my vote is "yes" and "yes".

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#9 Florian Pflug
fgp@phlo.org
In reply to: Heikki Linnakangas (#8)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Heikki Linnakangas wrote:

I wrote:

Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.

Ok, here's one idea:

Instead of deleting the file immediately on commit of DROP TABLE, the
file is truncated to release the space, but not unlink()ed, to avoid
reusing that relfilenode. The truncated file can be deleted after next
checkpoint.

Now, how does checkpoint know what to delete? We can use the fsync
request mechanism for that. When a file is truncated, a new kind of
fsync request, a "deletion request", is sent to the bgwriter, which
collects all such requests to a list. Before checkpoint calculates new
RedoRecPtr, the list is swapped with an empty one, and after writing the
new checkpoint record, all the files that were in the list are deleted.

We would leak empty files on crashes, but we leak files on crashes
anyway, so that shouldn't be an issue. This scheme wouldn't require
catalog changes, so it would be suitable for backpatching.

Any better ideas?

Couldn't we fix this by forcing a checkpoint before we commit the transaction
that created the new pg_class entry for the clustered table? Or rather, more
generally, before committing a transaction that created a new non-temporary
relfilenode but didn't WAL-log any subsequent inserts.

That's of course a rather sledgehammer-like approach to this problem - but at
least for the back branches the fix would be less intrusive...

regards, Florian Pflug

#10 Simon Riggs
simon@2ndQuadrant.com
In reply to: Heikki Linnakangas (#6)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 12:11 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

No, Jacky is right. The same problem exists at least with CLUSTER, and I
think there's other commands that rely on immediate fsync as well.

Attached is a shell script that demonstrates the problem on CVS HEAD
with CLUSTER. It creates two tables, T1 and T2, both with one row. Then
T1 is dropped, and T2 is CLUSTERed, so that the new T2 relation file
happens to get the same relfilenode that T1 had. Then we crash the
server, forcing a WAL replay. After that, T2 is empty. Oops.

Unfortunately I don't see any easy way to fix it.

So, what you are saying is that re-using relfilenodes can cause problems
during recovery in any command that alters the relfilenode of a
relation?

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#11 Florian Pflug
fgp@phlo.org
In reply to: Simon Riggs (#10)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Simon Riggs wrote:

On Wed, 2007-10-17 at 12:11 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:

Second, suppose that no checkpoint has occurred during the above
sequence--although not very likely;

That part is irrelevant. It's forced out to disk and doesn't need
recovery, with or without the checkpoint.

There's no hole that I can see.

No, Jacky is right. The same problem exists at least with CLUSTER, and I
think there's other commands that rely on immediate fsync as well.

Attached is a shell script that demonstrates the problem on CVS HEAD with
CLUSTER. It creates two tables, T1 and T2, both with one row. Then T1 is
dropped, and T2 is CLUSTERed, so that the new T2 relation file happens to
get the same relfilenode that T1 had. Then we crash the server, forcing a
WAL replay. After that, T2 is empty. Oops.

Unfortunately I don't see any easy way to fix it.

So, what you are saying is that re-using relfilenodes can cause problems
during recovery in any command that alters the relfilenode of a relation?

From what I understand, I'd say that creating a relfilenode *and* subsequently
inserting data without WAL-logging causes the problem. If the relfilenode was
recently deleted, the inserts might be effectively undone upon recovery (because
we first replay the delete), but later *not* redone (because we didn't WAL-log
the inserts).

That brings me to another idea for a fix that is less heavyweight than my
previous checkpoint-before-commit suggestion.

We could make relfilenodes globally unique if we added the xid and epoch of the
creating transaction to the filename. Together those are 64 bits, so if we encode
them in base 36 (using A-Z, 0-9), that'd increase the length of the filenames by 13.

regards, Florian Pflug

#12 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Simon Riggs (#10)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Simon Riggs wrote:

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

Reusing a relfilenode of a deleted relation, before next checkpoint
following the commit of the deleting transaction, for an operation that
doesn't WAL log the contents of the new relation, leads to data loss on
recovery.

Or

Performing non-WAL logged operations on a relation file leads to a
truncated file on recovery, if the relfilenode of that file used to
belong to a relation that was dropped after the last checkpoint.

Happy?

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#13 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Florian Pflug (#9)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Florian G. Pflug wrote:

Heikki Linnakangas wrote:

I wrote:

Unfortunately I don't see any easy way to fix it. One approach would be
to avoid reusing the relfilenodes until next checkpoint, but I don't see
any nice place to keep track of OIDs that have been dropped since last
checkpoint.

Ok, here's one idea:

Instead of deleting the file immediately on commit of DROP TABLE, the
file is truncated to release the space, but not unlink()ed, to avoid
reusing that relfilenode. The truncated file can be deleted after next
checkpoint.

Now, how does checkpoint know what to delete? We can use the fsync
request mechanism for that. When a file is truncated, a new kind of
fsync request, a "deletion request", is sent to the bgwriter, which
collects all such requests to a list. Before checkpoint calculates new
RedoRecPtr, the list is swapped with an empty one, and after writing the
new checkpoint record, all the files that were in the list are deleted.

We would leak empty files on crashes, but we leak files on crashes
anyway, so that shouldn't be an issue. This scheme wouldn't require
catalog changes, so it would be suitable for backpatching.

Any better ideas?

Couldn't we fix this by forcing a checkpoint before we commit the
transaction that created the new pg_class entry for the clustered table?
Or rather, more generally, before committing a transaction that created
a new non-temporary relfilenode but didn't WAL-log any subsequent inserts.

Yes, that would work. As a small optimization, you could set a flag in
shared mem whenever you delete a rel file, and skip the checkpoint when
that flag isn't set.

That's of course a rather sledgehammer-like approach to this problem -
but at least for the back branches the fix would be less intrusive...

Too much of a sledgehammer IMHO.

BTW, CREATE INDEX is also vulnerable. And in 8.3, COPY to a table
created/truncated in the same transaction.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#14 Simon Riggs
simon@2ndQuadrant.com
In reply to: Heikki Linnakangas (#12)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 15:02 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

Reusing a relfilenode of a deleted relation, before next checkpoint
following the commit of the deleting transaction, for an operation that
doesn't WAL log the contents of the new relation, leads to data loss on
recovery.

OK, thanks.

I wasn't aware we reused relfilenode ids. The code in GetNewOid() doesn't
look deterministic to me, or at least isn't meant to be.
GetNewObjectId() should be cycling around, so although the oid index
scan using SnapshotDirty won't see committed-deleted rows, that shouldn't
matter for 2^32 oids. So what gives?

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#15 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Simon Riggs (#14)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Simon Riggs wrote:

On Wed, 2007-10-17 at 15:02 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

Reusing a relfilenode of a deleted relation, before next checkpoint
following the commit of the deleting transaction, for an operation that
doesn't WAL log the contents of the new relation, leads to data loss on
recovery.

OK, thanks.

I wasn't aware we reused relfilenode ids. The code in GetNewOid() doesn't
look deterministic to me, or at least isn't meant to be.
GetNewObjectId() should be cycling around, so although the oid index
scan using SnapshotDirty won't see committed deleted rows that shouldn't
matter for 2^32 oids. So what gives?

I don't think you quite understand what's happening yet. GetNewOid()
is not interesting here; look at GetNewRelFileNode() instead. And
neither are snapshots or MVCC visibility rules.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#15)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Heikki Linnakangas <heikki@enterprisedb.com> writes:

I don't think you still quite understand what's happening. GetNewOid()
is not interesting here, look at GetNewRelFileNode() instead. And
neither are snapshots or MVCC visibility rules.

Simon has a legitimate objection; not that there's no bug, but that the
probability of getting bitten is exceedingly small. The test script you
showed cheats six-ways-from-Sunday to cause an OID collision that would
never happen in practice. The only case where it would really happen
is if a table that has existed for a long time (~ 2^32 OID creations)
gets dropped and then you're unlucky enough to recycle that exact OID
before the next checkpoint --- and then crash before the checkpoint.

I think we should think about ways to fix this, but I don't feel a need
to try to backpatch a solution.

I tend to agree that truncating the file, and extending the fsync
request mechanism to actually delete it after the next checkpoint,
is the most reasonable route to a fix.

I think the objection about leaking files on crash is wrong. We'd
have the replay of the deletion to fix things up --- it could probably
delete the file immediately, and if not could certainly put it back
on the fsync request queue.

regards, tom lane

#17 Simon Riggs
simon@2ndQuadrant.com
In reply to: Heikki Linnakangas (#15)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 17:36 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

On Wed, 2007-10-17 at 15:02 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

Reusing a relfilenode of a deleted relation, before next checkpoint
following the commit of the deleting transaction, for an operation that
doesn't WAL log the contents of the new relation, leads to data loss on
recovery.

OK, thanks.

I wasn't aware we reused relfilenode ids. The code in GetNewOid() doesn't
look deterministic to me, or at least isn't meant to be.
GetNewObjectId() should be cycling around, so although the oid index
scan using SnapshotDirty won't see committed deleted rows that shouldn't
matter for 2^32 oids. So what gives?

I don't think you still quite understand what's happening.

Clearly. It's not a problem to admit that.

GetNewOid()
is not interesting here, look at GetNewRelFileNode() instead. And
neither are snapshots or MVCC visibility rules.

Which calls GetNewOid() in all cases, AFAICS.

How does the reuse you say is happening come about? Seems like the bug
is in the reuse, not in how we cope with potential reuse.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#18 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Tom Lane (#16)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Tom Lane wrote:

Simon has a legitimate objection; not that there's no bug, but that the
probability of getting bitten is exceedingly small.

Oh, if that's what he meant, he's right.

The test script you
showed cheats six-ways-from-Sunday to cause an OID collision that would
never happen in practice. The only case where it would really happen
is if a table that has existed for a long time (~ 2^32 OID creations)
gets dropped and then you're unlucky enough to recycle that exact OID
before the next checkpoint --- and then crash before the checkpoint.

Yeah, it's unlikely to happen, but the consequences are horrible.

Note that it's not just DROP TABLE that's a problem, but anything that
uses smgrscheduleunlink, including CLUSTER and REINDEX.

I tend to agree that truncating the file, and extending the fsync
request mechanism to actually delete it after the next checkpoint,
is the most reasonable route to a fix.

Ok, I'll write a patch to do that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#19 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Simon Riggs (#17)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

Simon Riggs wrote:

On Wed, 2007-10-17 at 17:36 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

On Wed, 2007-10-17 at 15:02 +0100, Heikki Linnakangas wrote:

Simon Riggs wrote:

If you've got a better problem statement it would be good to get that
right first before we discuss solutions.

Reusing a relfilenode of a deleted relation, before next checkpoint
following the commit of the deleting transaction, for an operation that
doesn't WAL log the contents of the new relation, leads to data loss on
recovery.

OK, thanks.

I wasn't aware we reused relfilenode ids. The code in GetNewOid() doesn't
look deterministic to me, or at least isn't meant to be.
GetNewObjectId() should be cycling around, so although the oid index
scan using SnapshotDirty won't see committed deleted rows that shouldn't
matter for 2^32 oids. So what gives?

I don't think you still quite understand what's happening.

Clearly. It's not a problem to admit that.

GetNewOid()
is not interesting here, look at GetNewRelFileNode() instead. And
neither are snapshots or MVCC visibility rules.

Which calls GetNewOid() in all cases, AFAICS.

How does the reuse you say is happening come about? Seems like the bug
is in the reuse, not in how we cope with potential reuse.

After a table is dropped, the dropping transaction has been committed,
and the relation file has been deleted, there's nothing preventing the
reuse. There's no trace of that relfilenode in the system (except in the
WAL, which we never look into except on WAL replay). There's a dead row
in pg_class with that relfilenode, but even that could be vacuumed away
(not that it matters because we don't examine that).

Now the problem is that there's a record in the WAL to delete a relation
file with that relfilenode. If that relfilenode was reused, we delete
the contents of the new relation file when we replay that WAL record.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#20 Simon Riggs
simon@2ndQuadrant.com
In reply to: Heikki Linnakangas (#18)
Re: Why copy_relation_data only use wal when WAL archiving is enabled

On Wed, 2007-10-17 at 18:13 +0100, Heikki Linnakangas wrote:

The test script you
showed cheats six-ways-from-Sunday to cause an OID collision that would
never happen in practice. The only case where it would really happen
is if a table that has existed for a long time (~ 2^32 OID creations)
gets dropped and then you're unlucky enough to recycle that exact OID
before the next checkpoint --- and then crash before the checkpoint.

Yeah, it's unlikely to happen, but the consequences are horrible.

When is this going to happen?

We'd need to insert 2^32 toast chunks, which is >4 TB of data, or insert
2^32 large objects, or create 2^32 tables, or any combination of the
above, all within one checkpoint duration *and* exactly hit the
same relation.

That's a weird and huge application, a very fast server, and an unlucky
DBA hitting the exact OID to be reused -- and then having the server
crash before we'd ever notice.

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com

#21 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Simon Riggs (#20)
#22 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
#23 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
#24 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
#25 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
#26 Jacky Leng
lengjianquan@163.com
In reply to: Jacky Leng (#1)
#27 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jacky Leng (#22)
#28 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Heikki Linnakangas (#18)
#29 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#28)
#30 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Tom Lane (#29)
#31 Florian Pflug
fgp@phlo.org
In reply to: Heikki Linnakangas (#18)
#32 Florian Pflug
fgp@phlo.org
In reply to: Heikki Linnakangas (#18)
#33 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Florian Pflug (#31)
#34 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Florian Pflug (#31)
#35 Florian Pflug
fgp@phlo.org
In reply to: Tom Lane (#34)
#36 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Tom Lane (#29)
#37 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Heikki Linnakangas (#36)
#38 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#37)