Proposal for 9.1: WAL streaming from WAL buffers

Started by Fujii Masao, 11 June 2010. 52 messages. pgsql-hackers
#1 Fujii Masao
masao.fujii@gmail.com

Hi,

In 9.0, walsender always reads WAL from the disk and sends it to the standby.
That is, we cannot send WAL until it has been written (and flushed) to the disk.
This degrades the performance of synchronous replication very much, since a
transaction commit must wait for the WAL write time *plus* the replication time.

The attached patch enables walsender to read data from WAL buffers in addition
to the disk. Since we can write and send WAL simultaneously, in synchronous
replication, a transaction commit has only to wait for either of them. So the
performance would significantly increase.

Now three hackers (Zoltan, Simon and I) are planning to develop a synchronous
replication feature. I'm not sure whose patch will be committed in the end. But
since the attached patch provides just an infrastructure to optimize SR, it
should work fine together with any of them and benefit each.

I'll add the patch to the next CF. AFAIK the ReviewFest will start on Jun 15.
During it, if you are interested in the patch, please feel free to review it.
You can also get the code changes from my git repository:

git://git.postgresql.org/git/users/fujii/postgres.git
branch: read-wal-buffers

Here are the details of the change. At first, walsender reads WAL from the
disk. Once it has reached the current write location (i.e., there is no
unsent WAL on disk), it attempts to read from WAL buffers. This buffer
reading continues until the WAL to send has been purged from WAL buffers. IOW,
if WAL buffers are large enough and walsender has been keeping up with WAL
insertion, it can read WAL from the buffers indefinitely.

Then, if the WAL to send has been purged from the buffers, walsender backs off
and tries to read it from the disk. If it finds no WAL to send on disk,
walsender attempts to read from the buffers again. Walsender repeats these
operations.
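The alternation above can be sketched in a few lines of C. This is a toy illustration, not code from the patch: the variable and function names (`write_location`, `oldest_buffered`, `choose_wal_source`) are hypothetical, and the LSN is simplified to a plain integer.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t XLogRecPtr;             /* simplified; 9.0 uses a struct */

static XLogRecPtr write_location = 300;  /* WAL written out to disk so far */
static XLogRecPtr oldest_buffered = 200; /* oldest record still in wal_buffers */

/* Decide where the next chunk of WAL should be read from: prefer the
 * disk while unsent WAL remains there; once caught up, switch to the
 * buffers; if the record has been recycled out of the buffers, back off
 * to the disk again. */
static const char *
choose_wal_source(XLogRecPtr sendptr)
{
    if (sendptr < write_location)
        return "disk";          /* unsent WAL still on disk */
    if (sendptr >= oldest_buffered)
        return "wal_buffers";   /* caught up: read from the buffers */
    return "disk";              /* purged from buffers: back off */
}
```

Walsender would call this each time around its send loop, so the source flips back and forth depending on how well it keeps up with WAL insertion.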

The location of the oldest record in the buffers is saved in shared memory.
This location is used to determine whether a particular piece of WAL is still
in the buffers or not.

To avoid lock contention, walsender reads WAL buffers and XLogCtl->xlblocks
without holding either WALInsertLock or WALWriteLock. Of course, they might be
changed by buffer replacement while being read. So after reading them, we
check that what we read was valid, by comparing the location of the WAL we
read with the location of the oldest record in the buffers. This logic is
similar to what XLogRead() does at the end.
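A minimal sketch of that optimistic read-then-validate pattern, assuming a hypothetical shared-memory variable `oldest_buffered` and a single buffer page; the helper name and layout are illustrative, not the actual patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t XLogRecPtr;

#define XLOG_BLCKSZ 8192

static XLogRecPtr oldest_buffered = 1000;  /* kept in shared memory */
static char wal_page[XLOG_BLCKSZ];         /* one page of wal_buffers */

/* Copy a page from wal_buffers without taking WALInsertLock or
 * WALWriteLock, then verify the copy afterwards: if buffer replacement
 * advanced the oldest-record location past readptr while we copied, the
 * copy may be bogus and the caller must retry from disk. */
static bool
read_from_wal_buffers(XLogRecPtr readptr, char *dst)
{
    memcpy(dst, wal_page, XLOG_BLCKSZ);    /* lock-free copy */

    if (readptr < oldest_buffered)
        return false;                      /* recycled underneath us */
    return true;
}
```

The post-hoc check is what makes the lock-free copy safe: a stale copy is always detected and discarded, never sent to the standby.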

This feature is required to prevent the performance of synchronous
replication from dropping significantly. It also cuts the time a transaction
committed on the master takes to become visible on the standby, so it's
useful for asynchronous replication as well.

Thought? Comment? Objection?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachments:

read_wal_buffers_v1.patch (application/octet-stream), +338 -108
#2 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#1)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Fri, Jun 11, 2010 at 9:14 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

Thought? Comment? Objection?

What happens if the WAL is streamed to the standby and then the master
crashes without writing that WAL to disk?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#3 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#2)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Fri, Jun 11, 2010 at 10:22 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Jun 11, 2010 at 9:14 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

Thought? Comment? Objection?

What happens if the WAL is streamed to the standby and then the master
crashes without writing that WAL to disk?

What are you concerned about?

I think the situation would be the same as in 9.0 from the user's perspective.
After failover, a transaction which a client regards as aborted (because
of the crash) might be visible or invisible on the new master (i.e., the
original standby). For now, we cannot control that.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

#4 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#3)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Fri, Jun 11, 2010 at 9:57 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

On Fri, Jun 11, 2010 at 10:22 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Jun 11, 2010 at 9:14 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

Thought? Comment? Objection?

What happens if the WAL is streamed to the standby and then the master
crashes without writing that WAL to disk?

What are you concerned about?

I think that the situation would be the same as 9.0 from users' perspective.
After failover, the transaction which a client regards as aborted (because
of the crash) might be visible or invisible on new master (i.e., original
standby). For now, we cannot control that.

I think the failover case might be OK. But if the master crashes and
restarts, the slave might be left thinking its xlog position is ahead
of the xlog position on the master.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Fujii Masao (#1)
Re: Proposal for 9.1: WAL streaming from WAL buffers

Fujii Masao <masao.fujii@gmail.com> writes:

In 9.0, walsender reads WAL always from the disk and sends it to the standby.
That is, we cannot send WAL until it has been written (and flushed) to the disk.

I believe the above statement to be incorrect: walsender does *not* wait
for an fsync to occur.

I agree with the idea of trying to read from WAL buffers instead of the
file system, but the main reason why is that the current behavior makes
FADVISE_DONTNEED for WAL pretty dubious. It'd be a good idea to still
(artificially) limit replication to not read ahead of the written-out
data.

... Since we can write and send WAL simultaneously, in synchronous
replication, a transaction commit has only to wait for either of them. So the
performance would significantly increase.

That performance claim, frankly, is ludicrous. There is no way that
round trip network delay plus write+fsync on the slave is faster than
local write+fsync. Furthermore, I would say that you are thinking
exactly backwards about the requirements for synchronous replication:
what that would mean is that transaction commit waits for *both*,
not whichever one finishes first.

regards, tom lane

#6 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Tom Lane (#5)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On 06/11/2010 04:31 PM, Tom Lane wrote:

Fujii Masao <masao.fujii@gmail.com> writes:

In 9.0, walsender reads WAL always from the disk and sends it to the standby.
That is, we cannot send WAL until it has been written (and flushed) to the disk.

I believe the above statement to be incorrect: walsender does *not* wait
for an fsync to occur.

I agree with the idea of trying to read from WAL buffers instead of the
file system, but the main reason why is that the current behavior makes
FADVISE_DONTNEED for WAL pretty dubious. It'd be a good idea to still
(artificially) limit replication to not read ahead of the written-out
data.

... Since we can write and send WAL simultaneously, in synchronous
replication, a transaction commit has only to wait for either of them. So the
performance would significantly increase.

That performance claim, frankly, is ludicrous. There is no way that
round trip network delay plus write+fsync on the slave is faster than
local write+fsync. Furthermore, I would say that you are thinking
exactly backwards about the requirements for synchronous replication:
what that would mean is that transaction commit waits for *both*,
not whichever one finishes first.

hmm, not sure that is what Fujii tried to say - I think his point was
that in the original case we would have serialized all the operations
(first write+sync on the master, network afterwards, and write+sync on
the slave) and now we could try parallelizing by sending the WAL before
we have synced locally.

Stefan

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stefan Kaltenbrunner (#6)
Re: Proposal for 9.1: WAL streaming from WAL buffers

Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:

hmm not sure that is what fujii tried to say - I think his point was
that in the original case we would have serialized all the operations
(first write+sync on the master, network afterwards and write+sync on
the slave) and now we could try parallelizing by sending the wal before
we have synced locally.

Well, we're already not waiting for fsync, which is the slowest part.
If there's a performance problem, it may be because FADVISE_DONTNEED
disables kernel buffering so that we're forced to actually read the data
back from disk before sending it on down the wire.

regards, tom lane

#8 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Tom Lane (#7)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On 06/11/2010 04:47 PM, Tom Lane wrote:

Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:

hmm not sure that is what fujii tried to say - I think his point was
that in the original case we would have serialized all the operations
(first write+sync on the master, network afterwards and write+sync on
the slave) and now we could try parallelizing by sending the wal before
we have synced locally.

Well, we're already not waiting for fsync, which is the slowest part.
If there's a performance problem, it may be because FADVISE_DONTNEED
disables kernel buffering so that we're forced to actually read the data
back from disk before sending it on down the wire.

hmm ok - but assuming sync rep we would end up with something like the
following (hypothetically assuming each operation takes 1 time unit):

originally:

write 1
sync 1
network 1
write 1
sync 1

total: 5

whereas in the new case we would basically have the write+sync compete
with network+write+sync in parallel (total 3 units), and we would only
have to wait for the slower of those two sets of operations instead of
the total time of both. Or am I missing something?
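Stefan's unit-cost arithmetic can be written out directly. A sketch under his stated assumption that each step (write, fsync, network hop) costs one time unit; the function names are illustrative:

```c
#include <assert.h>

static int max2(int a, int b) { return a > b ? a : b; }

/* Serialized (9.0-style) commit: every step happens in sequence. */
static int serialized_commit_cost(void)
{
    /* master write + master fsync + network + standby write + standby fsync */
    return 1 + 1 + 1 + 1 + 1;   /* 5 units */
}

/* Parallelized commit: the local path and the replication path run
 * concurrently, and the commit waits for whichever is slower. */
static int parallel_commit_cost(void)
{
    int local  = 1 + 1;         /* master write + fsync */
    int remote = 1 + 1 + 1;     /* network + standby write + standby fsync */
    return max2(local, remote); /* 3 units */
}
```

So under these (admittedly idealized) equal-cost assumptions, the parallel scheme saves two of the five units per synchronous commit.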

Stefan

#9 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#7)
Re: Proposal for 9.1: WAL streaming from WAL buffers

Well, we're already not waiting for fsync, which is the slowest part.
If there's a performance problem, it may be because FADVISE_DONTNEED
disables kernel buffering so that we're forced to actually read the data
back from disk before sending it on down the wire.

Well, that's fairly direct to solve, no? Just disable FADVISE_DONTNEED
if walsenders > 0.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

#10 Florian Pflug
fgp@phlo.org
In reply to: Tom Lane (#5)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Jun 11, 2010, at 16:31 , Tom Lane wrote:

Fujii Masao <masao.fujii@gmail.com> writes:

In 9.0, walsender reads WAL always from the disk and sends it to the standby.
That is, we cannot send WAL until it has been written (and flushed) to the disk.

I believe the above statement to be incorrect: walsender does *not* wait
for an fsync to occur.

Hm, but then Robert's failure case is real, and streaming replication might break due to an OS-level crash of the master. Or am I missing something?

best regards,
Florian Pflug

#11 Josh Berkus
josh@agliodbs.com
In reply to: Florian Pflug (#10)
Re: Proposal for 9.1: WAL streaming from WAL buffers

Hm, but then Robert's failure case is real, and streaming replication might break due to an OS-level crash of the master. Or am I missing something?

Well, in the failover case this isn't a problem, it's a benefit: the
standby gets a transaction which you would have lost off the master.
However, I can see this as a problem in the event of a server-room
powerout with very bad timing where there isn't a failover to the standby:

1) Master goes out
2) "floating" transaction applied to standby.
3) Standby goes out
4) Power back on
5) master comes up
6) standby comes up

It seems like, in that sequence, the standby would have one transaction
which the master doesn't have, yet the standby thinks it can continue
getting WAL from the master. Or did I miss something which makes this
impossible?

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

#12 Florian Pflug
fgp@phlo.org
In reply to: Josh Berkus (#11)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Jun 12, 2010, at 3:10 , Josh Berkus wrote:

Hm, but then Robert's failure case is real, and streaming replication might break due to an OS-level crash of the master. Or am I missing something?

1) Master goes out
2) "floating" transaction applied to standby.
3) Standby goes out
4) Power back on
5) master comes up
6) standby comes up

It seems like, in that sequence, the standby would have one transaction
which the master doesn't have, yet the standby thinks it can continue
getting WAL from the master. Or did I miss something which makes this
impossible?

I did indeed miss something - with wal_sync_method set to either open_datasync or open_sync, all written WAL is also synced. Since open_datasync is the preferred setting according to http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#GUC-WAL-SYNC-METHOD, systems supporting open_datasync should be safe.

My Ubuntu 10.04 box running postgres 8.4.4 doesn't support open_datasync though, and hence defaults to fdatasync. Probably because of this fragment in xlogdefs.h
#if O_DSYNC != BARE_OPEN_SYNC_FLAG
#define OPEN_DATASYNC_FLAG (O_DSYNC | PG_O_DIRECT)
#endif

glibc defines O_DSYNC as an alias for O_SYNC and justifies that with:
"Most Linux filesystems don't actually implement the POSIX O_SYNC semantics, which require all metadata updates of a write to be on disk on returning to userspace, but only the O_DSYNC semantics, which require only actual file data and metadata necessary to retrieve it to be on disk by the time the system call returns."

If that is true, I believe we should default to open_sync, not fdatasync if open_datasync isn't available, at least on linux.
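For reference, the default under discussion can be pinned explicitly rather than left to the platform-dependent choice; a postgresql.conf fragment (which of the listed values are accepted depends on the platform and kernel):

```
# Force the WAL sync method instead of relying on the compiled-in default.
wal_sync_method = fdatasync   # or: open_datasync, open_sync, fsync, fsync_writethrough
```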

best regards,
Florian Pflug

#13 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Josh Berkus (#9)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On 12/06/10 01:16, Josh Berkus wrote:

Well, we're already not waiting for fsync, which is the slowest part.
If there's a performance problem, it may be because FADVISE_DONTNEED
disables kernel buffering so that we're forced to actually read the data
back from disk before sending it on down the wire.

Well, that's fairly direct to solve, no? Just disable FADVISE_DONTNEED
if walsenders > 0.

We already do that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#14 Greg Smith
gsmith@gregsmith.com
In reply to: Florian Pflug (#12)
Re: Proposal for 9.1: WAL streaming from WAL buffers

Florian Pflug wrote:

glibc defines O_DSYNC as an alias for O_SYNC and warrants that with
"Most Linux filesystems don't actually implement the POSIX O_SYNC semantics, which require all metadata updates of a write to be on disk on returning to userspace, but only the O_DSYNC semantics, which require only actual file data and metadata necessary to retrieve it to be on disk by the time the system call returns."

If that is true, I believe we should default to open_sync, not fdatasync if open_datasync isn't available, at least on linux.

It's not true, because O_SYNC on Linux has basically never worked reliably
on ext3. See
http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php for an
example of how terrible the situation would be if O_SYNC were the
default on Linux.

We just got a report that a better O_DSYNC is now properly exposed
starting on kernel 2.6.33+glibc 2.12:
http://archives.postgresql.org/message-id/201006041539.03868.cousinmarc@gmail.com
and it's possible they may have finally fixed it so it works like it's
supposed to. PostgreSQL versions compiled against the right
prerequisites will default to O_DSYNC by themselves. Whether or not
this is a good thing has yet to be determined. The last thing we'd want
to do at this point is make the old and usually broken O_SYNC behavior
suddenly preferred, when the new and possibly fixed O_DSYNC one will be
automatically selected when available without any code changes on the
database side.

--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us

#15 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#4)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Fri, Jun 11, 2010 at 11:24 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I think the failover case might be OK.  But if the master crashes and
restarts, the slave might be left thinking its xlog position is ahead
of the xlog position on the master.

Right. Unless we perform a failover in this case, the standby might go down
because of WAL inconsistency after the master restarts. To avoid this
problem, walsender must wait for WAL to be not only written but also *fsynced*
on the master before sending it, as 9.0 does. Though this would degrade
performance, it might be useful in some cases. Should we provide a knob to
specify whether to allow the standby to get ahead of the master or not?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

#16 Fujii Masao
masao.fujii@gmail.com
In reply to: Tom Lane (#7)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Fri, Jun 11, 2010 at 11:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:

hmm not sure that is what fujii tried to say - I think his point was
that in the original case we would have serialized all the operations
(first write+sync on the master, network afterwards and write+sync on
the slave) and now we could try parallelizing by sending the wal before
we have synced locally.

Well, we're already not waiting for fsync, which is the slowest part.

No, currently walsender waits for fsync.

Walsender tries to send WAL up to xlogctl->LogwrtResult.Write. OTOH,
xlogctl->LogwrtResult.Write is updated after XLogWrite() performs fsync.
As a result, walsender cannot send WAL that has not been fsynced yet. Should
we update xlogctl->LogwrtResult.Write before XLogWrite() performs fsync,
for 9.0?

But that change would cause the problem that Robert pointed out.
http://archives.postgresql.org/pgsql-hackers/2010-06/msg00670.php

If there's a performance problem, it may be because FADVISE_DONTNEED
disables kernel buffering so that we're forced to actually read the data
back from disk before sending it on down the wire.

Currently, if max_wal_senders > 0, POSIX_FADV_DONTNEED is not used for
WAL files at all.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

#17 Fujii Masao
masao.fujii@gmail.com
In reply to: Stefan Kaltenbrunner (#8)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Sat, Jun 12, 2010 at 12:15 AM, Stefan Kaltenbrunner
<stefan@kaltenbrunner.cc> wrote:

hmm ok - but assuming sync rep we would end up with something like the
following (hypothetically assuming each operation takes 1 time unit):

originally:

write 1
sync 1
network 1
write 1
sync 1

total: 5

whereas in the new case we would basically have the write+sync compete with
network+write+sync in parallel (total 3 units), and we would only have to wait
for the slower of those two sets of operations instead of the total time of
both. Or am I missing something?

Yeah, this is what I'd like to say. Thanks!

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

#18 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#15)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Mon, Jun 14, 2010 at 4:14 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

On Fri, Jun 11, 2010 at 11:24 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I think the failover case might be OK.  But if the master crashes and
restarts, the slave might be left thinking its xlog position is ahead
of the xlog position on the master.

Right. Unless we perform a failover in this case, the standby might go down
because of inconsistency of WAL after restarting the master. To avoid this
problem, walsender must wait for WAL to be not only written but also *fsynced*
on the master before sending it as 9.0 does. Though this would degrade the
performance, this might be useful for some cases. We should provide the knob
to specify whether to allow the standby to go ahead of the master or not?

Maybe. That sounds like a pretty enormous foot-gun to me, considering
that we have no way of recovering from the situation where the standby
gets ahead of the master. Right now, I believe we're still in the
situation where the standby goes into an infinite CPU-chewing,
log-spewing loop, but even after we fix that it's not going to be good
enough to really handle that case sensibly, which we probably need to
do if we want to make this change.

Come to think of it, can this happen already? Can the master stream
WAL to the standby after it's written but before it's fsync'd?

We should get the open item fixed for 9.0 here before we start
worrying about 9.1.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#19 Simon Riggs
simon@2ndQuadrant.com
In reply to: Fujii Masao (#16)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Mon, 2010-06-14 at 17:39 +0900, Fujii Masao wrote:

On Fri, Jun 11, 2010 at 11:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:

hmm not sure that is what fujii tried to say - I think his point was
that in the original case we would have serialized all the operations
(first write+sync on the master, network afterwards and write+sync on
the slave) and now we could try parallelizing by sending the wal before
we have synced locally.

Well, we're already not waiting for fsync, which is the slowest part.

No, currently walsender waits for fsync.

Walsender tries to send WAL up to xlogctl->LogwrtResult.Write. OTOH,
xlogctl->LogwrtResult.Write is updated after XLogWrite() performs fsync.
As the result, walsender cannot send WAL not fsynced yet. We should
update xlogctl->LogwrtResult.Write before XLogWrite() performs fsync
for 9.0?

But that change would cause the problem that Robert pointed out.
http://archives.postgresql.org/pgsql-hackers/2010-06/msg00670.php

ISTM you just defined some clear objectives for next work.

Copying the data from WAL buffers is mostly irrelevant. The majority of
time is lost waiting for fsync. The biggest issue is about how to allow
WAL write and WALSender to act concurrently and have backend wait for
both.

Sure, copying data from wal_buffers will be faster still, but it will
cause you to address some subtle data structure locking operations that
we could solve at a later time. And it still gives the problem of how
the master resets itself if the standby really is ahead.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Training and Services

#20 Simon Riggs
simon@2ndQuadrant.com
In reply to: Fujii Masao (#16)
Re: Proposal for 9.1: WAL streaming from WAL buffers

On Mon, 2010-06-14 at 17:39 +0900, Fujii Masao wrote:

No, currently walsender waits for fsync.
...

But that change would cause the problem that Robert pointed out.
http://archives.postgresql.org/pgsql-hackers/2010-06/msg00670.php

Presumably this means that if synchronous_commit = off on the primary,
SR in 9.0 will no longer work correctly if the primary crashes?

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Training and Services

#21 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#18)
#22 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#21)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Fujii Masao (#16)
#24 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#22)
#25 Fujii Masao
masao.fujii@gmail.com
In reply to: Tom Lane (#23)
#26 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Fujii Masao (#25)
#27 Fujii Masao
masao.fujii@gmail.com
In reply to: Heikki Linnakangas (#26)
#28 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#24)
#29 Florian Pflug
fgp@phlo.org
In reply to: Fujii Masao (#27)
#30 Josh Berkus
josh@agliodbs.com
In reply to: Robert Haas (#28)
#31 Robert Haas
robertmhaas@gmail.com
In reply to: Josh Berkus (#30)
#32 Josh Berkus
josh@agliodbs.com
In reply to: Robert Haas (#31)
#33 Josh Berkus
josh@agliodbs.com
In reply to: Josh Berkus (#32)
#34 Robert Haas
robertmhaas@gmail.com
In reply to: Josh Berkus (#32)
#35 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#31)
#36 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Fujii Masao (#35)
#37 Bruce Momjian
bruce@momjian.us
In reply to: Heikki Linnakangas (#36)
#38 Simon Riggs
simon@2ndQuadrant.com
In reply to: Fujii Masao (#35)
#39 Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#38)
#40 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#39)
#41 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#40)
#42 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#41)
#43 Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#42)
#44 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#1)
#45 Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Robert Haas (#44)
#46 Robert Haas
robertmhaas@gmail.com
In reply to: Dimitri Fontaine (#45)
#47 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Dimitri Fontaine (#45)
#48 Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Tom Lane (#47)
#49 Josh Berkus
josh@agliodbs.com
In reply to: Robert Haas (#44)
#50 Robert Haas
robertmhaas@gmail.com
In reply to: Josh Berkus (#49)
#51 marcin mank
marcin.mank@gmail.com
In reply to: Robert Haas (#44)
#52 Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#50)