WIP patch for parallel pg_dump

Started by Joachim Wieland over 15 years ago · 80 messages · pgsql-hackers
#1 Joachim Wieland
joe@mcknight.de

This is the second patch for parallel pg_dump, now the actual part that
parallelizes the whole thing. More precisely, it adds parallel backup/restore
to pg_dump/pg_restore for the directory archive format and keeps the parallel
restore part of the custom archive format. Combined with my directory archive
format patch, which also includes a prototype of the liblzf compression, you
can use this compression with any of the just-mentioned backup/restore
scenarios. This patch is on top of the previous directory patch.

You would run a regular parallel dump with

$ pg_dump -j 4 -Fd -f out.dir dbname

In previous discussions there was a request to add support for multiple
directories, which I have done as well, so that you can also run

$ pg_dump -j 4 -Fd -f dir1:dir2:dir3 dbname

to distribute the data equally among those three directories (we can still
discuss the syntax; I am not all that happy with the colon either...)
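For illustration only, a PATH-style colon list can be split in plain POSIX shell, with objects handed out round-robin; the directory and table names below are made up, and this is not the patch's actual parsing code:

```shell
#!/bin/sh
# Split a colon-separated target list (PATH-style) into individual
# directories, then hand objects out round-robin. Illustrative only;
# the real parsing lives inside pg_dump.
targets="dir1:dir2:dir3"

old_ifs=$IFS
IFS=:
set -- $targets       # positional parameters are now the directories
IFS=$old_ifs

i=0
for obj in table_a table_b table_c table_d; do
    n=$((i % $# + 1))          # pick the next directory in rotation
    eval "dir=\${$n}"
    echo "$obj -> $dir"
    i=$((i + 1))
done
```

Whatever separator wins, the idea is just that each worker writes its share of the data files into its assigned directory.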

The dump would always start with the largest objects, determined by looking
at the relpages column of pg_class, which should give a good estimate. The
order of the objects to restore is determined by the dependencies among the
objects (this is already used in the parallel restore of the custom archive
type).
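The largest-first scheduling can be pictured as a simple descending sort on (relname, relpages) pairs; the patch takes the numbers from pg_class.relpages, while the table names and page counts below are invented for illustration:

```shell
#!/bin/sh
# Largest-objects-first ordering, as the patch derives it from
# pg_class.relpages (roughly: SELECT relname, relpages FROM pg_class).
# The names and page counts here are made up.
cat <<'EOF' > /tmp/relpages.txt
orders 120000
customers 4500
audit_log 980000
lookup 12
EOF

# Dump order: descending by relpages, so the biggest table starts first.
sort -k2,2 -rn /tmp/relpages.txt | awk '{print $1}'
```

Starting the biggest tables first keeps a worker from being left alone with one huge table at the end of the run.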

The file test.sh includes some example commands that I have run here as a
kind of regression test; it should give you an impression of how to call it
from the command line.

One thing that is currently missing is proper support for Windows; this is
the next thing that I will be working on. Also, this version still prints
quite a lot of debug information about what the processes are doing, so
don't try to pipe the pg_dump output anywhere (even when not run in
parallel); it will probably just not work...

The missing part that would make parallel pg_dump work with no strings
attached is snapshot synchronization. As long as there are no synchronized
snapshots, you need to stop writing to your database before starting the
parallel pg_dump. However, it turns out that most often when you are
especially concerned about a fast dump, you have shut down your applications
anyway (which is the reason why you are so concerned about speed in the
first place). These cases are typically database migrations from one
host/platform to another, or database upgrades without pg_migrator.
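For readers wondering what snapshot synchronization would look like in practice: later PostgreSQL releases (9.2) added pg_export_snapshot(), which lets several sessions read from exactly the same snapshot. A sketch of the idea as a SQL session (the snapshot identifier shown is made up):

```
-- Master connection: open a transaction and export its snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();
-- returns an identifier, e.g. '00000003-0000001B-1' (made-up value)

-- Each worker connection: adopt that snapshot before reading any data.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
-- All connections now see the same consistent view of the database.
```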

Joachim

Attachments:

pg_dump-parallel.diff (text/x-patch; charset=US-ASCII) +2202 -874
#2 Joachim Wieland
joe@mcknight.de
In reply to: Joachim Wieland (#1)
Re: WIP patch for parallel pg_dump

On Sun, Nov 14, 2010 at 6:52 PM, Joachim Wieland <joe@mcknight.de> wrote:

You would run a regular parallel dump with

$ pg_dump -j 4 -Fd -f out.dir dbname

So this is an updated series of patches for my parallel pg_dump WIP
patch. Most importantly it now runs on Windows once you get it to
compile there (I have added the new files to the respective project of
Mkvcbuild.pm but I wondered why the other archive formats do not need
to be defined in that file...).

So far nobody has volunteered to review this patch. It would be great
if people could at least check it out, run it and let me know if it
works and if they have any comments.

I have put all four patches in a tar archive, the patches must be
applied sequentially:

1. pg_dump_compression-refactor.diff
2. pg_dump_directory.diff
3. pg_dump_directory_parallel.diff
4. pg_dump_directory_parallel_lzf.diff
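Assuming the patches sit next to a postgresql source tree, the required order can be captured in a small script (the tree location is hypothetical, and the patch commands are echoed rather than executed here):

```shell
#!/bin/sh
# The four patches must be applied sequentially, in this order,
# on top of a postgresql source tree.
set -- pg_dump_compression-refactor.diff \
       pg_dump_directory.diff \
       pg_dump_directory_parallel.diff \
       pg_dump_directory_parallel_lzf.diff
for p in "$@"; do
    echo "patch -p1 < ../$p"
done
```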

The compression-refactor patch does not include Heikki's latest changes yet.

And the last of the four patches adds LZF compression for whoever
wants to try that out. You need to link against an already installed
liblzf and call it with --compress-lzf.

Joachim

Attachments:

pg_dump_parallel.tar.gz (application/x-gzip) +0 -1
#3 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Joachim Wieland (#2)
Re: WIP patch for parallel pg_dump

On 02.12.2010 07:39, Joachim Wieland wrote:

On Sun, Nov 14, 2010 at 6:52 PM, Joachim Wieland <joe@mcknight.de> wrote:

You would run a regular parallel dump with

$ pg_dump -j 4 -Fd -f out.dir dbname

So this is an updated series of patches for my parallel pg_dump WIP
patch. Most importantly it now runs on Windows once you get it to
compile there (I have added the new files to the respective project of
Mkvcbuild.pm but I wondered why the other archive formats do not need
to be defined in that file...).

So far nobody has volunteered to review this patch. It would be great
if people could at least check it out, run it and let me know if it
works and if they have any comments.

That's a big patch..

I don't see the point of the sort-by-relpages code. The order the
objects are dumped should be irrelevant, as long as you obey the
restrictions dictated by dependencies. Or is it only needed for the
multiple-target-dirs feature? Frankly I don't see the point of that, so
it would be good to cull it out at least in this first stage.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

#4 Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Heikki Linnakangas (#3)
Re: WIP patch for parallel pg_dump

Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:

I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the multiple-target-dirs
feature? Frankly I don't see the point of that, so it would be good to cull
it out at least in this first stage.

From the talk at CHAR(10), and provided memory serves, it's an optimisation
so that you're dumping the largest file in one process and all the little
files in other processes. In lots of cases the total pg_dump duration is
then reduced to about the time to dump the biggest files.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#5 Joachim Wieland
joe@mcknight.de
In reply to: Heikki Linnakangas (#3)
Re: WIP patch for parallel pg_dump

On Thu, Dec 2, 2010 at 6:19 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:

I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the multiple-target-dirs
feature? Frankly I don't see the point of that, so it would be good to cull
it out at least in this first stage.

A guy called Dimitri Fontaine actually proposed the
several-directories feature here and other people liked the idea.

http://archives.postgresql.org/pgsql-hackers/2008-02/msg01061.php :-)

The code doesn't change much with or without it, and if people are no
longer in favour of it, I have no problem with taking it out.

As Dimitri has already pointed out, the relpage sorting thing is there
to start with the largest table(s) first.

Joachim

#6 Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Joachim Wieland (#5)
Re: WIP patch for parallel pg_dump

Joachim Wieland <joe@mcknight.de> writes:

A guy called Dimitri Fontaine actually proposed the
several-directories feature here and other people liked the idea.

Hehe :)

Reading that now, it could be that I didn't know at the time that, given
a powerful enough disk subsystem, there's no way to saturate it with one
CPU. So the use case of parallel dump into a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#7 Josh Berkus
josh@agliodbs.com
In reply to: Dimitri Fontaine (#6)
Re: WIP patch for parallel pg_dump

On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:

So the use case of parallel dump into a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.

I think it will complicate this feature unnecessarily for 9.1.
Personally, I need this patch so much I'm thinking of backporting it.
However, having all the data go to one directory/mount wouldn't trouble
me at all.

Now, if only I could think of some way to write a parallel dump to a set
of pipes, I'd be in heaven.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

#8 Andrew Dunstan
andrew@dunslane.net
In reply to: Josh Berkus (#7)
Re: WIP patch for parallel pg_dump

On 12/02/2010 12:56 PM, Josh Berkus wrote:

On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:

So the use case of parallel dump into a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.

I think it will complicate this feature unnecessarily for 9.1.
Personally, I need this patch so much I'm thinking of backporting it.
However, having all the data go to one directory/mount wouldn't
trouble me at all.

Now, if only I could think of some way to write a parallel dump to a
set of pipes, I'd be in heaven.

The only way I can see that working sanely would be to have a program
gathering stuff at the other end of the pipes, and ensuring it was all
coherent. That would be a huge growth in scope for this, and I seriously
doubt it's worth it.

cheers

andrew

#9 Josh Berkus
josh@agliodbs.com
In reply to: Andrew Dunstan (#8)
Re: WIP patch for parallel pg_dump

Now, if only I could think of some way to write a parallel dump to a
set of pipes, I'd be in heaven.

The only way I can see that working sanely would be to have a program
gathering stuff at the other end of the pipes, and ensuring it was all
coherent. That would be a huge growth in scope for this, and I seriously
doubt it's worth it.

Oh, no question. And there are workarounds ... sshfs, for example. I'm
just thinking of the ad-hoc parallel backup I'm running today, which
relies heavily on pipes.
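Such an ad-hoc piped parallel backup might be sketched as below; the database and table names are invented, and the commands are only echoed since they need a live server. Note that each connection here gets its own snapshot, which is exactly the consistency concern discussed elsewhere in this thread:

```shell
#!/bin/sh
# Ad-hoc parallel dump through pipes: one pg_dump per table, each
# compressed and written concurrently. Names are hypothetical;
# drop the 'echo' (or pipe through sh) to actually run the commands.
db=mydb
for t in orders customers audit_log; do
    echo "pg_dump -t $t $db | gzip > $t.sql.gz &"
done
echo "wait"
```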

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

#10 Joachim Wieland
joe@mcknight.de
In reply to: Josh Berkus (#7)
Re: WIP patch for parallel pg_dump

On Thu, Dec 2, 2010 at 12:56 PM, Josh Berkus <josh@agliodbs.com> wrote:

Now, if only I could think of some way to write a parallel dump to a set of
pipes, I'd be in heaven.

What exactly are you trying to accomplish with the pipes?

Joachim

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#3)
Re: WIP patch for parallel pg_dump

Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:

That's a big patch..

Not nearly big enough :-(

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

regards, tom lane

#12 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#11)
Re: WIP patch for parallel pg_dump

On 12/02/2010 05:01 PM, Tom Lane wrote:

Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:

That's a big patch..

Not nearly big enough :-(

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

cheers

andrew

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#12)
Re: WIP patch for parallel pg_dump

Andrew Dunstan <andrew@dunslane.net> writes:

On 12/02/2010 05:01 PM, Tom Lane wrote:

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

(I'm not actually convinced that snapshot cloning is the only problem
here; locking could be an issue too, if there are concurrent processes
trying to take locks that will conflict with pg_dump's. But the
snapshot issue is definitely a showstopper.)

regards, tom lane

#14 Bruce Momjian
bruce@momjian.us
In reply to: Dimitri Fontaine (#4)
Re: WIP patch for parallel pg_dump

Dimitri Fontaine wrote:

Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:

I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the multiple-target-dirs
feature? Frankly I don't see the point of that, so it would be good to cull
it out at least in this first stage.

From the talk at CHAR(10), and provided memory serves, it's an optimisation
so that you're dumping the largest file in one process and all the little
files in other processes. In lots of cases the total pg_dump duration is
then reduced to about the time to dump the biggest files.

Seems there should be a comment in the code explaining why this is being
done.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#15 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#13)
Re: WIP patch for parallel pg_dump

On 12/02/2010 05:32 PM, Tom Lane wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

On 12/02/2010 05:01 PM, Tom Lane wrote:

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

Yes, I agree with that.

(I'm not actually convinced that snapshot cloning is the only problem
here; locking could be an issue too, if there are concurrent processes
trying to take locks that will conflict with pg_dump's. But the
snapshot issue is definitely a showstopper.)

Why is that more an issue with parallel pg_dump?

cheers

andrew

#16 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#13)
Re: WIP patch for parallel pg_dump

On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

On 12/02/2010 05:01 PM, Tom Lane wrote:

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables.  I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

Yes, by all means let's allow the perfect to be the enemy of the good.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#17 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#16)
Re: WIP patch for parallel pg_dump

On 12/02/2010 07:13 PM, Robert Haas wrote:

On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Andrew Dunstan <andrew@dunslane.net> writes:

On 12/02/2010 05:01 PM, Tom Lane wrote:

In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

Yes, by all means let's allow the perfect to be the enemy of the good.

That seems like a bit of an easy shot. Requiring that parallel pg_dump
produce a dump that is as consistent as non-parallel pg_dump currently
produces isn't unreasonable. It's not stopping us moving forward, it's
just not wanting to go backwards.

And it shouldn't be terribly hard. IIRC Joachim has already done some
work on it.

cheers

andrew

#18 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#17)
Re: WIP patch for parallel pg_dump

On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan <andrew@dunslane.net> wrote:

In the past, proposals for this have always been rejected on the
grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables.  I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

Yes, by all means let's allow the perfect to be the enemy of the good.

That seems like a bit of an easy shot. Requiring that parallel pg_dump
produce a dump that is as consistent as non-parallel pg_dump currently
produces isn't unreasonable. It's not stopping us moving forward, it's just
not wanting to go backwards.

I certainly agree that would be nice. But if Joachim thought the
patch were useless without that, perhaps he wouldn't have bothered
writing it at this point. In fact, he doesn't think that, and he
mentioned the use cases he sees in his original post. But even
supposing you wouldn't personally find this useful in those
situations, how can you possibly say that HE wouldn't find it useful
in those situations? I understand that people sometimes show up here
and ask for ridiculous things, but I don't think we should be too
quick to attribute ridiculousness to regular contributors.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#19 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#18)
Re: WIP patch for parallel pg_dump

On 12/02/2010 07:48 PM, Robert Haas wrote:

On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan <andrew@dunslane.net> wrote:

In the past, proposals for this have always been rejected on the
grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration can be allowed to go by the wayside now.

Well, snapshot cloning should allow that objection to be overcome, no?

Possibly, but we need to see that patch first not second.

Yes, by all means let's allow the perfect to be the enemy of the good.

That seems like a bit of an easy shot. Requiring that parallel pg_dump
produce a dump that is as consistent as non-parallel pg_dump currently
produces isn't unreasonable. It's not stopping us moving forward, it's just
not wanting to go backwards.

I certainly agree that would be nice. But if Joachim thought the
patch were useless without that, perhaps he wouldn't have bothered
writing it at this point. In fact, he doesn't think that, and he
mentioned the use cases he sees in his original post. But even
supposing you wouldn't personally find this useful in those
situations, how can you possibly say that HE wouldn't find it useful
in those situations? I understand that people sometimes show up here
and ask for ridiculous things, but I don't think we should be too
quick to attribute ridiculousness to regular contributors.

Umm, nobody has attributed ridiculousness to anyone. Please don't put
words in my mouth. But I think this is a perfectly reasonable discussion
to have. Nobody gets to come along and get the features they want
without some sort of consensus, not me, not you, not Joachim, not Tom.

cheers

andrew

#20 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#19)
Re: WIP patch for parallel pg_dump

On Dec 2, 2010, at 8:11 PM, Andrew Dunstan <andrew@dunslane.net> wrote:

Umm, nobody has attributed ridiculousness to anyone. Please don't put words in my mouth. But I think this is a perfectly reasonable discussion to have. Nobody gets to come along and get the features they want without some sort of consensus, not me, not you, not Joachim, not Tom.

I'm not disputing that we COULD reject the patch. I AM disputing that we've made a cogent argument for doing so.

...Robert

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#15)
#22 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#21)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#19)
#24 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#22)
#25 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#24)
#26 Joachim Wieland
joe@mcknight.de
In reply to: Tom Lane (#23)
#27 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#23)
#28 Andrew Dunstan
andrew@dunslane.net
In reply to: Joachim Wieland (#26)
#29 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#28)
#30 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#29)
#31 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#30)
#32 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#31)
#33 Andrew Dunstan
andrew@dunslane.net
In reply to: Alvaro Herrera (#32)
#34 Greg Smith
gsmith@gregsmith.com
In reply to: Joachim Wieland (#26)
#35 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Smith (#34)
#36 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#35)
#37 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#36)
#38 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#37)
#39 Joachim Wieland
joe@mcknight.de
In reply to: Robert Haas (#38)
#40 Koichi Suzuki
koichi.szk@gmail.com
In reply to: Joachim Wieland (#39)
#41 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Robert Haas (#36)
#42 Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#41)
#43 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Robert Haas (#42)
#44 Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#43)
#45 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Robert Haas (#44)
#46 Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#45)
#47 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#46)
#48 Robert Haas
robertmhaas@gmail.com
In reply to: Andrew Dunstan (#47)
#49 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#44)
#50 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#49)
#51 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#49)
#52 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#51)
#53 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#37)
#54 Andrew Dunstan
andrew@dunslane.net
In reply to: Tom Lane (#52)
#55 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#52)
#56 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#55)
#57 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#56)
#58 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kevin Grittner (#57)
#59 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#58)
#60 marcin mank
marcin.mank@gmail.com
In reply to: Tom Lane (#35)
#61 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: marcin mank (#60)
#62 Tom Lane
tgl@sss.pgh.pa.us
In reply to: marcin mank (#60)
#63 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#62)
#64 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#63)
#65 Koichi Suzuki
koichi.szk@gmail.com
In reply to: Tom Lane (#62)
#66 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Tom Lane (#64)
#67 Tatsuo Ishii
t-ishii@sra.co.jp
In reply to: Stefan Kaltenbrunner (#66)
#68 Koichi Suzuki
koichi.szk@gmail.com
In reply to: Stefan Kaltenbrunner (#66)
#69 Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Koichi Suzuki (#68)
#70 Robert Haas
robertmhaas@gmail.com
In reply to: Koichi Suzuki (#68)
#71 Koichi Suzuki
koichi.szk@gmail.com
In reply to: Robert Haas (#70)
#72 Robert Haas
robertmhaas@gmail.com
In reply to: Koichi Suzuki (#71)
#73 Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#27)
#74 Joshua D. Drake
jd@commandprompt.com
In reply to: Bruce Momjian (#73)
#75 Aidan Van Dyk
aidan@highrise.ca
In reply to: Joshua D. Drake (#74)
#76 Andrew Dunstan
andrew@dunslane.net
In reply to: Aidan Van Dyk (#75)
#77 Joshua D. Drake
jd@commandprompt.com
In reply to: Aidan Van Dyk (#75)
#78 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#73)
#79 David Fetter
david@fetter.org
In reply to: Andrew Dunstan (#76)
#80 Gurjeet Singh
singh.gurjeet@gmail.com
In reply to: Tom Lane (#64)