Re: Postgres performance comments from a MySQL user

Started by Darko Prenosil · almost 23 years ago · 46 messages · general
#1 Darko Prenosil
darko.prenosil@finteh.hr

----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Bruce Momjian" <pgman@candle.pha.pa.us>
Cc: "Kaarel" <kaarel@future.ee>; "Justin Clift" <justin@postgresql.org>;
"Jay O'Connor" <joconnor@cybermesa.com>; <pgsql-general@postgresql.org>
Sent: Wednesday, June 11, 2003 9:37 PM
Subject: Re: [GENERAL] Postgres performance comments from a MySQL user

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Tom Lane wrote:

Having said that, though, I do wonder whether we couldn't bump them up a
little. Are there still any live platforms where the default SHMMAX is
less than 4 MB? (There was discussion of this a month or two back on
pghackers, but no conclusion.)

Could we have per-platform defaults?

Hmm, that might be a nice solution. Could we arrange for the template
file to supply defaults for shared_buffers and so forth? They'd have
to be inserted into postgresql.conf during installation or initdb, but
we already do some editing of postgresql.conf during initdb ...

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

There is a project on GBorg (or somewhere else), written as a daemon that
monitors the server and calculates the best configuration based not only on
the platform, but also on server load. (Damn, I can't remember the project
name!) I think the monitoring is done using pg_stat.
If the group thinks this is an important issue, we could try to do something
like this, or even include that project in contrib? I know I downloaded the
source, but never tried whether it works.
Maybe I can try it and let you know? Even better if the author is reading
this list!

Regards !

#2 Jonathan Bartlett
johnnyb@eskimo.com
In reply to: Darko Prenosil (#1)

If we were to start again
from scratch now, I'd still use InnoDB over postgres unless the
performance picked up with postgres recently.

Keep in mind our application is very write-heavy so your numbers may
be different. Does postgres still keep the old row versions in the
primary-key B-Tree? If it does I doubt performance improved much for
write-heavy apps, that was a very poor design decision by them. InnoDB
takes the Oracle route of moving old row versions to a separate
on-disk data structure.

1) A write-heavy app will almost certainly be faster with Postgres.

2) Postgres has had many speed improvements over the last few years

3) If your point was to move to a relational database, then you should
choose Postgres. MySQL, although it's SQL, hardly qualifies as relational

For example, MySQL does not have:

* Views
* Triggers
* Stored Procedures
* Subselects (although they may have added this one)

And then when you want to do real transactions, MySQL's speed slows way
down.

If you are moving _to_ a database system, it seems pointless to stop
halfway and go with MySQL, when you can go the full way with Postgres.

Jon


Does what he says make sense? If so, has the situation changed? Basically,
I need something intelligent to say to my boss to either counter or
mitigate his perception.

Thanks

Take care,
Jay


#3 Justin Clift
justin@postgresql.org
In reply to: Jonathan Bartlett (#2)

Hi Jay,

From his comments about PostgreSQL's performance, I'd very strongly
suspect he has only tested it with untuned (default) memory
settings. The default memory settings in all of the previous releases
of PostgreSQL have been extremely conservative to maintain the widest
compatibility. However, they _significantly_ reduce the throughput and
transaction speed of PostgreSQL.

The _very first_ thing to do with any PostgreSQL installation is bump up
the memory settings (at least "sort_mem" and "shared_buffers") in the
postgresql.conf and restart it.
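For instance, a minimally tuned postgresql.conf might contain something like this (illustrative values only, following the "about 4000" rule of thumb; units are as of 7.x-era PostgreSQL):

```
# postgresql.conf -- illustrative starting point, not an official recommendation
shared_buffers = 4000    # counted in 8 kB disk pages (~31 MB of shared memory)
sort_mem = 4000          # kB available per sort before spilling to disk
```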

Tell him to test it with decent settings (try about 4000 for each as an
initial start), and he'll find that a decently tuned PostgreSQL matches
the speed of a MySQL installation with any table type. And in
write-intensive applications, the MySQL server will always fall behind,
_especially_ as the number of simultaneous clients rises. MySQL falls
behind, as does Oracle 8i a bit further on (not sure about 9i and 9iR2),
while PostgreSQL keeps performing at pretty much the same throughput
for far higher numbers of client connections.

And _that_, is for real.

Regards and best wishes,

Justin Clift

Jay O'Connor wrote:


Some back-story: I'm in the process of converting our internal file-based
data storage to an RDBMS. After looking a bit at PostgreSQL and MySQL, I
chose PostgreSQL. My boss has heard of MySQL and has not heard of
PostgreSQL, and every now and then he makes allusions that we should be
using MySQL.

One comment he got from the architect of another web site is as follows

If we were to start again
from scratch now, I'd still use InnoDB over postgres unless the
performance picked up with postgres recently.

Keep in mind our application is very write-heavy so your numbers may
be different. Does postgres still keep the old row versions in the
primary-key B-Tree? If it does I doubt performance improved much for
write-heavy apps, that was a very poor design decision by them. InnoDB
takes the Oracle route of moving old row versions to a separate
on-disk data structure.

Does what he says make sense? If so, has the situation changed? Basically,
I need something intelligent to say to my boss to either counter or
mitigate his perception.

Thanks

Take care,
Jay


#4 nolan
nolan@celery.tssi.com
In reply to: Jonathan Bartlett (#2)

3) If your point was to move to a relational database, then you should
choose Postgres. MySQL, although it's SQL, hardly qualifies as relational

For example, MySQL does not have:

* Views
* Triggers
* Stored Procedures
* Subselects (although they may have added this one)

And then when you want to do real transactions, MySQL's speed slows way
down.

If you are moving _to_ a database system, it seems pointless to stop
halfway and go with MySQL, when you can go the full way with Postgres.

According to the MySQL website, subqueries are in 4.1, which is still
in alpha release.

I can speak with some first-hand experience about both databases.

A project I became involved with this spring was originally begun using
MySQL, but my recommendation as the incoming DB architect/DBA was that
we switch to pgsql, in large measure because of the features listed above.

I also have Oracle 9i, MySQL and pgsql running on the same Redhat 8
platform with similar datasets, so I can address some performance issues,
and when I get past a deadline this month I may be able to run some
three-way performance tests on some moderately large tables (10 million
rows).

Based on some ad-hoc work I've been doing with that data, it appears to me
that Oracle beats pgsql in most update situations, though perhaps that
will change with 7.4.
--
Mike Nolan

#5 Kaarel
kaarel@future.ee
In reply to: Justin Clift (#3)

The _very first_ thing to do with any PostgreSQL installation is bump up
the memory settings (at least "sort_mem" and "shared_buffers") in the
postgresql.conf and restart it.

This makes me wonder: why is the default configuration so conservative?
If this is the very first thing to do with any PostgreSQL installation, then
wouldn't it make more sense to have higher default memory settings, and let
those who then have problems lower them?

#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kaarel (#5)

Kaarel <kaarel@future.ee> writes:

This makes me wonder: why is the default configuration so conservative?

There are still a lot of platforms where desirable settings will cause
the database to fail to start, because the default kernel limits on
shared memory are still set for 1980s-vintage hardware.

We have had a policy for a long time that we'd rather the out-of-the-box
settings be guaranteed to start than that they be optimal for
performance. No one is going to get as far as testing performance
if the system won't start for them.

Having said that, though, I do wonder whether we couldn't bump them up a
little. Are there still any live platforms where the default SHMMAX is
less than 4 MB? (There was discussion of this a month or two back on
pghackers, but no conclusion.)
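For anyone wanting to answer that question on their own box, a quick sketch (the /proc path is Linux-specific; other platforms expose SHMMAX through sysctl or kernel configuration files):

```shell
# Check whether the kernel's SHMMAX is at least 4 MB (Linux-specific path).
required=$((4 * 1024 * 1024))                        # 4 MB in bytes
shmmax=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
if [ "$shmmax" -ge "$required" ]; then
    echo "SHMMAX is $shmmax bytes -- enough for a larger default"
else
    echo "SHMMAX is $shmmax bytes -- below the proposed 4 MB floor"
fi
```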

regards, tom lane

#7 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#6)

Tom Lane wrote:

Kaarel <kaarel@future.ee> writes:

This makes me wonder: why is the default configuration so conservative?

There are still a lot of platforms where desirable settings will cause
the database to fail to start, because the default kernel limits on
shared memory are still set for 1980s-vintage hardware.

We have had a policy for a long time that we'd rather the out-of-the-box
settings be guaranteed to start than that they be optimal for
performance. No one is going to get as far as testing performance
if the system won't start for them.

Having said that, though, I do wonder whether we couldn't bump them up a
little. Are there still any live platforms where the default SHMMAX is
less than 4 MB? (There was discussion of this a month or two back on
pghackers, but no conclusion.)

Could we have per-platform defaults?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
#8 Dann Corbit
DCorbit@connx.com
In reply to: Bruce Momjian (#7)

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, June 11, 2003 12:28 PM
To: Kaarel
Cc: Justin Clift; Jay O'Connor; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres performance comments from a
MySQL user

Kaarel <kaarel@future.ee> writes:

This makes me wonder: why is the default configuration so conservative?

There are still a lot of platforms where desirable settings
will cause the database to fail to start, because the default
kernel limits on shared memory are still set for
1980s-vintage hardware.

We have had a policy for a long time that we'd rather the
out-of-the-box settings be guaranteed to start than that they
be optimal for performance. No one is going to get as far as
testing performance if the system won't start for them.

Having said that, though, I do wonder whether we couldn't
bump them up a little. Are there still any live platforms
where the default SHMMAX is less than 4 MB? (There was
discussion of this a month or two back on pghackers, but no
conclusion.)

I think this would be very, very nice:
Config_tool.exe runs and examines the operating system, available memory,
disk, CPU speed, etc. (whatever it can figure out).
Then it makes good guesses for what PostgreSQL parameters to use and
reconfigures PostgreSQL.

If it was part of the installation, then even better.
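A minimal sketch of that idea, assuming invented sizing heuristics (nothing here is project policy; the percentages and caps are purely illustrative):

```python
# Sketch of a "config tool": inspect the host's RAM and print suggested
# postgresql.conf values. All heuristics here are illustrative guesses.
import os

def suggest_settings(total_ram_bytes: int) -> dict:
    """Suggest tuned-but-safe settings for a given amount of RAM."""
    page = 8192                                   # shared_buffers counts 8 kB pages
    # Guess: ~10% of RAM for shared_buffers, capped so SHMMAX is less likely to bite.
    shared_buffers = min(total_ram_bytes // 10 // page, 65536)
    # Guess: ~1% of RAM per sort, in kB, with 1024 kB as a floor.
    sort_mem = max(1024, total_ram_bytes // 100 // 1024)
    return {"shared_buffers": shared_buffers, "sort_mem": sort_mem}

if __name__ == "__main__":
    # SC_PHYS_PAGES is POSIX-ish but not universal; fine for a sketch.
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    for name, value in suggest_settings(ram).items():
        print(f"{name} = {value}")
```

A real tool would also look at disk and CPU, as suggested above, and edit postgresql.conf in place rather than just printing.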

#9 Tony Grant
tony@tgds.net
In reply to: Tom Lane (#6)

On Wed, 2003-06-11 at 21:28, Tom Lane wrote:

There are still a lot of platforms where desirable settings will cause
the database to fail to start, because the default kernel limits on
shared memory are still set for 1980s-vintage hardware.

We have had a policy for a long time that we'd rather the out-of-the-box
settings be guaranteed to start than that they be optimal for
performance. No one is going to get as far as testing performance
if the system won't start for them.

How about providing several .conf files with an indication of what
hardware profiles they correspond to?

For example a low-end, a mid-range, and a high-end machine, plus an SMP
.conf file.

Cheers

Tony Grant
--
www.tgds.net Library management software toolkit,
redhat linux on Sony Vaio C1XD,
Dreamweaver MX with Tomcat and PostgreSQL

#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#7)

Bruce Momjian <pgman@candle.pha.pa.us> writes:

Tom Lane wrote:

Having said that, though, I do wonder whether we couldn't bump them up a
little. Are there still any live platforms where the default SHMMAX is
less than 4 MB? (There was discussion of this a month or two back on
pghackers, but no conclusion.)

Could we have per-platform defaults?

Hmm, that might be a nice solution. Could we arrange for the template
file to supply defaults for shared_buffers and so forth? They'd have
to be inserted into postgresql.conf during installation or initdb, but
we already do some editing of postgresql.conf during initdb ...
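As a rough illustration of the template-file idea (file name and variable names are hypothetical, invented here, not actual PostgreSQL source):

```
# src/template/linux -- hypothetical per-platform default fragment
DEFAULT_SHARED_BUFFERS=4000
DEFAULT_SORT_MEM=4000
# initdb would substitute these values into postgresql.conf at cluster creation
```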

regards, tom lane

#11 Kaarel
kaarel@future.ee
In reply to: Tom Lane (#6)

We have had a policy for a long time that we'd rather the out-of-the-box
settings be guaranteed to start than that they be optimal for
performance. No one is going to get as far as testing performance
if the system won't start for them.

I get the point here and I respect that. I'm just afraid that as long as
the default settings reduce performance, all benchmarks using the default
install will show PostgreSQL in a slow light.

#12 Steve Lane
slane@moyergroup.com
In reply to: Justin Clift (#3)

On 6/11/03 10:39 AM, "Justin Clift" <justin@postgresql.org> wrote:

Tell him to test it with decent settings (try about 4000 for each as an
initial start), and he'll find that a decently tuned PostgreSQL matches
the speed of a MySQL installation with any table type. And in
write-intensive applications, the MySQL server will always fall behind.
_Especially_ as the number of simultaneous clients rises. MySQL falls
behind, as does Oracle 8i a bit further on (not sure about 9i and 9iR2),
and PostgreSQL keeps on performing at pretty much the same throughput
for far higher numbers of client connections.

And _that_, is for real.

Are there *any* recent benchmarks that show all this? The most recent ones I
can find are a couple of years old. Now, eWeek did run a database benchmark
some time in the last year, in which they compared a variety of commercial
engines and an OS engine. We can guess which was the open source db. MySQL
of course.

If anyone in the advocacy area wants to write to the eWeek author who did
the report, Timothy Dyck, it'd be good to push for a benchmark that includes
postgres. Of course, since we're all involved in advocacy in some way, I
could do it myself...

Anyway, original question ... Any recent benchmarks that show how postgres
performs against others, especially under load?

-- sgl

=======================================================
Steve Lane

Vice President
The Moyer Group
14 North Peoria St Suite 2H
Chicago, IL 60607

Voice: (312) 433-2421 Email: slane@moyergroup.com
Fax: (312) 850-3930 Web: http://www.moyergroup.com
=======================================================

#13 Joseph Shraibman
jks@selectacast.net
In reply to: Justin Clift (#3)

Justin Clift wrote:

The _very first_ thing to do with any PostgreSQL installation is bump up
the memory settings (at least "sort_mem" and "shared_buffers") in the
postgresql.conf and restart it.

Tell him to test it with decent settings (try about 4000 for each as an
initial start),

Perhaps that should be in the message that 'make install' echoes and in comments in the
conf file itself?

#14 Dennis Gearon
gearond@cvc.net
In reply to: Steve Lane (#12)

I could really use some links to these performance comparisons. Oracle
performance statistics are problematic, though, as all contracts to use
Oracle require the user to NEVER share performance information with the
public. Someone should sue them for that, or figure out a way to get the
market to be pissed off about it.

Anyway, I'm going to write a paper for my masters on using PostgreSQL vs.
Oracle vs. SQL Server vs. DB2. Any and all sources of performance, usage,
configuration information, etc. for these would be most welcome.

Steve Lane wrote:


On 6/11/03 10:39 AM, "Justin Clift" <justin@postgresql.org> wrote:

Tell him to test it with decent settings (try about 4000 for each as an
initial start), and he'll find that a decently tuned PostgreSQL matches
the speed of a MySQL installation with any table type. And in
write-intensive applications, the MySQL server will always fall behind.
_Especially_ as the number of simultaneous clients rises. MySQL falls
behind, as does Oracle 8i a bit further on (not sure about 9i and 9iR2),
and PostgreSQL keeps on performing at pretty much the same throughput
for far higher numbers of client connections.

And _that_, is for real.

Are there *any* recent benchmarks that show all this? The most recent ones I
can find are a couple of years old. Now, eWeek did run a database benchmark
some time in the last year, in which they compared a variety of commercial
engines and an OS engine. We can guess which was the open source db. MySQL
of course.

If anyone in the advocacy area wants to write to the eWeek author who did
the report, Timothy Dyck, it'd be good to push for a benchmark that includes
postgres. Of course, since we're all involved in advocacy in some way, I
could do it myself...

Anyway, original question ... Any recent benchmarks that show how postgres
performs against others, especially under load?

-- sgl

=======================================================
Steve Lane

Vice President
The Moyer Group
14 North Peoria St Suite 2H
Chicago, IL 60607

Voice: (312) 433-2421 Email: slane@moyergroup.com
Fax: (312) 850-3930 Web: http://www.moyergroup.com
=======================================================


#15 Matthew Nuzum
cobalt@bearfruit.org
In reply to: Tom Lane (#6)

Some databases (MySQL I think) ship several example configurations sized for
different installations. The default is very safe, but it's simply a matter
of choosing between "small.conf", "medium.conf", "big.conf", "huge.conf" and
copying it over the standard "tiny.conf" file.

Each config file contains comments near the top of the file that specify
suggested hardware requirements for using it.

Providing a similar series of config files for postgres would probably cut
the traffic to the performance mailing list significantly and end the need
for discussions such as this. (not that I mind the discussion)
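A sketch of how that could look for PostgreSQL (the file names and layout below are made up for illustration; PostgreSQL does not actually ship these):

```shell
# Pick a sample config profile and install it as postgresql.conf.
profile=${1:-medium}                        # small | medium | big | huge
srcdir=${CONF_SAMPLES:-./conf-samples}      # where the sample files would live
mkdir -p "$srcdir"
# Seed a stand-in sample so the sketch runs end to end (real files would ship).
[ -f "$srcdir/postgresql-$profile.conf" ] ||
    echo "shared_buffers = 4000" > "$srcdir/postgresql-$profile.conf"
cp "$srcdir/postgresql-$profile.conf" ./postgresql.conf
echo "installed the '$profile' profile as ./postgresql.conf"
```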

--
Matthew Nuzum
www.bearfruit.org
cobalt@bearfruit.org


-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, June 11, 2003 3:28 PM
To: Kaarel
Cc: Justin Clift; Jay O'Connor; pgsql-general@postgresql.org
Subject: Re: Postgres performance comments from a MySQL user

Kaarel <kaarel@future.ee> writes:

This makes me wonder: why is the default configuration so conservative?

There are still a lot of platforms where desirable settings will cause
the database to fail to start, because the default kernel limits on
shared memory are still set for 1980s-vintage hardware.

We have had a policy for a long time that we'd rather the out-of-the-box
settings be guaranteed to start than that they be optimal for
performance. No one is going to get as far as testing performance
if the system won't start for them.

Having said that, though, I do wonder whether we couldn't bump them up a
little. Are there still any live platforms where the default SHMMAX is
less than 4 MB? (There was discussion of this a month or two back on
pghackers, but no conclusion.)

regards, tom lane

#16 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Jonathan Bartlett (#2)

On Wed 11 Jun 2003 12:29, Jonathan Bartlett wrote:

3) If your point was to move to a relational database, then you should
choose Postgres. MySQL, although it's SQL, hardly qualifies as
relational

MySQL doesn't have relations at all, unless you add the InnoDB module,
which drags performance down.

An example I tried on MySQL without InnoDB was:

CREATE TABLE testing (
id INT,
word VARCHAR(20) REFERENCES other_table("word")
);

(knowing that other_table exists (I previously created it) and has word as
a VARCHAR(20) field).

An error is what I got.

--
Why use just any relational database, if you can use PostgreSQL?
-----------------------------------------------------------------
Martín Marqués | mmarques@unl.edu.ar
Programmer, Administrator, DBA | Centro de Telemática
Universidad Nacional del Litoral
-----------------------------------------------------------------

#17 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Kaarel (#5)

On Wed 11 Jun 2003 16:01, Kaarel wrote:

The _very first_ thing to do with any PostgreSQL installation is bump
up the memory settings (at least "sort_mem" and "shared_buffers") in
the postgresql.conf and restart it.

This makes me wonder: why is the default configuration so conservative?
If this is the very first thing to do with any PostgreSQL installation, then
wouldn't it make more sense to have higher default memory settings, and let
those who then have problems lower them?

You can't expect to have a good DB server if you haven't taken the time to
read the performance tips in the PostgreSQL Administration Guide and gone
through the postgresql.conf file.
The same goes for any server: httpd, MTA, etc. Under a small load a default
Apache or a default Postfix will work OK, but under heavy load you have to
tune them, or the system will fall to pieces.

--
Why use just any relational database, if you can use PostgreSQL?
-----------------------------------------------------------------
Martín Marqués | mmarques@unl.edu.ar
Programmer, Administrator, DBA | Centro de Telemática
Universidad Nacional del Litoral
-----------------------------------------------------------------

#18 Justin Clift
justin@postgresql.org
In reply to: Joseph Shraibman (#13)

Joseph Shraibman wrote:

Justin Clift wrote:

The _very first_ thing to do with any PostgreSQL installation is bump
up the memory settings (at least "sort_mem" and "shared_buffers") in
the postgresql.conf and restart it.

Tell him to test it with decent settings (try about 4000 for each as
an initial start),

Perhaps that should be in the message that 'make install' echoes and in
comments in the conf file itself?

It's probably a good idea to have some mention of this: even though we
should alter the source to higher defaults for our next release, there are
potentially people who would read a message like this, go "wow, didn't know
that", and then tune their existing installations as well.

With Solaris, 4000 for those memory settings is a good place to start, and
it's generally fine to just leave them at that unless there's a definite
reason for performance tuning. No idea about other OSes.

Anyone feel like submitting a patch to alter the default settings to a
higher mark? I don't think it's been done yet, and it'd be a shame to
forget it before the feature freeze of the next release.

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi

#19 Keith C. Perry
netadmin@vcsn.com
In reply to: Dennis Gearon (#14)

For what it's worth, a while ago I was on http://www.tpc.org trying to find
some useful comparisons, or at least a test swing of code and sample data
that I could use in my lab for my own tests.

I don't remember getting that far with it; I simply gave the guy the article
about PG being used to serve .org :)

Maybe there is more information there now...

Quoting Dennis Gearon <gearond@cvc.net>:

I could really use some links to these performance comparisons. Oracle
performance statistics are problematic, though, as all contracts to use
Oracle require the user to NEVER share performance information with the
public. Someone should sue them for that, or figure out a way to get the
market to be pissed off about it.

Anyway, I'm going to write a paper for my masters on using PostgreSQL vs.
Oracle vs. SQL Server vs. DB2. Any and all sources of performance, usage,
configuration information, etc. for these would be most welcome.

Steve Lane wrote:

On 6/11/03 10:39 AM, "Justin Clift" <justin@postgresql.org> wrote:

Tell him to test it with decent settings (try about 4000 for each as an
initial start), and he'll find that a decently tuned PostgreSQL matches
the speed of a MySQL installation with any table type. And in
write-intensive applications, the MySQL server will always fall behind.
_Especially_ as the number of simultaneous clients rises. MySQL falls
behind, as does Oracle 8i a bit further on (not sure about 9i and 9iR2),
and PostgreSQL keeps on performing at pretty much the same throughput
for far higher numbers of client connections.

And _that_, is for real.

Are there *any* recent benchmarks that show all this? The most recent ones I
can find are a couple of years old. Now, eWeek did run a database benchmark
some time in the last year, in which they compared a variety of commercial
engines and an OS engine. We can guess which was the open source db. MySQL
of course.

If anyone in the advocacy area wants to write to the eWeek author who did
the report, Timothy Dyck, it'd be good to push for a benchmark that includes
postgres. Of course, since we're all involved in advocacy in some way, I
could do it myself...

Anyway, original question ... Any recent benchmarks that show how postgres
performs against others, especially under load?

-- sgl

=======================================================
Steve Lane

Vice President
The Moyer Group
14 North Peoria St Suite 2H
Chicago, IL 60607

Voice: (312) 433-2421 Email: slane@moyergroup.com
Fax: (312) 850-3930 Web: http://www.moyergroup.com
=======================================================


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

#20 Ron Johnson
ron.l.johnson@cox.net
In reply to: Martín Marqués (#16)

On Thu, 2003-06-12 at 08:05, Martín Marqués wrote:

On Wed 11 Jun 2003 12:29, Jonathan Bartlett wrote:

3) If your point was to move to a relational database, then you should
choose Postgres. MySQL, although it's SQL, hardly qualifies as
relational

MySQL doesn't have relations at all, unless you add the InnoDB module,
which drags performance down.

An example I tried on MySQL without InnoDB was:

CREATE TABLE testing (
id INT,
word VARCHAR(20) REFERENCES other_table("word")
);

(knowing that other_table exists (I previously created it) and has word as
a VARCHAR(20) field).

An error is what I got.

A table *is* a relation. You seem to be referring to foreign keys.
Maybe MySQL has a different syntax?
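For what it's worth, MySQL of that era did want foreign keys spelled at table level, and only enforced them on InnoDB tables; a sketch of the syntax (untested here, and it assumes other_table has a usable index on word):

```sql
CREATE TABLE testing (
    id INT,
    word VARCHAR(20),
    INDEX (word),                                     -- InnoDB requires an index
    FOREIGN KEY (word) REFERENCES other_table (word)
) TYPE=InnoDB;
```

Column-level REFERENCES clauses, as in the example above, were parsed but silently ignored on non-InnoDB tables.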

-- 
+-----------------------------------------------------------+
| Ron Johnson, Jr.     Home: ron.l.johnson@cox.net          |
| Jefferson, LA  USA   http://members.cox.net/ron.l.johnson |
|                                                           |
| Regarding war zones: "There's nothing sacrosanct about a  |
| hotel with a bunch of journalists in it."                 |
|     Marine Lt. Gen. Bernard E. Trainor (Retired)          |
+-----------------------------------------------------------+
#21 Richard Welty
rwelty@averillpark.net
In reply to: Ron Johnson (#20)
#22 Ian Lawrence Barwick
barwick@gmail.com
In reply to: Ron Johnson (#20)
#23 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Ron Johnson (#20)
#24 Ernest E Vogelsinger
ernest@vogelsinger.at
In reply to: Justin Clift (#18)
#25 Dann Corbit
DCorbit@connx.com
In reply to: Ernest E Vogelsinger (#24)
#26 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Martín Marqués (#17)
#27 Dave Page
dpage@pgadmin.org
In reply to: Jim Nasby (#26)
#28 Martín Marqués
martin@bugs.unl.edu.ar
In reply to: Jim Nasby (#26)
#29 Lincoln Yeoh
lyeoh@pop.jaring.my
In reply to: Matthew Nuzum (#15)
#30 scott.marlowe
scott.marlowe@ihs.com
In reply to: Ernest E Vogelsinger (#24)
#31 Ernest E Vogelsinger
ernest@vogelsinger.at
In reply to: scott.marlowe (#30)
#32 Matthew Nuzum
cobalt@bearfruit.org
In reply to: Lincoln Yeoh (#29)
#33 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Ernest E Vogelsinger (#31)
#34 Arjen van der Meijden
acm@tweakers.net
In reply to: Dave Page (#27)
#35 scott.marlowe
scott.marlowe@ihs.com
In reply to: Ernest E Vogelsinger (#31)
#36 Arjen van der Meijden
acm@tweakers.net
In reply to: scott.marlowe (#35)
#37 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Arjen van der Meijden (#36)
#38 Arjen van der Meijden
acm@tweakers.net
In reply to: Tom Lane (#37)
#39 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Arjen van der Meijden (#38)
#40 Ernest E Vogelsinger
ernest@vogelsinger.at
In reply to: Tom Lane (#39)
#41 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Ernest E Vogelsinger (#40)
#42 Ernest E Vogelsinger
ernest@vogelsinger.at
In reply to: Tom Lane (#41)
#43 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Ernest E Vogelsinger (#42)
#44 Ernest E Vogelsinger
ernest@vogelsinger.at
In reply to: Tom Lane (#43)
#45 Kaarel
kaarel@future.ee
In reply to: Tom Lane (#43)
#46 Paul Thomas
paul@tmsl.demon.co.uk
In reply to: Kaarel (#45)