performance of insert/delete/update

Started by Wei Weng · over 23 years ago · 48 messages · pgsql-hackers
#1 Wei Weng
wweng@kencast.com

There had been a great deal of discussion of how to improve the
performance of select/sorting on this list, what about
insert/delete/update?

Are there any rules of thumb we need to follow? What parameters
should we tweak to whip the horse to go faster?

Thanks

--
Wei Weng
Network Software Engineer
KenCast Inc.

#2 Josh Berkus
josh@agliodbs.com
In reply to: Wei Weng (#1)
Re: performance of insert/delete/update

Wei,

> There had been a great deal of discussion of how to improve the
> performance of select/sorting on this list, what about
> insert/delete/update?
>
> Are there any rules of thumb we need to follow? What parameters
> should we tweak to whip the horse to go faster?

yes, lots of rules. Wanna be more specific? You wondering about
query structure, hardware, memory config, what?

-Josh Berkus

#3 scott.marlowe
scott.marlowe@ihs.com
In reply to: Josh Berkus (#2)
Re: performance of insert/delete/update

On 21 Nov 2002, Wei Weng wrote:

> On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:
>> Wei,
>>
>>> There had been a great deal of discussion of how to improve the
>>> performance of select/sorting on this list, what about
>>> insert/delete/update?
>>>
>>> Are there any rules of thumb we need to follow? What parameters
>>> should we tweak to whip the horse to go faster?
>>
>> yes, lots of rules. Wanna be more specific? You wondering about
>> query structure, hardware, memory config, what?
>
> I am most concerned about the software side, that is query structures
> and postgresql config.

The absolutely most important thing to do to speed up inserts and updates
is to squeeze as many as you can into one transaction. Within reason, of
course. There's no great gain in putting more than a few thousand
together at a time. If your application is only doing one or two updates
in a transaction, it's going to be slower in terms of records written per
second than an application that is updating 100 rows in a transaction.

Reducing triggers and foreign keys on the inserted tables to a minimum
helps.

Inserting into temporary holding tables and then having a regular process
that migrates the data into the main tables is sometimes necessary if
you're putting a lot of smaller inserts into a very large dataset. You can
then use a unioned view to show the two tables as one.

Putting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).

Putting indexes that have to be updated during inserts onto their own
drive(s).

Performing regular vacuums on heavily updated tables.

Also, if your hardware is reliable, you can turn off fsync in
postgresql.conf. That can increase performance by anywhere from 2 to 10
times, depending on your application.
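As an illustrative sketch of the batching advice above (table and column
names here are hypothetical):

```sql
-- Slow: each statement runs in its own transaction, and every
-- commit forces a WAL flush to disk.
INSERT INTO log_entries (msg) VALUES ('event 1');
INSERT INTO log_entries (msg) VALUES ('event 2');

-- Faster: batch the statements, paying the commit cost once.
BEGIN;
INSERT INTO log_entries (msg) VALUES ('event 1');
INSERT INTO log_entries (msg) VALUES ('event 2');
-- ... up to a few thousand more ...
COMMIT;
```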

#4 Wei Weng
wweng@kencast.com
In reply to: Josh Berkus (#2)
Re: performance of insert/delete/update

On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:

> Wei,
>
>> There had been a great deal of discussion of how to improve the
>> performance of select/sorting on this list, what about
>> insert/delete/update?
>>
>> Are there any rules of thumb we need to follow? What parameters
>> should we tweak to whip the horse to go faster?
>
> yes, lots of rules. Wanna be more specific? You wondering about
> query structure, hardware, memory config, what?

I am most concerned about the software side, that is query structures
and postgresql config.

Thanks

--
Wei Weng
Network Software Engineer
KenCast Inc.

#5 Josh Berkus
josh@agliodbs.com
In reply to: scott.marlowe (#3)
Re: performance of insert/delete/update

Scott,

> The absolutely most important thing to do to speed up inserts and updates
> is to squeeze as many as you can into one transaction. Within reason, of
> course. There's no great gain in putting more than a few thousand
> together at a time. If your application is only doing one or two updates
> in a transaction, it's going to be slower in terms of records written per
> second than an application that is updating 100 rows in a transaction.

This only works up to the limit of the memory you have available for
Postgres. If the updates in one transaction exceed your available
memory, you'll see a lot of swaps to disk log that will slow things
down by a factor of 10-50 times.

> Reducing triggers and foreign keys on the inserted tables to a minimum
> helps.

... provided that this will not jeopardize your data integrity. If you
have indispensable triggers in PL/pgSQL, re-writing them in C will make
them, and thus updates on their tables, faster.

Also, for foreign keys, it speeds up inserts and updates on parent
tables with many child records if the foreign key column in the child
table is indexed.
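As a sketch (parent/child names are hypothetical), the index that makes
those foreign-key checks cheap is the one on the referencing column:

```sql
-- Hypothetical schema: each child row references a parent row.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id)
);

-- Updates and deletes on parent must find referencing child rows;
-- without an index on the FK column, each one is a sequential scan.
CREATE INDEX child_parent_id_idx ON child (parent_id);
```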

> Putting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).
>
> Putting indexes that have to be updated during inserts onto their own
> drive(s).
>
> Performing regular vacuums on heavily updated tables.
>
> Also, if your hardware is reliable, you can turn off fsync in
> postgresql.conf. That can increase performance by anywhere from 2 to 10
> times, depending on your application.

It can be dangerous though ... in the event of a power outage, for
example, your database could be corrupted and difficult to recover. So
... "at your own risk".

I've found that switching from fsync to fdatasync on Linux yields
marginal performance gain ... about 10-20%.

Also, if you are doing large updates (many records at once) you may
want to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to
allow for large transactions.

Finally, you want to structure your queries so that you do the minimum
number of update writes possible, or insert writes. For example, a
procedure that inserts a row, does some calculations, and then modifies
several fields in that row is going to slow stuff down significantly
compared to doing the calculations as variables and only a single
insert. Certainly don't hit a table with 8 updates, each updating one
field instead of a single update statement.

-Josh Berkus

#6 scott.marlowe
scott.marlowe@ihs.com
In reply to: Josh Berkus (#5)
Re: performance of insert/delete/update

On Thu, 21 Nov 2002, Josh Berkus wrote:

> Scott,
>
>> The absolutely most important thing to do to speed up inserts and updates
>> is to squeeze as many as you can into one transaction. Within reason, of
>> course. There's no great gain in putting more than a few thousand
>> together at a time. If your application is only doing one or two updates
>> in a transaction, it's going to be slower in terms of records written per
>> second than an application that is updating 100 rows in a transaction.
>
> This only works up to the limit of the memory you have available for
> Postgres. If the updates in one transaction exceed your available
> memory, you'll see a lot of swaps to disk log that will slow things
> down by a factor of 10-50 times.

Sorry, but that isn't true. MVCC means we don't have to hold all the data
in memory, we can have multiple versions of the same tuples on disk, and
use memory for what it's meant for, buffering.

The performance gain
comes from the fact that postgresql doesn't have to perform the data
consistency checks needed during an insert until after all the rows are
inserted, and it can "gang check" them.

>> Reducing triggers and foreign keys on the inserted tables to a minimum
>> helps.
>
> ... provided that this will not jeopardize your data integrity. If you
> have indispensable triggers in PL/pgSQL, re-writing them in C will make
> them, and thus updates on their tables, faster.

Agreed. But you've probably seen the occasional "I wasn't sure if we
needed that check or not, so I threw it in just in case" kind of database
design. :-)

I definitely don't advocate just tossing all your FKs to make it run
faster.

Also note that many folks have replaced foreign keys with triggers and
gained in performance, as fks in pgsql still have some deadlock issues to
be worked out.
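A sketch of the trigger-for-FK swap mentioned above (modern PL/pgSQL
syntax, hypothetical table names; note this covers only the child-insert
side, so a full replacement also needs triggers on the parent for DELETE
and UPDATE):

```sql
-- Enforce "child.parent_id must exist in parent" with a trigger
-- instead of a declared FOREIGN KEY, trading declarative safety
-- for control over locking behavior.
CREATE FUNCTION check_parent_exists() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM parent WHERE id = NEW.parent_id) THEN
        RAISE EXCEPTION 'parent % does not exist', NEW.parent_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER child_parent_check
    BEFORE INSERT OR UPDATE ON child
    FOR EACH ROW EXECUTE PROCEDURE check_parent_exists();
```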

> Also, for foreign keys, it speeds up inserts and updates on parent
> tables with many child records if the foreign key column in the child
> table is indexed.

Absolutely.

>> Putting WAL (e.g. $PGDATA/pg_xlog directory) on its own drive(s).
>>
>> Putting indexes that have to be updated during inserts onto their own
>> drive(s).
>>
>> Performing regular vacuums on heavily updated tables.
>>
>> Also, if your hardware is reliable, you can turn off fsync in
>> postgresql.conf. That can increase performance by anywhere from 2 to 10
>> times, depending on your application.
>
> It can be dangerous though ... in the event of a power outage, for
> example, your database could be corrupted and difficult to recover. So
> ... "at your own risk".

No, the database will not be corrupted, at least not in my experience.
however, you MAY lose data from transactions that you thought were
committed. I think Tom posted something about this a few days back.

> I've found that switching from fsync to fdatasync on Linux yields
> marginal performance gain ... about 10-20%.

I'll have to try that.

> Also, if you are doing large updates (many records at once) you may
> want to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to
> allow for large transactions.

Actually, postgresql will create more WAL files if it needs to in order to
handle the size of a transaction. BUT, it won't create extra ones for
heavier parallel load without being told to. I've inserted 100,000 rows at
a time with no problem on a machine with only 1 WAL file specified, and it
didn't burp. It does run faster having multiple WAL files when under
parallel load.

> Finally, you want to structure your queries so that you do the minimum
> number of update writes possible, or insert writes. For example, a
> procedure that inserts a row, does some calculations, and then modifies
> several fields in that row is going to slow stuff down significantly
> compared to doing the calculations as variables and only a single
> insert. Certainly don't hit a table with 8 updates, each updating one
> field instead of a single update statement.

This is critical, and bites many people coming from a row level locking
database to an MVCC database. In MVCC every update creates a new on disk
tuple. I think someone on the list a while back was updating their
database something like this:

update table set field1='abc' where id=1;
update table set field2='def' where id=1;
update table set field3='ghi' where id=1;
update table set field4='jkl' where id=1;
update table set field5='mno' where id=1;
update table set field6='pqr' where id=1;

and they had to vacuum something like every 5 minutes.
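Each of those statements creates a new on-disk row version; the same
change expressed as one statement creates only one. A schematic sketch
using the same table:

```sql
-- One new row version instead of six, and one commit:
UPDATE table SET field1 = 'abc',
                 field2 = 'def',
                 field3 = 'ghi',
                 field4 = 'jkl',
                 field5 = 'mno',
                 field6 = 'pqr'
WHERE id = 1;
```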

Also, things like:

update table set field1=field1+1

are killers in an MVCC database as well.

#7 Josh Berkus
josh@agliodbs.com
In reply to: scott.marlowe (#6)
Re: performance of insert/delete/update

Scott,

>> This only works up to the limit of the memory you have available for
>> Postgres. If the updates in one transaction exceed your available
>> memory, you'll see a lot of swaps to disk log that will slow things
>> down by a factor of 10-50 times.
>
> Sorry, but that isn't true. MVCC means we don't have to hold all the data
> in memory, we can have multiple versions of the same tuples on disk, and
> use memory for what it's meant for, buffering.

Sorry, you're absolutely correct. I don't know what I was thinking of; that's
the problem with an off-the-cuff response.

Please disregard the previous quote. Instead:

Doing several large updates in a single transaction can lower performance if
the number of updates is sufficient to affect index usability and a VACUUM is
really needed between them. For example, a series of large data
transformation statements on a single table or set of related tables should
have VACUUM statements between them, thus preventing you from putting them
in a single transaction.

Example, the series:
1. INSERT 10,000 ROWS INTO table_a;
2. UPDATE 100,000 ROWS IN table_a WHERE table_b;
3. UPDATE 100,000 ROWS IN table_c WHERE table_a;

Will almost certainly need a VACUUM or even VACUUM FULL on table_a after 2),
requiring you to split the update series into 2 transactions. Otherwise, the
"where table_a" condition in step 3) will be extremely slow.

> Also note that many folks have replaced foreign keys with triggers and
> gained in performance, as fks in pgsql still have some deadlock issues to
> be worked out.

Yeah. I think Neil Conway is overhauling FKs, which everyone considers a bit
of a hack in the current implementation, including Jan who wrote it.

>> It can be dangerous though ... in the event of a power outage, for
>> example, your database could be corrupted and difficult to recover. So
>> ... "at your own risk".
>
> No, the database will not be corrupted, at least not in my experience.
> however, you MAY lose data from transactions that you thought were
> committed. I think Tom posted something about this a few days back.

Hmmm ... have you done this? I'd like the performance gain, but I don't want
to risk my data integrity. I've seen some awful things in databases (such as
duplicate primary keys) from yanking a power cord repeatedly.

> update table set field1=field1+1
>
> are killers in an MVCC database as well.

Yeah -- don't I know it.

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#8 scott.marlowe
scott.marlowe@ihs.com
In reply to: Josh Berkus (#7)
Re: performance of insert/delete/update

On Thu, 21 Nov 2002, Josh Berkus wrote:

> Doing several large updates in a single transaction can lower performance if
> the number of updates is sufficient to affect index usability and a VACUUM is
> really needed between them. For example, a series of large data
> transformation statements on a single table or set of related tables should
> have VACUUM statements between them, thus preventing you from putting them
> in a single transaction.
>
> Example, the series:
> 1. INSERT 10,000 ROWS INTO table_a;
> 2. UPDATE 100,000 ROWS IN table_a WHERE table_b;
> 3. UPDATE 100,000 ROWS IN table_c WHERE table_a;
>
> Will almost certainly need a VACUUM or even VACUUM FULL on table_a after 2),
> requiring you to split the update series into 2 transactions. Otherwise, the
> "where table_a" condition in step 3) will be extremely slow.

Very good point. One that points out the different mind set one needs
when dealing with pgsql.

>>> It can be dangerous though ... in the event of a power outage, for
>>> example, your database could be corrupted and difficult to recover. So
>>> ... "at your own risk".
>>
>> No, the database will not be corrupted, at least not in my experience.
>> however, you MAY lose data from transactions that you thought were
>> committed. I think Tom posted something about this a few days back.
>
> Hmmm ... have you done this? I'd like the performance gain, but I don't want
> to risk my data integrity. I've seen some awful things in databases (such as
> duplicate primary keys) from yanking a power cord repeatedly.

I have, with killall -9 postmaster, on several occasions during testing
under heavy parallel load. I've never had 7.2.x fail because of this.

#9 Josh Berkus
josh@agliodbs.com
In reply to: scott.marlowe (#6)
Re: performance of insert/delete/update

Scott,

> The absolutely most important thing to do to speed up inserts and
> updates is to squeeze as many as you can into one transaction.

I was discussing this on IRC, and nobody could verify this assertion.
Do you have an example of bundling multiple writes into a transaction
giving a performance gain?

-Josh

#10 Ron Johnson
ron.l.johnson@cox.net
In reply to: Josh Berkus (#9)
Re: performance of insert/delete/update

On Fri, 2002-11-22 at 22:18, Josh Berkus wrote:

> Scott,
>
>> The absolutely most important thing to do to speed up inserts and
>> updates is to squeeze as many as you can into one transaction.
>
> I was discussing this on IRC, and nobody could verify this assertion.
> Do you have an example of bundling multiple writes into a transaction
> giving a performance gain?

Unfortunately, I missed the beginning of this thread, but I do
know that eliminating as many indexes as possible is the answer.
If I'm going to insert "lots" of rows in an off-line situation,
then I'll drop *all* of the indexes, load the data, then re-index.
If deleting "lots", then I'll drop all but the 1 relevant index,
then re-index afterwards.
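Ron's off-line bulk-load pattern might be sketched like this (index,
table, and file names are hypothetical):

```sql
-- Off-line bulk load: drop indexes, load, re-create.
-- Building an index once over all rows is far cheaper than
-- maintaining it incrementally for every inserted row.
DROP INDEX big_table_col_idx;

COPY big_table FROM '/path/to/data.copy';

CREATE INDEX big_table_col_idx ON big_table (col);
ANALYZE big_table;
```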

As for bundling multiple statements into a transaction to increase
performance, I think the questions are:
- how much disk IO does one BEGIN TRANSACTION do? If it *does*
do disk IO, then "bundling" *will* be more efficient, since
less disk IO will be performed.
- are, for example, 500 COMMITs of small amounts of data more or
less efficient than 1 COMMIT of a large chunk of data? On the
proprietary database that I use at work, efficiency goes up,
then levels off at ~100 inserts per transaction.

Ron
-- 
+------------------------------------------------------------+
| Ron Johnson, Jr.     mailto:ron.l.johnson@cox.net          |
| Jefferson, LA  USA   http://members.cox.net/ron.l.johnson  |
|                                                            |
| "they love our milk and honey, but preach about another    |
|  way of living"                                            |
|    Merle Haggard, "The Fighting Side Of Me"                |
+------------------------------------------------------------+
#11 Josh Berkus
josh@agliodbs.com
In reply to: Ron Johnson (#10)
Re: performance of insert/delete/update

Ron,

> As for bundling multiple statements into a transaction to increase
> performance, I think the questions are:
> - how much disk IO does one BEGIN TRANSACTION do? If it *does*
>   do disk IO, then "bundling" *will* be more efficient, since
>   less disk IO will be performed.
> - are, for example, 500 COMMITs of small amounts of data more or
>   less efficient than 1 COMMIT of a large chunk of data? On the
>   proprietary database that I use at work, efficiency goes up,
>   then levels off at ~100 inserts per transaction.

That's because some commercial databases (MS SQL, Sybase) use an "unwinding
transaction log" method of updating. That is, during a transaction, changes
are written only to the transaction log, and those changes are "played" to
the database only on a COMMIT. It's an approach that is more efficient for
large transactions, but has the unfortunate side effect of *requiring* read
and write row locks for the duration of the transaction.

In Postgres, with MVCC, changes are written to the database immediately with a
new transaction ID and the new rows are "activated" on COMMIT. So the
changes are written to the database as the statements are executed,
regardless. This is less efficient for large transactions than the
"unwinding log" method, but has the advantage of eliminating read locks
entirely and most deadlock situations.

Under MVCC, then, I am not convinced that bundling a bunch of writes into one
transaction is faster until I see it demonstrated. I certainly see no
performance gain on my system.

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#11)
Re: performance of insert/delete/update

Josh Berkus <josh@agliodbs.com> writes:

> Under MVCC, then, I am not convinced that bundling a bunch of writes into one
> transaction is faster until I see it demonstrated. I certainly see no
> performance gain on my system.

Are you running with fsync off?

The main reason for bundling updates into larger transactions is that
each transaction commit requires an fsync on the WAL log. If you have
fsync enabled, it is physically impossible to commit transactions faster
than one per revolution of the WAL disk, no matter how small the
transactions. (*) So it pays to make the transactions larger, not smaller.

On my machine I see a sizable difference (more than 2x) in the rate at
which simple INSERT statements are processed as separate transactions
and as large batches --- if I have fsync on. With fsync off, nearly no
difference.

regards, tom lane

(*) See recent proposals from Curtis Faith in pgsql-hackers about how
we might circumvent that limit ... but it's there today.

#13 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#12)
Re: performance of insert/delete/update

Tom,

> On my machine I see a sizable difference (more than 2x) in the rate at
> which simple INSERT statements are processed as separate transactions
> and as large batches --- if I have fsync on. With fsync off, nearly no
> difference.

I'm using fdatasync, which *does* perform faster than fsync on my system.
Could this make the difference?

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#14 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#13)
Re: performance of insert/delete/update

Josh Berkus <josh@agliodbs.com> writes:

>> On my machine I see a sizable difference (more than 2x) in the rate at
>> which simple INSERT statements are processed as separate transactions
>> and as large batches --- if I have fsync on. With fsync off, nearly no
>> difference.
>
> I'm using fdatasync, which *does* perform faster than fsync on my system.
> Could this make the difference?

No; you still have to write the data and wait for the disk to spin.
(FWIW, PG defaults to wal_sync_method = open_datasync on my system,
and that's what I used in checking the speed just now. So I wasn't
actually executing any fsync() calls either.)

On lots of PC hardware, the disks are configured to lie and report write
complete as soon as they've accepted the data into their internal
buffers. If you see very little difference between fsync on and fsync
off, or if you are able to benchmark transaction rates in excess of your
disk's RPM, you should suspect that your disk drive is lying to you.

As an example: in testing INSERT speed on my old HP box just now,
I got measured rates of about 16000 inserts/minute with fsync off, and
5700/min with fsync on (for 1 INSERT per transaction). Knowing that my
disk drive is 6000 RPM, the latter number is credible. On my PC I get
numbers way higher than the disk rotation rate :-(

regards, tom lane

#15 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#14)
Re: performance of insert/delete/update

Tom,

> As an example: in testing INSERT speed on my old HP box just now,
> I got measured rates of about 16000 inserts/minute with fsync off, and
> 5700/min with fsync on (for 1 INSERT per transaction). Knowing that my
> disk drive is 6000 RPM, the latter number is credible. On my PC I get
> numbers way higher than the disk rotation rate :-(

Thanks for the info. As long as I have your ear, what's your opinion on the
risk level of running with fsync off on a production system? I've seen a
lot of posts on this list opining on the lack of danger, but I'm a bit paranoid.

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#15)
Re: performance of insert/delete/update

Josh Berkus <josh@agliodbs.com> writes:

> Thanks for the info. As long as I have your ear, what's your opinion on the
> risk level of running with fsync off on a production system?

Depends on how much you trust your hardware, kernel, and power source.

Fsync off does not introduce any danger from Postgres crashes --- we
always write data out of userspace to the kernel before committing.
The question is whether writes can be relied on to get to disk once
the kernel has 'em.

There is a definite risk of data corruption (not just lost transactions,
but actively inconsistent database contents) if you suffer a
system-level crash while running with fsync off. The theory of WAL
(which remember means write *ahead* log) is that it protects you from
data corruption as long as WAL records always hit disk before the
associated changes in database data files do. Then after a crash you
can replay the WAL to make sure you have actually done all the changes
described by each readable WAL record, and presto you're consistent up
to the end of the readable WAL. But if data file writes can get to disk
in advance of their WAL record, you could have a situation where some
but not all changes described by a WAL record are in the database after
a system crash and recovery. This could mean incompletely applied
transactions, broken indexes, or who knows what.

When you get right down to it, what we use fsync for is to force write
ordering --- Unix kernels do not guarantee write ordering any other way.
We use it to ensure WAL records hit disk before data file changes do.

Bottom line: I wouldn't run with fsync off in a mission-critical
database. If you're prepared to accept a risk of having to restore from
your last backup after a system crash, maybe it's okay.

regards, tom lane

#17 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#16)
Re: performance of insert/delete/update

Tom,

> When you get right down to it, what we use fsync for is to force write
> ordering --- Unix kernels do not guarantee write ordering any other way.
> We use it to ensure WAL records hit disk before data file changes do.
>
> Bottom line: I wouldn't run with fsync off in a mission-critical
> database. If you're prepared to accept a risk of having to restore from
> your last backup after a system crash, maybe it's okay.

Thanks for that overview. Sadly, even with fsync on, I was forced to restore
from backup because the data needs to be 100% reliable and the crash was due
to a disk lockup on a checkpoint ... beyond the ability of WAL to deal with,
I think.

One last, last question: I was just asked a question on IRC, and I can't find
docs defining fsync, fdatasync, open_sync, and open_datasync beyond section
11.3 which just says that they are all sync methods. Are there docs?

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#17)
Re: performance of insert/delete/update

Josh Berkus <josh@agliodbs.com> writes:

> One last, last question: I was just asked a question on IRC, and I
> can't find docs defining fsync, fdatasync, open_sync, and
> open_datasync beyond section 11.3 which just says that they are all
> sync methods. Are there docs?

Section 11.3 of what?

The only mention of open_datasync that I see in the docs is in the
Admin Guide chapter 3:
http://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-WAL

which saith

WAL_SYNC_METHOD (string)

Method used for forcing WAL updates out to disk. Possible values
are FSYNC (call fsync() at each commit), FDATASYNC (call
fdatasync() at each commit), OPEN_SYNC (write WAL files with open()
option O_SYNC), or OPEN_DATASYNC (write WAL files with open()
option O_DSYNC). Not all of these choices are available on all
platforms. This option can only be set at server start or in the
postgresql.conf file.

This may not help you much to decide which to use :-(, but it does tell
you what they are.
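As a postgresql.conf fragment (setting name and values taken from the
documentation passage above; availability of each value is
platform-dependent):

```
# Method used for forcing WAL updates out to disk at commit.
# Possible values: fsync, fdatasync, open_sync, open_datasync
wal_sync_method = fdatasync
```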

regards, tom lane

#19 scott.marlowe
scott.marlowe@ihs.com
In reply to: Josh Berkus (#9)
Re: performance of insert/delete/update

On Fri, 22 Nov 2002, Josh Berkus wrote:

> Scott,
>
>> The absolutely most important thing to do to speed up inserts and
>> updates is to squeeze as many as you can into one transaction.
>
> I was discussing this on IRC, and nobody could verify this assertion.
> Do you have an example of bundling multiple writes into a transaction
> giving a performance gain?

Yes, my own experience.

It's quite easy to test if you have a database with a large table to play
with, use pg_dump to dump a table with the -d switch (makes the dump use
insert statements.) Then, make two versions of the dump, one which has a
begin;end; pair around all the inserts and one that doesn't, then use psql
-e to restore both dumps. The difference is HUGE. Around 10 to 20 times
faster with the begin end pairs.

I'd think that anyone who's used postgresql for more than a few months
could corroborate my experience.

#20 Josh Berkus
josh@agliodbs.com
In reply to: scott.marlowe (#19)
Re: performance of insert/delete/update

Scott,

> It's quite easy to test if you have a database with a large table to play
> with, use pg_dump to dump a table with the -d switch (makes the dump use
> insert statements.) Then, make two versions of the dump, one which has a
> begin;end; pair around all the inserts and one that doesn't, then use psql
> -e to restore both dumps. The difference is HUGE. Around 10 to 20 times
> faster with the begin end pairs.
>
> I'd think that anyone who's used postgresql for more than a few months
> could corroborate my experience.

Ouch!

No need to get testy about it.

Your test works as you said; the way I tried testing it before was different.
Good to know. However, this approach is only useful if you are doing
rapid-fire updates or inserts coming off a single connection. But then it is
*very* useful.

--
-Josh Berkus
Aglio Database Solutions
San Francisco

#21 scott.marlowe
scott.marlowe@ihs.com
In reply to: Josh Berkus (#20)
#22 Tim Gardner
tgardner@codeHorse.com
In reply to: scott.marlowe (#21)
#23 Rod Taylor
rbt@rbt.ca
In reply to: Tim Gardner (#22)
#24 scott.marlowe
scott.marlowe@ihs.com
In reply to: Tim Gardner (#22)
#25 scott.marlowe
scott.marlowe@ihs.com
In reply to: Rod Taylor (#23)
#26 Tim Gardner
tgardner@codeHorse.com
In reply to: scott.marlowe (#24)
#27 scott.marlowe
scott.marlowe@ihs.com
In reply to: scott.marlowe (#24)
#28 Ron Johnson
ron.l.johnson@cox.net
In reply to: scott.marlowe (#24)
#29 Josh Berkus
josh@agliodbs.com
In reply to: scott.marlowe (#21)
#30 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Ron Johnson (#28)
#31 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tim Gardner (#26)
#32 Rod Taylor
rbt@rbt.ca
In reply to: scott.marlowe (#25)
#33 Ron Johnson
ron.l.johnson@cox.net
In reply to: Tom Lane (#30)
#34 Curtis Faith
curtis@galtair.com
In reply to: Tom Lane (#30)
#35 Andrew Sullivan
andrew@libertyrms.info
In reply to: scott.marlowe (#25)
#36 Andrew Sullivan
andrew@libertyrms.info
In reply to: Ron Johnson (#28)
#37 scott.marlowe
scott.marlowe@ihs.com
In reply to: Andrew Sullivan (#36)
#38 Andrew Sullivan
andrew@libertyrms.info
In reply to: scott.marlowe (#37)
#39 scott.marlowe
scott.marlowe@ihs.com
In reply to: Andrew Sullivan (#38)
#40 Bruce Momjian
bruce@momjian.us
In reply to: Curtis Faith (#34)
#41 Robert Treat
xzilla@users.sourceforge.net
In reply to: Ron Johnson (#33)
#42 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Curtis Faith (#34)
#43 Nicolai Tufar
ntufar@apb.com.tr
In reply to: Bruce Momjian (#40)
#44 Jim Beckstrom
jrbeckstrom@sbcglobal.net
In reply to: Bruce Momjian (#40)
#45 Dave Page
dpage@pgadmin.org
In reply to: Jim Beckstrom (#44)
#46 Tommi Maekitalo
t.maekitalo@epgmbh.de
In reply to: Nicolai Tufar (#43)
#47 Merlin Moncure
merlin@rcsonline.com
In reply to: Curtis Faith (#34)
#48 scott.marlowe
scott.marlowe@ihs.com
In reply to: Jim Beckstrom (#44)