vacuum, performance, and MVCC

Started by Mark Woodward · almost 20 years ago · 205 messages · hackers
#1 Mark Woodward
pgsql@mohawksoft.com

We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be seen
by using PostgreSQL as a session handler for a busy web site. You may have
thousands or millions of active sessions, each being updated per page hit.

Each time the record is updated, a new version is created, thus
lengthening the "correct" version search each time the row is accessed, until,
of course, the next vacuum comes along and corrects the index to point to
the latest version of the record.

Is that a fair explanation?

If my assertion is fundamentally true, then PostgreSQL will always suffer
performance penalties under a heavy modification load. Of course, tables
with many inserts are not an issue, it is mainly updates. The problem is
that there are classes of problems where updates are the primary
operation.

I was thinking, just as a hypothetical, what if we reversed the problem,
and always referenced the newest version of a row and scanned backwards
across the versions to the first that has a lower transaction number?

One possible implementation: PostgreSQL could keep an indirection array of
index to table ref for use by all the indexes on a table. The various
indexes return offsets into the array, not direct table refs. Because the
table refs are separate from the index, they can be updated each time a
transaction is committed.

This way, the newest version of a row is always the first row found. Also,
on a heavily updated site, the most used rows would always be at the end
of the table, reducing the amount of disk reads or cache memory required to
find the correct row version for each query.
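
The indirection idea can be sketched in C as follows. This is a hypothetical toy model, not how PostgreSQL actually stores index pointers; the names and layout are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical indirection layer: every index entry stores a stable
 * slot number instead of a heap tuple reference; commits repoint the
 * slot at the newest version, so no index entry needs touching. */

typedef uint64_t heap_tid_t;

typedef struct {
    heap_tid_t *slots;   /* slot id -> current heap TID */
    size_t      nslots;
} TidIndirection;

/* What an index probe would return: the TID of the newest
 * committed version, via one extra array lookup. */
heap_tid_t indirection_lookup(const TidIndirection *ind, size_t slot)
{
    return ind->slots[slot];
}

/* What COMMIT would do for each updated row: one in-place store,
 * instead of inserting a new entry into every index on the table. */
void indirection_commit(TidIndirection *ind, size_t slot, heap_tid_t new_tid)
{
    ind->slots[slot] = new_tid;
}
```

The trade-off, of course, is that the indirection array itself becomes shared mutable state that every commit must write and every index probe must read.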

#2 Chris Browne
cbbrowne@acm.org
In reply to: Mark Woodward (#1)
Re: vacuum, performance, and MVCC

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be seen
by using PostgreSQL as a session handler for a busy web site. You may have
thousands or millions of active sessions, each being updated per page hit.

Each time the record is updated, a new version is created, thus
lengthening the "correct" version search each time the row is accessed, until,
of course, the next vacuum comes along and corrects the index to point to
the latest version of the record.

Is that a fair explanation?

No, it's not.

1. The index points to all the versions, until they get vacuumed out.

2. There may simultaneously be multiple "correct" versions. The
notion that there is one version that is The Correct One is wrong, and
you need to get rid of that thought.
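
Point 2 is the crux of MVCC: visibility is relative to a snapshot, so two concurrent transactions can each see a different version of the "same" row at the same instant. A toy illustration (this is not PostgreSQL's actual visibility code; the single-cutoff "snapshot" is a deliberate simplification, since real snapshots also track in-progress transactions):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t xid_t;

/* Simplified tuple header: xmin is the transaction that created this
 * version, xmax the one that deleted/replaced it (0 = still live). */
typedef struct {
    xid_t xmin;
    xid_t xmax;
} TupleHeader;

/* Crude stand-in for a snapshot: every xid below snapshot_xid is
 * treated as committed and visible. */
bool tuple_visible(const TupleHeader *tup, xid_t snapshot_xid)
{
    if (tup->xmin >= snapshot_xid)
        return false;   /* created after our snapshot was taken */
    if (tup->xmax != 0 && tup->xmax < snapshot_xid)
        return false;   /* already deleted as far as we can see */
    return true;        /* this is "the" version -- for us */
}
```

With an old version {xmin=5, xmax=10} and its replacement {xmin=10, xmax=0}, a snapshot taken at xid 8 sees only the old version while a snapshot at xid 12 sees only the new one; both are simultaneously "correct".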

If my assertion is fundamentally true, then PostgreSQL will always suffer
performance penalties under a heavy modification load. Of course, tables
with many inserts are not an issue, it is mainly updates. The problem is
that there are classes of problems where updates are the primary
operation.

The trouble with your assertion is that it is true for *all* database
systems except for those whose only transaction mode is READ
UNCOMMITTED, where the only row visible is the "Latest" version.

I was thinking, just as a hypothetical, what if we reversed the
problem, and always referenced the newest version of a row and
scanned backwards across the versions to the first that has a lower
transaction number?

That would require an index on transaction number, which is an
additional data structure not in place now. That would presumably
worsen things.

One possible implementation: PostgreSQL could keep an indirection array of
index to table ref for use by all the indexes on a table. The various
indexes return offsets into the array, not direct table refs. Because the
table refs are separate from the index, they can be updated each time a
transaction is committed.

You mean, this index would be "VACUUMed" as a part of each transaction
COMMIT? I can't see that turning out well...

This way, the newest version of a row is always the first row
found. Also, on a heavily updated site, the most used rows would
always be at the end of the table, reducing the amount of disk reads or
cache memory required to find the correct row version for each
query.

I can't see how it follows that most-used rows would migrate to the
end of the table. That would only be true in a database that is never
VACUUMed; as soon as a VACUUM is done, free space opens up in the
interior, so that new tuples may be placed in the "interior."
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://linuxdatabases.info/info/lisp.html
"On a normal ascii line, the only safe condition to detect is a
'BREAK' - everything else having been assigned functions by Gnu
EMACS." -- Tarl Neustaedter

#3 Mark Woodward
pgsql@mohawksoft.com
In reply to: Chris Browne (#2)
Re: vacuum, performance, and MVCC

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be seen
by using PostgreSQL as a session handler for a busy web site. You may have
thousands or millions of active sessions, each being updated per page hit.

Each time the record is updated, a new version is created, thus
lengthening the "correct" version search each time the row is accessed,
until, of course, the next vacuum comes along and corrects the index to
point to the latest version of the record.

Is that a fair explanation?

No, it's not.

1. The index points to all the versions, until they get vacuumed out.

It can't point to "all" versions, it points to the last "current" version
as updated by vacuum, or the first version of the row.

2. There may simultaneously be multiple "correct" versions. The
notion that there is one version that is The Correct One is wrong, and
you need to get rid of that thought.

Sorry, this is a misunderstanding. By "correct version search" I meant
"for this transaction." Later I mention finding the first row with a
transaction lower than the current one.

If my assertion is fundamentally true, then PostgreSQL will always suffer
performance penalties under a heavy modification load. Of course, tables
with many inserts are not an issue, it is mainly updates. The problem is
that there are classes of problems where updates are the primary
operation.

The trouble with your assertion is that it is true for *all* database
systems except for those whose only transaction mode is READ
UNCOMMITTED, where the only row visible is the "Latest" version.

Not true. Oracle does not seem to exhibit this problem.

I was thinking, just as a hypothetical, what if we reversed the
problem, and always referenced the newest version of a row and
scanned backwards across the versions to the first that has a lower
transacton number?

That would require an index on transaction number, which is an
additional data structure not in place now. That would presumably
worsen things.

All things being equal, perhaps not. It would probably be a loser if you
have a static database, but in a database that undergoes modification, it
would be the same or less work if the average row has two versions
(assuming nothing else changes).

One possible implementation: PostgreSQL could keep an indirection array of
index to table ref for use by all the indexes on a table. The various
indexes return offsets into the array, not direct table refs. Because the
table refs are separate from the index, they can be updated each time a
transaction is committed.

You mean, this index would be "VACUUMed" as a part of each transaction
COMMIT? I can't see that turning out well...

No, it would not be vacuumed!!!

Right now, the indexes point to the lowest row version. When an index
returns the row ID, it is checked if there are newer versions, if so, the
newer versions are searched until the last one is found or exceeds the
current TID.

This way, the newest version of a row is always the first row
found. Also, on a heavily updated site, the most used rows would
always be at the end of the table, reducing amount of disk reads or
cache memory required to find the correct row version for each
query.

I can't see how it follows that most-used rows would migrate to the
end of the table.

Sorry, OK, as an assumption it ignores the FSM, but the idea is that there
is only one lookup.

That would only be true in a database that is never
VACUUMed; as soon as a VACUUM is done, free space opens up in the
interior, so that new tuples may be placed in the "interior."

Regardless, the point is that you have to search the [N] versions of a row
to find the latest correct version of the row for your transaction. This
is done, AFAICT, from first to last version, meaning that the work
required to find a row increases with every update prior to vacuum.
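
The claimed growth pattern can be stated as a toy model (an illustration of the argument, not actual PostgreSQL code; the names are invented):

```c
/* Toy model of the behaviour described above: each UPDATE appends a
 * version, a lookup walks the chain from the entry the index points
 * at, and only VACUUM trims the chain back down. */
typedef struct {
    unsigned live_versions;  /* chain length since the last vacuum */
} Row;

void row_update(Row *r)  { r->live_versions++; }
void row_vacuum(Row *r)  { r->live_versions = 1; }  /* keep newest only */

/* Versions examined to find the visible tuple, worst case. */
unsigned lookup_cost(const Row *r) { return r->live_versions; }
```

Under this model, a session row updated once per page hit costs O(updates-since-vacuum) per lookup, which matches the steadily declining throughput measured later in the thread.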

PostgreSQL fails miserably as a web session handler because of this
behavior: it requires too-frequent vacuums and delivers inconsistent
performance.

OK, forget the version array, it was just an off-the-top-of-my-head idea.
How about this:

Currently a row does this:

row_TID[0] -> row_TID[1] -> row_TID[2] ... row_TID[LAST-1] -> row_TID[LAST]

Pointing to each subsequent row. What if it did this:

row_TID[0] -> row_TID[LAST] -> row_TID[LAST-1] ... -> row_TID[2] -> row_TID[1]

The base tuple of a version chain gets updated to point to the latest
committed row. It should be fairly low impact on performance on a static
database, but REALLY speed up PostgreSQL on a heavily modified database
and provide more consistent performance between vacuums and require fewer
vacuums to maintain performance.
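
A minimal sketch of the two chain orderings (hypothetical C, with visibility reduced to a single snapshot cutoff, which glosses over real MVCC subtleties):

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t xid_t;

/* One heap tuple version in a chain. */
typedef struct Version {
    xid_t xmin;             /* creating transaction */
    struct Version *next;   /* next link in chain order */
} Version;

/* Current layout: the chain runs oldest -> newest, so finding the
 * newest version visible to a snapshot walks the whole chain. */
const Version *find_forward(const Version *base, xid_t snapshot_xid)
{
    const Version *best = NULL;
    for (const Version *v = base; v != NULL; v = v->next)
        if (v->xmin < snapshot_xid)
            best = v;        /* keep the latest qualifying version */
    return best;
}

/* Mark's proposal: the base points at the newest committed version
 * and the chain runs newest -> oldest, so the scan stops at the
 * FIRST version the snapshot can see -- one hop in the common case. */
const Version *find_reversed(const Version *newest, xid_t snapshot_xid)
{
    for (const Version *v = newest; v != NULL; v = v->next)
        if (v->xmin < snapshot_xid)
            return v;        /* first hit is the newest visible one */
    return NULL;
}
```

The catch is the in-place update of the base tuple's pointer at commit time, which is precisely the part the rest of the thread pushes back on.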

#4 Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Mark Woodward (#3)
Re: vacuum, performance, and MVCC

Each time the record is updated, a new version is created, thus
lengthening the "correct" version search each time the row is accessed,
until, of course, the next vacuum comes along and corrects the index
to point to the latest version of the record.

Is that a fair explanation?

No, it's not.

1. The index points to all the versions, until they get vacuumed out.

it points to the last "current" version as updated by vacuum, or the
first version of the row.

no, the index has one entry for each version of the row.
This is why updating only non-indexed columns is relatively expensive
in pg.

Andreas

#5 Chris Browne
cbbrowne@acm.org
In reply to: Mark Woodward (#1)
Re: vacuum, performance, and MVCC

After a long battle with technology, pgsql@mohawksoft.com ("Mark Woodward"), an earthling, wrote:

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

We all know that PostgreSQL suffers performance problems when rows are
updated frequently prior to a vacuum. The most serious example can be seen
by using PostgreSQL as a session handler for a busy web site. You may have
thousands or millions of active sessions, each being updated per page hit.

Each time the record is updated, a new version is created, thus
lengthening the "correct" version search each time the row is accessed,
until, of course, the next vacuum comes along and corrects the index to
point to the latest version of the record.

Is that a fair explanation?

No, it's not.

1. The index points to all the versions, until they get vacuumed out.

It can't point to "all" versions, it points to the last "current" version
as updated by vacuum, or the first version of the row.

No, it points to *all* the versions.

Suppose I take a table with two rows:

INFO: analyzing "public.test"
INFO: "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Then, over and over, I remove and insert one entry with the same PK:

sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842550 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842551 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842552 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842553 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842554 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842555 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842556 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842557 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842558 1
sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1
INSERT 4842559 1

Now, I vacuum it.

sample=# vacuum verbose analyze test;
INFO: vacuuming "public.test"
INFO: index "test_id_key" now contains 2 row versions in 2 pages
DETAIL: 10 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "test": removed 10 row versions in 1 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "test": found 10 removable, 2 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing "public.test"
INFO: "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Notice that the index contained 10 versions of that one row.

It pointed to *ALL* the versions.

2. There may simultaneously be multiple "correct" versions. The
notion that there is one version that is The Correct One is wrong, and
you need to get rid of that thought.

Sorry, this is a misunderstanding. By "correct version search" I meant
"for this transaction." Later I mention finding the first row with a
transaction lower than the current one.

Ah. Then you need for each transaction to spawn an index for each
table that excludes non-current values.

If my assertion is fundamentally true, then PostgreSQL will always
suffer performance penalties under a heavy modification load. Of
course, tables with many inserts are not an issue, it is mainly
updates. The problem is that there are classes of problems where
updates are the primary operation.

The trouble with your assertion is that it is true for *all* database
systems except for those whose only transaction mode is READ
UNCOMMITTED, where the only row visible is the "Latest" version.

Not true. Oracle does not seem to exhibit this problem.

Oracle suffers a problem in this regard that PostgreSQL doesn't; in
Oracle, rollbacks are quite expensive, as "recovery" requires doing
extra work that PostgreSQL doesn't do.
--
output = ("cbbrowne" "@" "gmail.com")
http://cbbrowne.com/info/
Marriage means commitment. Of course, so does insanity.

#6 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mark Woodward (#1)
Re: vacuum, performance, and MVCC

Mark Woodward wrote:

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Add an index entry, yes.

I am attaching some code that shows the problem with regard to
applications such as web server session management; when run, each second
the system can handle fewer and fewer connections. Here is a brief output:
[...]
There has to be a more linear way of handling this scenario.

So vacuum the table often.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#7 Mark Woodward
pgsql@mohawksoft.com
In reply to: Chris Browne (#5)
Re: vacuum, performance, and MVCC

After a long battle with technology, pgsql@mohawksoft.com ("Mark Woodward"), an earthling, wrote:

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

[snip]

1. The index points to all the versions, until they get vacuumed out.

It can't point to "all" versions, it points to the last "current" version
as updated by vacuum, or the first version of the row.

No, it points to *all* the versions.

Suppose I take a table with two rows:

INFO: analyzing "public.test"
INFO: "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Then, over and over, I remove and insert one entry with the same PK:

sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1

[snip]

Now, I vacuum it.

sample=# vacuum verbose analyze test;
INFO: vacuuming "public.test"
INFO: index "test_id_key" now contains 2 row versions in 2 pages
DETAIL: 10 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "test": removed 10 row versions in 1 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "test": found 10 removable, 2 nonremovable row versions in 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing "public.test"
INFO: "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Notice that the index contained 10 versions of that one row.

It pointed to *ALL* the versions.

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

I am attaching some code that shows the problem with regard to
applications such as web server session management; when run, each second
the system can handle fewer and fewer connections. Here is a brief output:

markw@ent:~/pgfoo$ ./footest
1307 sessions per second, elapsed: 1
1292 sessions per second, elapsed: 2
1287 sessions per second, elapsed: 3
....
1216 sessions per second, elapsed: 25
1213 sessions per second, elapsed: 26
1208 sessions per second, elapsed: 27
....
1192 sessions per second, elapsed: 36
1184 sessions per second, elapsed: 37
1183 sessions per second, elapsed: 38
....
1164 sessions per second, elapsed: 58
1170 sessions per second, elapsed: 59
1168 sessions per second, elapsed: 60

As you can see, in about a minute at high load, this very simple table
lost about 10% of its performance, and I've seen worse based on update
frequency. Before you say this is an obscure problem, I can tell you it
isn't. I have worked with more than a few projects that had to switch away
from PostgreSQL because of this behavior.

Obviously this is not a problem with small sites, but this is a real
problem with an enterprise level web site with millions of visitors and
actions a day. Quite frankly it is a classic example of something that
does not scale. The more and more updates there are, the higher the load
becomes. You can see it on "top" as the footest program runs.

There has to be a more linear way of handling this scenario.

Attachments:

footest.c (text/x-csrc)
#8 Csaba Nagy
nagy@ecircle-ag.com
In reply to: Alvaro Herrera (#6)
Re: vacuum, performance, and MVCC

[...]
There has to be a more linear way of handling this scenario.

So vacuum the table often.

Good advice, except if the table is huge :-)

Here we have for example some tables which are frequently updated but
contain >100 million rows. Vacuuming that takes hours. And the dead row
candidates are the ones which are updated again and again and looked up
frequently...

A good solution would be a new type of vacuum which does not need to do
a full table scan but can clean the pending dead rows without one... I
guess then I could vacuum those tables really frequently.

Cheers,
Csaba.

#9 Jonah H. Harris
jonah.harris@gmail.com
In reply to: Alvaro Herrera (#6)
Re: vacuum, performance, and MVCC

On 6/22/06, Alvaro Herrera <alvherre@commandprompt.com> wrote:

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Add an index entry, yes.

Again, this is a case for update-in-place. No need to write an extra
index entry and incur the WAL associated with it. Imagine a table
with 3 indexes on it... I would estimate that we perform at least 3 to
6 times more overhead than any commercial database on such an update.

There has to be a more linear way of handling this scenario.

So vacuum the table often.

It's easy to say VACUUM often... but I'd bet that vacuuming is going
to lessen the throughput in his tests even more, no matter how it's
tuned.

--
Jonah H. Harris, Software Architect | phone: 732.331.1300
EnterpriseDB Corporation | fax: 732.331.1301
33 Wood Ave S, 2nd Floor | jharris@enterprisedb.com
Iselin, New Jersey 08830 | http://www.enterprisedb.com/

#10 Mario Weilguni
mweilguni@sime.com
In reply to: Csaba Nagy (#8)
Re: vacuum, performance, and MVCC

On Thursday, 22 June 2006 16:09, Csaba Nagy wrote:

[...]
There has to be a more linear way of handling this scenario.

So vacuum the table often.

Good advice, except if the table is huge :-)

Here we have for example some tables which are frequently updated but
contain >100 million rows. Vacuuming that takes hours. And the dead row
candidates are the ones which are updated again and again and looked up
frequently...

A good solution would be a new type of vacuum which does not need to do
a full table scan but can clean the pending dead rows without one... I
guess then I could vacuum those tables really frequently.

Now that there is autovacuum, why not think of something like continuous
vacuum? A background process that gets info about potentially changed
tuples and vacuums them (only those tuples), possibly honouring the I/O
needs of backends (not stealing I/O from busy backends).

For sure it is not as easy as autovacuum. I'm pretty sure I've read
something about partial vacuum lately; is somebody working on something
like this?

Regards,
Mario

#11 Hannu Krosing
hannu@tm.ee
In reply to: Mark Woodward (#7)
Re: vacuum, performance, and MVCC

One fine day, Thu, 2006-06-22 at 09:59, Mark Woodward wrote:

After a long battle with technology, pgsql@mohawksoft.com ("Mark Woodward"), an earthling, wrote:

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

It pointed to *ALL* the versions.

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Yes.

I am attaching some code that shows the problem with regard to
applications such as web server session management; when run, each second
the system can handle fewer and fewer connections. Here is a brief output:

markw@ent:~/pgfoo$ ./footest
1307 sessions per second, elapsed: 1
1292 sessions per second, elapsed: 2
1287 sessions per second, elapsed: 3
....
1216 sessions per second, elapsed: 25
1213 sessions per second, elapsed: 26
1208 sessions per second, elapsed: 27
....
1192 sessions per second, elapsed: 36
1184 sessions per second, elapsed: 37
1183 sessions per second, elapsed: 38
....
1164 sessions per second, elapsed: 58
1170 sessions per second, elapsed: 59
1168 sessions per second, elapsed: 60

As you can see, in about a minute at high load, this very simple table
lost about 10% of its performance, and I've seen worse based on update
frequency. Before you say this is an obscure problem, I can tell you it
isn't. I have worked with more than a few projects that had to switch away
from PostgreSQL because of this behavior.

You mean systems that are designed so exactly that they can't take a 10%
performance change?

Or just that they did not vacuum for so long that performance was less
than needed in the end?

btw, what did they switch to ?

Obviously this is not a problem with small sites, but this is a real
problem with an enterprise level web site with millions of visitors and
actions a day.

On such a site you should design so that db load stays below 50% and run
vacuum "often"; that may even mean that you run vacuum continuously with
no wait between runs, if you run vacuum with the right settings.

Quite frankly it is a classic example of something that
does not scale. The more and more updates there are, the higher the load
becomes. You can see it on "top" as the footest program runs.

Yes, you understood correctly - the more updates, the higher the load :)

--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com

#12 Hannu Krosing
hannu@tm.ee
In reply to: Jonah H. Harris (#9)
Re: vacuum, performance, and MVCC

One fine day, Thu, 2006-06-22 at 10:20, Jonah H. Harris wrote:

On 6/22/06, Alvaro Herrera <alvherre@commandprompt.com> wrote:

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Add an index entry, yes.

Again, this is a case for update-in-place. No need to write an extra
index entry and incur the WAL associated with it.

I guess that MySQL with its original storage engine does that, but they
allow only one concurrent update per table and no transactions.

Imagine a table
with 3 indexes on it... I would estimate that we perform at least 3 to
6 times more overhead than any commercial database on such an update.

One way to describe what "commercial databases" do to keep constant
update rates is to say that they either do vacuuming as part of the
update, or they just use locks and force some transactions to wait or
fail/retry.

Depending on exact details and optimisations done, this can be either
slower or faster than PostgreSQL's way, but they still need to do
something to get transactional visibility rules implemented.

There has to be a more linear way of handling this scenario.

So vacuum the table often.

It's easy to say VACUUM often... but I'd bet that vacuuming is going
to lessen the throughput in his tests even more; no matter how it's
tuned.

Running VACUUM often/continuously will likely keep his update rate
fluctuations within a corridor of maybe 5-10%, at the cost of 1-2% extra
load. At least if vacuum is configured right and the server is not
already running at 100% IO saturation, in which case it will be worse.

The max throughput figure is not something you actually need very often
in production. What is interesting is setting up the server so that you
can service your loads comfortably. Running the server at 100% load is
not anything you want to do on a production server. There will be things
you need to do anyway and you need some headroom for that.

--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com

#13 Chris Browne
cbbrowne@acm.org
In reply to: Mark Woodward (#1)
Re: vacuum, performance, and MVCC

nagy@ecircle-ag.com (Csaba Nagy) writes:

[...]
There has to be a more linear way of handling this scenario.

So vacuum the table often.

Good advice, except if the table is huge :-)

... Then the table shouldn't be designed to be huge. That represents
a design error.

Here we have for example some tables which are frequently updated but
contain >100 million rows. Vacuuming that takes hours. And the dead row
candidates are the ones which are updated again and again and looked up
frequently...

This demonstrates that "archival" material and "active" data should be
kept separately.

They have different access patterns; kludging them into the same table
turns out badly.

A good solution would be a new type of vacuum which does not need to
do a full table scan but can clean the pending dead rows without
that... I guess then I could vacuum really frequently those tables.

That's yet another feature that's on the ToDo list; the "Vacuum Space
Map."

The notion is to have lists of recently modified pages, and to
restrict VACUUM to those pages. (Probably a special version of
VACUUM...)
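
A back-of-the-envelope sketch of such a map (hypothetical; the names and layout are invented here and are not the eventual PostgreSQL implementation):

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the "vacuum space map" idea: a bitmap with one bit per
 * heap page, set whenever a page acquires a dead tuple, so a partial
 * VACUUM can visit only dirty pages instead of scanning the table. */
#define MAX_PAGES 1024

typedef struct {
    unsigned char dirty[MAX_PAGES / 8];
} VacuumMap;

void vsm_init(VacuumMap *m)           { memset(m->dirty, 0, sizeof m->dirty); }
void vsm_mark(VacuumMap *m, int page) { m->dirty[page / 8] |= 1u << (page % 8); }

bool vsm_needs_vacuum(const VacuumMap *m, int page)
{
    return (m->dirty[page / 8] >> (page % 8)) & 1u;
}

/* Partial vacuum: visit only pages flagged since the last run and
 * clear each flag after cleaning.  Returns pages actually visited. */
int vsm_partial_vacuum(VacuumMap *m, int npages)
{
    int visited = 0;
    for (int p = 0; p < npages; p++) {
        if (vsm_needs_vacuum(m, p)) {
            /* ... prune dead tuples on page p here ... */
            m->dirty[p / 8] &= (unsigned char)~(1u << (p % 8));
            visited++;
        }
    }
    return visited;
}
```

For Csaba's 100-million-row table, a run would then cost proportional to the pages actually touched since the last vacuum, not the table size.
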
--
output = reverse("moc.enworbbc" "@" "enworbbc")
http://www.ntlug.org/~cbbrowne/lisp.html
"As I've gained more experience with Perl it strikes me that it
resembles Lisp in many ways, albeit Lisp as channeled by an awk script
on acid." -- Tim Moore (on comp.lang.lisp)

#14 Jonah H. Harris
jonah.harris@gmail.com
In reply to: Hannu Krosing (#12)
Re: vacuum, performance, and MVCC

On 6/22/06, Hannu Krosing <hannu@skype.net> wrote:

I guess that MySQL on its original storage does that, but they allow
only one concurrent update per table and no transactions.

More like practically every commercial database. As ~97% of
transactions commit (yes, some can argue that number), most commercial
systems prefer optimistic storage managers; whereas PostgreSQL opts
for the ~3% of pessimistic cases.

Let's see, if I had a 97% chance of winning the lottery... I'd
probably play a lot more than if I only had a 3% chance.

One way to describe what "commercial databases" do to keep constant
update rates is saying that they do either vacuuming as part of
update, or they just use locks anf force some transactions to wait or
fail/retry.

Not exactly... there are several ways to handle UNDO without locking.
Certainly the other systems have to perform background cleanup, but
I'd hardly compare that to vacuuming.

Depending on exact details and optimisations done, this can be either
slower or faster than postgresql's way, but they still need to do
something to get transactional visibility rules implemented.

No argument there... but I have yet to find proof that
straight-out-of-CVS PostgreSQL (including tuning) is faster than any
commercial system on almost any large workload. Without a doubt, our
MVCC is in most cases, much nicer than other OSS databases. However,
it hasn't yet proven itself to be better than the concepts employed by
the database vendors with billions of dollars. The reasoning behind
PostgreSQL's storage and MVCC architecture made sense for its design,
but this had nothing to do with creating a super-highly-concurrent
database.

Running the server at 100% load is not anything you want to
do on a production server. There will be things you need to
do anyway and you need some headroom for that.

No argument there.

--
Jonah H. Harris, Software Architect | phone: 732.331.1300
EnterpriseDB Corporation | fax: 732.331.1301
33 Wood Ave S, 2nd Floor | jharris@enterprisedb.com
Iselin, New Jersey 08830 | http://www.enterprisedb.com/

#15 Mark Woodward
pgsql@mohawksoft.com
In reply to: Hannu Krosing (#11)
Re: vacuum, performance, and MVCC

One fine day, Thu, 2006-06-22 at 09:59, Mark Woodward wrote:

After a long battle with technology, pgsql@mohawksoft.com ("Mark Woodward"), an earthling, wrote:

Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward") mumbled into her beard:

It pointed to *ALL* the versions.

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Yes.

I am attaching some code that shows the problem with regard to
applications such as web server session management; when run, each second
the system can handle fewer and fewer connections. Here is a brief output:

markw@ent:~/pgfoo$ ./footest
1307 sessions per second, elapsed: 1
1292 sessions per second, elapsed: 2
1287 sessions per second, elapsed: 3
....
1216 sessions per second, elapsed: 25
1213 sessions per second, elapsed: 26
1208 sessions per second, elapsed: 27
....
1192 sessions per second, elapsed: 36
1184 sessions per second, elapsed: 37
1183 sessions per second, elapsed: 38
....
1164 sessions per second, elapsed: 58
1170 sessions per second, elapsed: 59
1168 sessions per second, elapsed: 60

As you can see, in about a minute at high load, this very simple table
lost about 10% of its performance, and I've seen worse based on update
frequency. Before you say this is an obscure problem, I can tell you it
isn't. I have worked with more than a few projects that had to switch
away from PostgreSQL because of this behavior.

You mean systems that are designed so exactly that they can't take a 10%
performance change?

No, that's not really the point: performance degrades over time; in one
minute it degraded 10%.

The update-to-session ratio has a HUGE impact on PostgreSQL. If you have a
thousand active sessions, it may take a minute to degrade 10%, depending on
how many operations each active session performs per action.

If an active user causes a session update once a second, that is not too
bad, but if an active user updates a session more often, then it is worse.

Generally speaking, sessions aren't updated when they change, they are
usually updated per HTTP request. The data in a session may not change,
but the session handling code doesn't know this and simply updates anyway.

In a heavily AJAX site, you may have many smaller HTTP requests returning
items in a page. So, a single page may consist of multiple HTTP requests.
Worse yet, as a user drags an image around, there are lots of background
requests being made. Each request typically means a session lookup and a
session update. This is compounded by the number of active users. Since
the object of a site is to have many active users, this is always a
problem. It is less intrusive now that non-locking vacuum is there, but
that doesn't mean it isn't a problem.
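The compounding described above is easy to put numbers on. A back-of-the-envelope sketch (the figures below are hypothetical, not from the thread): under MVCC, every session UPDATE leaves one dead row version behind until vacuum reclaims it, so the dead-version count grows with users times request rate times the interval between vacuums.

```python
# Hypothetical estimate of dead session-row versions accumulating between
# vacuums, assuming one UPDATE (hence one dead version) per HTTP request.

def dead_versions(active_users, requests_per_user_per_sec, seconds):
    """Dead row versions accumulated over `seconds` between vacuums."""
    updates_per_sec = active_users * requests_per_user_per_sec
    return updates_per_sec * seconds

# 1,000 active users, 2 AJAX requests/sec each, 60 s between vacuums:
print(dead_versions(1_000, 2, 60))  # 120000 dead versions per minute
```

Even modest per-user request rates multiply out quickly, which is why the degradation shows up within a minute in the footest run above.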

Or just that they did not vacuum for so long, that performance was less
than needed in the end?

In an active site or application, vacuuming often enough to prevent this
is, itself, a load on the system.

btw, what did they switch to ?

One switched to Oracle and one is using a session handler I wrote for PHP.
One company I did work for tried to maintain a table with a single row
that indicated state; this single row would sometimes take more than a
second to query. It was horrible. I'm not sure what they ended up using,
but I wrote a shared-memory-variable C function that got rid of that
specific problem. They were trying to use PostgreSQL as the database to
implement a HUGE redundant networked file system. My personal opinion was
that their biggest problem was that they decided to use Java as the
programming environment, but that's another issue.

Obviously this is not a problem with small sites, but this is a real
problem with an enterprise level web site with millions of visitors and
actions a day.

On such a site you should design so that db load stays below 50% and run
vacuum "often"; that may even mean that you run vacuum continuously with
no wait between runs. If you run vacuum with the right settings,

Yea, but that, at least in my opinion, is a poor design.

Quite frankly it is a classic example of something that does not scale:
the more updates there are, the higher the load becomes. You can see it
in "top" as the footest program runs.

Yes, you understood correctly - the more updates, the higher the load :)

Imagine this:

Each row in a table has a single entry that represents that row. Let's call
it the "key" entry. Whether or not the key entry maintains data is an
implementation detail.

When indexing a table, the index always points to the key entry for a row.

When a row is updated, in the spirit of MVCC, a new data row is created.
The key entry is then updated to point to the new version of the row. The
new row points to the previous version of the row, and the previous entry
continues to point to its previous entry, etc.

When a row is found by the index, the key entry is found first. Much more
often than not, the latest entry in the table for a row is the correct row
for a query.

Now this works in the simplest cases, and there are edge conditions where
index keys change, of course, but that's why it's called software
development and not typing.

However, this accomplishes a few things: updates become cheaper, because
indexes do not need to be updated if the keys don't change, and
performance doesn't degrade based on the number of updates a row has had.
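The proposal above can be sketched in a few lines. This is a toy model only (all names are hypothetical, and visibility is reduced to a single transaction-id comparison): the index maps a key to a stable slot, the slot always points at the newest version, and each version links back to its predecessor, so an UPDATE never touches the index as long as the indexed key is unchanged.

```python
# Toy sketch of the "key entry" indirection scheme described above.

class Version:
    def __init__(self, data, xmin, prev=None):
        self.data = data    # tuple contents
        self.xmin = xmin    # transaction that created this version
        self.prev = prev    # older version, or None

class Table:
    def __init__(self):
        self.slots = {}     # key-entry array: slot id -> newest Version
        self.index = {}     # indexed key -> slot id (stable across updates)

    def insert(self, key, data, xid):
        slot = len(self.slots)
        self.slots[slot] = Version(data, xid)
        self.index[key] = slot

    def update(self, key, data, xid):
        slot = self.index[key]   # index entry is NOT rewritten
        self.slots[slot] = Version(data, xid, prev=self.slots[slot])

    def fetch(self, key, snapshot_xid):
        # Walk back from the newest version to the first one visible to
        # this snapshot; most of the time the first hop wins.
        v = self.slots[self.index[key]]
        while v is not None and v.xmin > snapshot_xid:
            v = v.prev
        return v.data if v else None

t = Table()
t.insert("sess42", {"hits": 1}, xid=100)
t.update("sess42", {"hits": 2}, xid=101)
print(t.fetch("sess42", snapshot_xid=101))  # {'hits': 2}, newest found first
print(t.fetch("sess42", snapshot_xid=100))  # {'hits': 1}, older snapshot walks back
```

The chain walk is the cost the scheme pays: recent snapshots stop at the head, while long-running transactions pay per version, which matches the "newest version first" argument in the message.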

#16Mark Woodward
pgsql@mohawksoft.com
In reply to: Hannu Krosing (#12)
Re: vacuum, performance, and MVCC

On a fine day, Thu, 2006-06-22 at 10:20, Jonah H. Harris wrote:

On 6/22/06, Alvaro Herrera <alvherre@commandprompt.com> wrote:

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on a row has to be updated on every
transaction that modifies that row. Is that correct?

Add an index entry, yes.

Again, this is a case for update-in-place. No need to write an extra
index entry and incur the WAL associated with it.

I guess that MySQL on its original storage does that, but they allow
only one concurrent update per table and no transactions.

Imagine a table
with 3 indexes on it... I would estimate that we perform at least 3 to
6 times more overhead than any commercial database on such an update.

One way to describe what "commercial databases" do to keep constant
update rates is to say that they either do vacuuming as part of the
update, or they just use locks and force some transactions to wait or
fail/retry.

Depending on exact details and optimisations done, this can be either
slower or faster than PostgreSQL's way, but they still need to do
something to get transactional visibility rules implemented.

I think they have a different strategy. I think they maintain the notion
of "current version" of a row, and hunt for previous versions when needed,
at least that's how I suspect Oracle does it with redo logs.

There has to be a more linear way of handling this scenario.

So vacuum the table often.

It's easy to say VACUUM often... but I'd bet that vacuuming is going
to lessen the throughput in his tests even more; no matter how it's
tuned.

Running VACUUM often/continuously will likely keep his update rate
fluctuations within a corridor of maybe 5-10%, at the cost of 1-2% extra
load. At least if vacuum is configured right and the server is not
already running at 100% IO saturation, in which case it will be worse.

Assuming the table is a reasonable size, the I/O required for vacuum
doesn't kill everything else!

The max throughput figure is not something you actually need very often
in production.

No, but you need to have some degree of certainty and predictability in
the system you are developing.

What is interesting is setting up the server so that you
can service your loads comfortably. Running the server at 100% load is
not anything you want to do on a production server. There will be things
you need to do anyway and you need some headroom for that.

Of course, you design it so peaks are easily managed, but unless you run
vacuum continuously, and that has its own set of problems, you run into
this problem, and it can get really really bad.

--
----------------
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

http://archives.postgresql.org

#17Tom Lane
tgl@sss.pgh.pa.us
In reply to: Chris Browne (#5)
Re: vacuum, performance, and MVCC

Christopher Browne <cbbrowne@acm.org> writes:

After a long battle with technology, pgsql@mohawksoft.com ("Mark Woodward"), an earthling, wrote:

Not true. Oracle does not seem to exhibit this problem.

Oracle suffers a problem in this regard that PostgreSQL doesn't; in
Oracle, rollbacks are quite expensive, as "recovery" requires doing
extra work that PostgreSQL doesn't do.

The Oracle design has got other drawbacks: if you need to access a row
version other than the very latest, you need to go searching in the
rollback segments for it. This is slow (no index help) and creates
significant amounts of contention (since lots of processes are competing
to touch the rollback segments). Plus there's the old bugaboo that
long-running transactions require indefinite amounts of rollback space,
and Oracle is apparently unable to enlarge that space on-the-fly.
(This last seems like a surmountable problem, but maybe there is some
non-obvious reason why it's hard.)

Basically there's no free lunch: if you want the benefits of MVCC it's
going to cost you somewhere. In the Postgres design you pay by having
to do VACUUM pretty often for heavily-updated tables. I don't think
that decision is fundamentally wrong --- the attractive thing about it
is that the overhead is pushed out of the foreground query-processing
code paths. We still have lots of work to do in making autovacuum
smarter, avoiding vacuuming parts of relations that have not changed,
and so on. But I have no desire to go over to an Oracle-style solution
instead. We can't beat them by trying to be like them, and we run no
small risk of falling foul of some of their patents if we do.

regards, tom lane

#18Jochem van Dieten
jochemd@gmail.com
In reply to: Mark Woodward (#15)
Re: vacuum, performance, and MVCC

On 6/22/06, Mark Woodward wrote:
(..)

thousand active sessions

(..)

If an active user causes a session update once a second

(..)

Generally speaking, sessions aren't updated when they change, they are
usually updated per HTTP request. The data in a session may not change,
but the session handling code doesn't know this and simply updates anyway.

So what you are saying is that you are doing hundreds of unnecessary
updates per second, and as a result of those unnecessary updates you
have a performance problem. Why not attack the root of the problem and
make the session handler smarter? And if you can't do that, put some
logic in the session table that turns an update without changes into a
no-op?
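The "smarter session handler" half of that suggestion can be sketched without touching the database at all. This is a hypothetical illustration, not code from the thread: the handler remembers a digest of the last payload it wrote per session and simply skips the UPDATE when the serialized session is unchanged, so no dead row version is created for a no-change request.

```python
# Hypothetical session writer that suppresses no-change UPDATEs by
# comparing a digest of the serialized session payload.

import hashlib
import json

class SessionWriter:
    def __init__(self, execute):
        self.execute = execute   # callable that performs the real UPDATE
        self._last = {}          # session id -> digest of last payload written

    def save(self, sid, payload):
        digest = hashlib.sha1(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if self._last.get(sid) == digest:
            return False         # unchanged: skip UPDATE, create no dead tuple
        self.execute(sid, payload)  # e.g. UPDATE sessions SET ... WHERE id = ...
        self._last[sid] = digest
        return True

writes = []
w = SessionWriter(lambda sid, p: writes.append(sid))
w.save("s1", {"user": 7})   # real update
w.save("s1", {"user": 7})   # skipped, identical payload
w.save("s1", {"user": 8})   # real update
print(len(writes))  # 2
```

The in-database variant (a trigger that compares OLD and NEW and suppresses the update) trades a round-trip saved for per-row comparison cost on the server; either way the dead-tuple rate drops to the rate of actual session changes.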

Jochem

#19Lukas Kahwe Smith
smith@pooteeweet.org
In reply to: Tom Lane (#17)
Re: vacuum, performance, and MVCC

Tom Lane wrote:

Basically there's no free lunch: if you want the benefits of MVCC it's
going to cost you somewhere. In the Postgres design you pay by having
to do VACUUM pretty often for heavily-updated tables. I don't think
that decision is fundamentally wrong --- the attractive thing about it
is that the overhead is pushed out of the foreground query-processing
code paths. We still have lots of work to do in making autovacuum
smarter, avoiding vacuuming parts of relations that have not changed,
and so on. But I have no desire to go over to an Oracle-style solution
instead. We can't beat them by trying to be like them, and we run no
small risk of falling foul of some of their patents if we do.

The question is just whether it makes sense to give people the option of
running some tables with a different approach where the drawbacks of the
current approach are significant. This would let them stick to
PostgreSQL as their one-stop solution.

The MySQL storage engine plugin architecture does have some merit in
general (even if you consider the rest of the RDBMS along with the
available storage engines to be inferior). Some problems simply require
different algorithms.

regards,
Lukas

#20Lukas Kahwe Smith
smith@pooteeweet.org
In reply to: Jochem van Dieten (#18)
Re: vacuum, performance, and MVCC

Jochem van Dieten wrote:

make the session handler smarter? And if you can't do that, put some
logic in the session table that turns an update without changes into a
no-op?

err, isn't that the job of the database?

regards,
Lukas

#21Jonah H. Harris
jonah.harris@gmail.com
In reply to: Tom Lane (#17)
#22D'Arcy J.M. Cain
darcy@druid.net
In reply to: Lukas Kahwe Smith (#20)
#23Rod Taylor
rbt@rbt.ca
In reply to: Chris Browne (#13)
#24Rod Taylor
rbt@rbt.ca
In reply to: Mark Woodward (#15)
#25Jonah H. Harris
jonah.harris@gmail.com
In reply to: Rod Taylor (#24)
#26Rod Taylor
rbt@rbt.ca
In reply to: Jonah H. Harris (#25)
#27Mark Woodward
pgsql@mohawksoft.com
In reply to: Tom Lane (#17)
#28Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jonah H. Harris (#25)
#29Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#17)
#30Mark Woodward
pgsql@mohawksoft.com
In reply to: Rod Taylor (#24)
#31Lukas Kahwe Smith
smith@pooteeweet.org
In reply to: Bruce Momjian (#29)
#32Tom Lane
tgl@sss.pgh.pa.us
In reply to: Lukas Kahwe Smith (#20)
#33Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#29)
#34Rod Taylor
rbt@rbt.ca
In reply to: Mark Woodward (#30)
#35PFC
lists@peufeu.com
In reply to: Mark Woodward (#7)
#36Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mark Woodward (#1)
#37Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#33)
#38Mark Woodward
pgsql@mohawksoft.com
In reply to: PFC (#35)
#39David Fetter
david@fetter.org
In reply to: Lukas Kahwe Smith (#20)
#40PFC
lists@peufeu.com
In reply to: Mark Woodward (#38)
#41Andrew Dunstan
andrew@dunslane.net
In reply to: PFC (#40)
#42Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Bruce Momjian (#29)
#43Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Mark Woodward (#38)
#44Mark Woodward
pgsql@mohawksoft.com
In reply to: PFC (#40)
#45Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Tom Lane (#32)
#46Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jim Nasby (#42)
#47Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jim Nasby (#45)
#48A.M.
agentm@themactionfaction.com
In reply to: Mark Woodward (#1)
#49Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Mark Woodward (#30)
#50Steve Atkins
steve@blighty.com
In reply to: A.M. (#48)
#51Gavin Sherry
swm@linuxworld.com.au
In reply to: A.M. (#48)
#52Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#47)
#53Chris Browne
cbbrowne@acm.org
In reply to: Mark Woodward (#1)
#54Jonah H. Harris
jonah.harris@gmail.com
In reply to: Gavin Sherry (#51)
#55Gavin Sherry
swm@linuxworld.com.au
In reply to: Jonah H. Harris (#54)
#56Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Jonah H. Harris (#54)
#57Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jim Nasby (#42)
#58Csaba Nagy
nagy@ecircle-ag.com
In reply to: Chris Browne (#13)
#59PFC
lists@peufeu.com
In reply to: Csaba Nagy (#58)
#60Csaba Nagy
nagy@ecircle-ag.com
In reply to: PFC (#59)
#61Luke Lonergan
llonergan@greenplum.com
In reply to: Csaba Nagy (#60)
#62Mark Woodward
pgsql@mohawksoft.com
In reply to: Christopher Kings-Lynne (#49)
#63Csaba Nagy
nagy@ecircle-ag.com
In reply to: Mark Woodward (#1)
#64Mark Woodward
pgsql@mohawksoft.com
In reply to: Csaba Nagy (#60)
#65Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Mark Woodward (#64)
#66Csaba Nagy
nagy@ecircle-ag.com
In reply to: Zeugswetter Andreas SB SD (#65)
#67Mark Woodward
pgsql@mohawksoft.com
In reply to: Csaba Nagy (#63)
#68Csaba Nagy
nagy@ecircle-ag.com
In reply to: Mark Woodward (#67)
#69Hannu Krosing
hannu@tm.ee
In reply to: Mark Woodward (#16)
#70Martijn van Oosterhout
kleptog@svana.org
In reply to: Csaba Nagy (#68)
#71Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Csaba Nagy (#66)
#72Jonah H. Harris
jonah.harris@gmail.com
In reply to: Alvaro Herrera (#56)
#73Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Chris Browne (#2)
#74A.M.
agentm@themactionfaction.com
In reply to: Martijn van Oosterhout (#70)
#75Mark Woodward
pgsql@mohawksoft.com
In reply to: Hannu Krosing (#69)
#76Csaba Nagy
nagy@ecircle-ag.com
In reply to: Alvaro Herrera (#73)
#77Hannu Krosing
hannu@tm.ee
In reply to: Mark Woodward (#27)
#78Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Hannu Krosing (#77)
#79Mark Woodward
pgsql@mohawksoft.com
In reply to: Alvaro Herrera (#73)
#80Csaba Nagy
nagy@ecircle-ag.com
In reply to: Zeugswetter Andreas SB SD (#78)
#81Mark Mielke
mark@mark.mielke.cc
In reply to: Alvaro Herrera (#73)
#82Hannu Krosing
hannu@tm.ee
In reply to: Mark Woodward (#75)
#83Csaba Nagy
nagy@ecircle-ag.com
In reply to: Hannu Krosing (#82)
#84Hannu Krosing
hannu@tm.ee
In reply to: Csaba Nagy (#83)
#85Tom Lane
tgl@sss.pgh.pa.us
In reply to: Csaba Nagy (#83)
#86David Fetter
david@fetter.org
In reply to: Alvaro Herrera (#6)
#87Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#1)
#88Mark Woodward
pgsql@mohawksoft.com
In reply to: Tom Lane (#85)
#89Jochem van Dieten
jochemd@gmail.com
In reply to: Mark Woodward (#88)
#90Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#85)
#91Florian Weimer
fw@deneb.enyo.de
In reply to: Gavin Sherry (#51)
#92Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#90)
#93Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#92)
#94Mark Woodward
pgsql@mohawksoft.com
In reply to: Jonah H. Harris (#87)
#95Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#94)
#96Mark Woodward
pgsql@mohawksoft.com
In reply to: Bruce Momjian (#90)
#97Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#96)
#98Stefan Kaltenbrunner
stefan@kaltenbrunner.cc
In reply to: Tom Lane (#92)
#99Jonah H. Harris
jonah.harris@gmail.com
In reply to: Tom Lane (#92)
#100Bruce Momjian
bruce@momjian.us
In reply to: Jonah H. Harris (#99)
#101Rick Gigger
rick@alpinenetworking.com
In reply to: Mark Woodward (#7)
#102Rick Gigger
rick@alpinenetworking.com
In reply to: Mark Woodward (#15)
#103Rick Gigger
rick@alpinenetworking.com
In reply to: Mark Woodward (#44)
#104Mark Mielke
mark@mark.mielke.cc
In reply to: Bruce Momjian (#93)
#105Mark Woodward
pgsql@mohawksoft.com
In reply to: Rick Gigger (#101)
#106Jan Wieck
JanWieck@Yahoo.com
In reply to: Mark Woodward (#94)
#107Jan Wieck
JanWieck@Yahoo.com
In reply to: Mark Mielke (#104)
#108Mark Mielke
mark@mark.mielke.cc
In reply to: Jan Wieck (#107)
#109Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jonah H. Harris (#95)
#110Mark Woodward
pgsql@mohawksoft.com
In reply to: Jan Wieck (#106)
In reply to: Mark Woodward (#110)
In reply to: Mark Woodward (#94)
#113Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Mark Woodward (#110)
#114Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#1)
#115Jonah H. Harris
jonah.harris@gmail.com
In reply to: Jonah H. Harris (#114)
#116Mark Woodward
pgsql@mohawksoft.com
In reply to: Heikki Linnakangas (#113)
#117Jochem van Dieten
jochemd@gmail.com
In reply to: Mark Woodward (#116)
In reply to: Mark Woodward (#116)
#119Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#1)
#120Mark Woodward
pgsql@mohawksoft.com
In reply to: Jonah H. Harris (#114)
#121Mark Woodward
pgsql@mohawksoft.com
In reply to: Jonah H. Harris (#119)
#122Jonah H. Harris
jonah.harris@gmail.com
In reply to: Mark Woodward (#121)
#123PFC
lists@peufeu.com
In reply to: Tom Lane (#92)
#124Bruce Momjian
bruce@momjian.us
In reply to: PFC (#123)
#125Mark Woodward
pgsql@mohawksoft.com
In reply to: Jonah H. Harris (#122)
#126Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Bruce Momjian (#124)
In reply to: Tom Lane (#85)
In reply to: Bruce Momjian (#100)
#129Jan Wieck
JanWieck@Yahoo.com
In reply to: Mark Woodward (#116)
#130Jan Wieck
JanWieck@Yahoo.com
In reply to: Alvaro Herrera (#36)
In reply to: Jan Wieck (#130)
In reply to: Hannu Krosing (#127)
#133Daniel Xavier de Sousa
danielucg@yahoo.com.br
In reply to: Martijn van Oosterhout (#132)
#134Chris Browne
cbbrowne@acm.org
In reply to: Mark Woodward (#1)
#135Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Daniel Xavier de Sousa (#133)
#136Bruce Momjian
bruce@momjian.us
In reply to: Heikki Linnakangas (#126)
#137Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mark Woodward (#105)
#138Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#127)
In reply to: Bruce Momjian (#138)
#140Mark Woodward
pgsql@mohawksoft.com
In reply to: Jan Wieck (#129)
#141Daniel Xavier de Sousa
danielucg@yahoo.com.br
In reply to: Alvaro Herrera (#135)
#142Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#139)
#143Jan Wieck
JanWieck@Yahoo.com
In reply to: Mark Woodward (#140)
#144Jan Wieck
JanWieck@Yahoo.com
In reply to: Bruce Momjian (#142)
#145Jan Wieck
JanWieck@Yahoo.com
In reply to: Hannu Krosing (#131)
#146Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Bruce Momjian (#136)
#147Bruce Momjian
bruce@momjian.us
In reply to: Jan Wieck (#144)
#148Bruce Momjian
bruce@momjian.us
In reply to: Heikki Linnakangas (#146)
#149Jan Wieck
JanWieck@Yahoo.com
In reply to: Bruce Momjian (#147)
In reply to: Bruce Momjian (#147)
In reply to: Mark Woodward (#140)
#152Bruce Momjian
bruce@momjian.us
In reply to: Jan Wieck (#149)
#153Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#150)
#154Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#153)
#155Jan Wieck
JanWieck@Yahoo.com
In reply to: Bruce Momjian (#152)
#156Bruce Momjian
bruce@momjian.us
In reply to: Jan Wieck (#155)
#157Jan Wieck
JanWieck@Yahoo.com
In reply to: Bruce Momjian (#156)
#158Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Jan Wieck (#157)
#159Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jan Wieck (#157)
#160Bruce Momjian
bruce@momjian.us
In reply to: Jan Wieck (#157)
#161Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#158)
#162Bruce Momjian
bruce@momjian.us
In reply to: Heikki Linnakangas (#159)
#163Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Alvaro Herrera (#158)
#164Bruce Momjian
bruce@momjian.us
In reply to: Zeugswetter Andreas SB SD (#163)
In reply to: Bruce Momjian (#162)
#166Bruce Momjian
bruce@momjian.us
In reply to: Martijn van Oosterhout (#165)
#167Mark Woodward
pgsql@mohawksoft.com
In reply to: Hannu Krosing (#128)
#168Mark Woodward
pgsql@mohawksoft.com
In reply to: Bruce Momjian (#162)
#169Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#166)
#170Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#169)
In reply to: Martijn van Oosterhout (#165)
In reply to: Mark Woodward (#167)
#173Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#171)
In reply to: Bruce Momjian (#173)
In reply to: Bruce Momjian (#173)
In reply to: Martijn van Oosterhout (#174)
#177Bruce Momjian
bruce@momjian.us
In reply to: Martijn van Oosterhout (#174)
#178Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#175)
#179PFC
lists@peufeu.com
In reply to: Bruce Momjian (#170)
#180Bruce Momjian
bruce@momjian.us
In reply to: PFC (#179)
#181PFC
lists@peufeu.com
In reply to: Bruce Momjian (#180)
#182Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Bruce Momjian (#170)
#183Bruce Momjian
bruce@momjian.us
In reply to: Zeugswetter Andreas SB SD (#182)
#184Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Mark Woodward (#62)
#185Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Heikki Linnakangas (#146)
#186Bruce Momjian
bruce@momjian.us
In reply to: Jim Nasby (#185)
#187Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Bruce Momjian (#186)
#188Bruce Momjian
bruce@momjian.us
In reply to: Jim Nasby (#187)
In reply to: Bruce Momjian (#188)
In reply to: Hannu Krosing (#189)
#191PFC
lists@peufeu.com
In reply to: Jim Nasby (#185)
In reply to: Bruce Momjian (#178)
#193Mark Woodward
pgsql@mohawksoft.com
In reply to: Hannu Krosing (#172)
#194Mark Woodward
pgsql@mohawksoft.com
In reply to: Jim Nasby (#184)
In reply to: Bruce Momjian (#177)
#196Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#190)
#197Bruce Momjian
bruce@momjian.us
In reply to: PFC (#191)
#198Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Bruce Momjian (#188)
#199Bruce Momjian
bruce@momjian.us
In reply to: Martijn van Oosterhout (#195)
#200Bruce Momjian
bruce@momjian.us
In reply to: Jim Nasby (#198)
#201Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: PFC (#191)
#202Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#197)
#203Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#202)
In reply to: Bruce Momjian (#196)
#205Bruce Momjian
bruce@momjian.us
In reply to: Hannu Krosing (#204)