pg_upgrade problem

Started by hubert depesz lubaczewski over 14 years ago. 53 messages. (hackers, general)

hi

I have 8.3.11 database, ~ 600GB in size.

I want to upgrade it to 9.0.

First, I tried with 9.0.4, and when I hit a problem (the same one), I tried
git head of the 9.0 branch.

So. I did pg_upgrade with -c, and it looked like this:

$ time pg_upgrade -c -v -b /opt/pgsql-8.3.11-int/bin/ -B /opt/pgsql-9.0.5a-int/bin/ -d /var/postgresql/6666/ -D /var/postgresql/6666-9.0 -k -l pg_upgrade.log -p 6666 -P 4329
Running in verbose mode
Performing Consistency Checks
-----------------------------
Checking old data directory (/var/postgresql/6666) ok
Checking old bin directory (/opt/pgsql-8.3.11-int/bin) ok
Checking new data directory (/var/postgresql/6666-9.0) ok
Checking new bin directory (/opt/pgsql-9.0.5a-int/bin) ok
"/opt/pgsql-8.3.11-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666" -o "-p 6666 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Checking for reg* system oid user data types ok
Checking for /contrib/isn with bigint-passing mismatch ok
Checking for invalid 'name' user columns ok
Checking for tsquery user columns ok
Checking for tsvector user columns ok
Checking for hash and gin indexes warning

| Your installation contains hash and/or gin
| indexes. These indexes have different
| internal formats between your old and new
| clusters so they must be reindexed with the
| REINDEX command. After migration, you will
| be given REINDEX instructions.

Checking for bpchar_pattern_ops indexes ok
Checking for large objects ok
"/opt/pgsql-8.3.11-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666" stop >> "pg_upgrade.log" 2>&1
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" -o "-p 4329 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Checking for presence of required libraries ok

*Clusters are compatible*
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" stop >> "pg_upgrade.log" 2>&1

real 0m6.417s
user 0m0.040s
sys 0m0.060s

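As the hash/GIN warning above notes, those indexes must be rebuilt after migration. One hedged way to generate the REINDEX statements mechanically (the catalog query is an assumption for illustration; pg_upgrade prints its own instructions after the real run):

```shell
# Build a list of REINDEX statements for all hash and GIN indexes.
# The query below is a sketch; feed it to psql against the new cluster.
SQL="SELECT 'REINDEX INDEX ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_am a ON a.oid = c.relam
WHERE c.relkind = 'i' AND a.amname IN ('hash', 'gin');"
echo "$SQL"
# Hypothetical usage against the thread's new cluster (port 4329):
#   psql -p 4329 -U postgres -Atc "$SQL" dbname | psql -p 4329 -U postgres dbname
```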
All looks ok. So I ran the upgrade without -c:

$ time pg_upgrade -v -b /opt/pgsql-8.3.11-int/bin/ -B /opt/pgsql-9.0.5a-int/bin/ -d /var/postgresql/6666/ -D /var/postgresql/6666-9.0 -k -l pg_upgrade.log -p 6666 -P 4329
Running in verbose mode
Performing Consistency Checks
-----------------------------
Checking old data directory (/var/postgresql/6666) ok
Checking old bin directory (/opt/pgsql-8.3.11-int/bin) ok
Checking new data directory (/var/postgresql/6666-9.0) ok
Checking new bin directory (/opt/pgsql-9.0.5a-int/bin) ok
"/opt/pgsql-8.3.11-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666" -o "-p 6666 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Checking for reg* system oid user data types ok
Checking for /contrib/isn with bigint-passing mismatch ok
Checking for invalid 'name' user columns ok
Checking for tsquery user columns ok
Creating script to adjust sequences ok
Checking for large objects ok
Creating catalog dump "/opt/pgsql-9.0.5a-int/bin/pg_dumpall" --port 6666 --username "postgres" --schema-only --binary-upgrade > "/var/postgresql/pg_upgrade_dump_all.sql"
ok
"/opt/pgsql-8.3.11-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666" stop >> "pg_upgrade.log" 2>&1
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" -o "-p 4329 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Checking for presence of required libraries ok

| If pg_upgrade fails after this point, you must
| re-initdb the new cluster before continuing.
| You will also need to remove the ".old" suffix
| from /var/postgresql/6666/global/pg_control.old.

Performing Migration
--------------------
Adding ".old" suffix to old global/pg_control ok
Analyzing all rows in the new cluster "/opt/pgsql-9.0.5a-int/bin/vacuumdb" --port 4329 --username "postgres" --all --analyze >> "pg_upgrade.log" 2>&1
ok
Freezing all rows on the new cluster "/opt/pgsql-9.0.5a-int/bin/vacuumdb" --port 4329 --username "postgres" --all --freeze >> "pg_upgrade.log" 2>&1
ok
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" stop >> "pg_upgrade.log" 2>&1
Deleting new commit clogs ok
Copying old commit clogs to new server cp -Rf "/var/postgresql/6666/pg_clog" "/var/postgresql/6666-9.0/pg_clog"
ok
Setting next transaction id for new cluster "/opt/pgsql-9.0.5a-int/bin/pg_resetxlog" -f -x 3673553615 "/var/postgresql/6666-9.0" > /dev/null
ok
Resetting WAL archives "/opt/pgsql-9.0.5a-int/bin/pg_resetxlog" -l 1,26478,133 "/var/postgresql/6666-9.0" >> "pg_upgrade.log" 2>&1
ok
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" -o "-p 4329 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Setting frozenxid counters in new cluster ok
Creating databases in the new cluster "/opt/pgsql-9.0.5a-int/bin/psql" --set ON_ERROR_STOP=on --no-psqlrc --port 4329 --username "postgres" -f "/var/postgresql/pg_upgrade_dump_globals.sql" --dbname template1 >> "pg_upgrade.log"
psql:/var/postgresql/pg_upgrade_dump_globals.sql:26: NOTICE: schema "check_postgres" does not exist
psql:/var/postgresql/pg_upgrade_dump_globals.sql:26: NOTICE: schema "contrib" does not exist
psql:/var/postgresql/pg_upgrade_dump_globals.sql:57: NOTICE: schema "check_postgres" does not exist
psql:/var/postgresql/pg_upgrade_dump_globals.sql:57: NOTICE: schema "ltree" does not exist
psql:/var/postgresql/pg_upgrade_dump_globals.sql:57: NOTICE: schema "pgcrypto" does not exist
ok
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" stop >> "pg_upgrade.log" 2>&1
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" -o "-p 4329 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000" start >> "pg_upgrade.log" 2>&1
Adding support functions to new cluster ok
Restoring database schema to new cluster "/opt/pgsql-9.0.5a-int/bin/psql" --set ON_ERROR_STOP=on --no-psqlrc --port 4329 --username "postgres" -f "/var/postgresql/pg_upgrade_dump_db.sql" --dbname template1 >> "pg_upgrade.log"
ok
Removing support functions from new cluster ok
"/opt/pgsql-9.0.5a-int/bin/pg_ctl" -l "pg_upgrade.log" -D "/var/postgresql/6666-9.0" stop >> "pg_upgrade.log" 2>&1
Restoring user relation files
/var/postgresql/6666/base/113953649/2613 linking /var/postgresql/6666/base/113953649/2613 to /var/postgresql/6666-9.0/base/11826/11790
/var/postgresql/6666/base/113953649/2683 linking /var/postgresql/6666/base/113953649/2683 to /var/postgresql/6666-9.0/base/11826/11792

Could not find 71637071 in old cluster

real 0m53.065s
user 0m0.520s
sys 0m0.870s

What can be wrong? How can I fix it?

I don't care about current instance - it was just a test, but I need to
know how to make the upgrade actually work.

I grepped the generated log files for this value - 71637071 - and found:

$ grep -C3 71637071 pg_upgrade*
pg_upgrade_dump_all.sql-
pg_upgrade_dump_all.sql--- For binary upgrade, must preserve relfilenodes
pg_upgrade_dump_all.sql-SELECT binary_upgrade.set_next_heap_relfilenode('71637068'::pg_catalog.oid);
pg_upgrade_dump_all.sql:SELECT binary_upgrade.set_next_toast_relfilenode('71637071'::pg_catalog.oid);
pg_upgrade_dump_all.sql-SELECT binary_upgrade.set_next_index_relfilenode('71637073'::pg_catalog.oid);
pg_upgrade_dump_all.sql-
pg_upgrade_dump_all.sql-CREATE TABLE actions (
--
pg_upgrade_dump_db.sql-
pg_upgrade_dump_db.sql--- For binary upgrade, must preserve relfilenodes
pg_upgrade_dump_db.sql-SELECT binary_upgrade.set_next_heap_relfilenode('71637068'::pg_catalog.oid);
pg_upgrade_dump_db.sql:SELECT binary_upgrade.set_next_toast_relfilenode('71637071'::pg_catalog.oid);
pg_upgrade_dump_db.sql-SELECT binary_upgrade.set_next_index_relfilenode('71637073'::pg_catalog.oid);
pg_upgrade_dump_db.sql-
pg_upgrade_dump_db.sql-CREATE TABLE actions (
--
pg_upgrade.log-linking /var/postgresql/6666/base/113953649/2613 to /var/postgresql/6666-9.0/base/11826/11790
pg_upgrade.log- /var/postgresql/6666/base/113953649/2683
pg_upgrade.log-linking /var/postgresql/6666/base/113953649/2683 to /var/postgresql/6666-9.0/base/11826/11792
pg_upgrade.log:Could not find 71637071 in old cluster

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

Best regards,

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/

#2 Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#1)
Re: [GENERAL] pg_upgrade problem

hubert depesz lubaczewski wrote:

hi

I have 8.3.11 database, ~ 600GB in size.

I want to upgrade it to 9.0.

First, I tried with 9.0.4, and when I hit a problem (the same one), I tried
git head of the 9.0 branch.

Good.

pg_upgrade_dump_db.sql-
pg_upgrade_dump_db.sql--- For binary upgrade, must preserve relfilenodes
pg_upgrade_dump_db.sql-SELECT binary_upgrade.set_next_heap_relfilenode('71637068'::pg_catalog.oid);
pg_upgrade_dump_db.sql:SELECT binary_upgrade.set_next_toast_relfilenode('71637071'::pg_catalog.oid);
pg_upgrade_dump_db.sql-SELECT binary_upgrade.set_next_index_relfilenode('71637073'::pg_catalog.oid);
pg_upgrade_dump_db.sql-
pg_upgrade_dump_db.sql-CREATE TABLE actions (
--
pg_upgrade.log-linking /var/postgresql/6666/base/113953649/2613 to /var/postgresql/6666-9.0/base/11826/11790
pg_upgrade.log- /var/postgresql/6666/base/113953649/2683
pg_upgrade.log-linking /var/postgresql/6666/base/113953649/2683 to /var/postgresql/6666-9.0/base/11826/11792
pg_upgrade.log:Could not find 71637071 in old cluster

The problem appears to be that the Postgres catalogs think there is a
toast table for 'actions', while the file system doesn't seem to have
such a file. Can you look in pg_class and verify that?

SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';

Then look in the file system to see if there is a matching file.
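A minimal sketch of that filesystem check. The OID values below are purely illustrative (they happen to match numbers reported later in this thread); substitute your own query results:

```shell
# A relation's data file lives at $PGDATA/base/<database oid>/<relfilenode>.
PGDATA=/var/postgresql/6666
DBOID=71635381        # SELECT oid FROM pg_database WHERE datname = current_database();
TOASTOID=71637071     # SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';

FILE="$PGDATA/base/$DBOID/$TOASTOID"
echo "$FILE"
ls -l "$FILE" 2>/dev/null || echo "no such file: $FILE"
```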

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

There was a bug in how pg_upgrade worked in pre-9.0.4 --- could it
have been that?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#3 hubert depesz lubaczewski
In reply to: Bruce Momjian (#2)
Re: [GENERAL] pg_upgrade problem

On Thu, Aug 25, 2011 at 04:33:07PM -0400, Bruce Momjian wrote:

The problem appears to be that the Postgres catalogs think there is a
toast table for 'actions', while the file system doesn't seem to have
such a file. Can you look in pg_class and verify that?

SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';

$ SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';
reltoastrelid
---------------
(0 rows)

This was done not on the pg restored from backup, but on normal production, as the test
pg instance doesn't work anymore.

I can re-create the test instance, but extracting it from backup and making it
apply all xlogs usually takes 2-3 days.

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

There was a bug in how pg_upgrade worked in pre-9.0.4 --- could it
have been that?

It was definitely done using 9.0.4.

Best regards,

depesz

#4 Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#3)
Re: [GENERAL] pg_upgrade problem

hubert depesz lubaczewski wrote:

On Thu, Aug 25, 2011 at 04:33:07PM -0400, Bruce Momjian wrote:

The problem appears to be that the Postgres catalogs think there is a
toast table for 'actions', while the file system doesn't seem to have
such a file. Can you look in pg_class and verify that?

SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';

$ SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';
reltoastrelid
---------------
(0 rows)

This was done not on the pg restored from backup, but on normal production, as the test
pg instance doesn't work anymore.

I can re-create the test instance, but extracting it from backup and making it
apply all xlogs usually takes 2-3 days.

If you remove the ".old" extension on pg_control, you can start the old
cluster and check it. This is explained by the pg_upgrade output:

| If pg_upgrade fails after this point, you must
| re-initdb the new cluster before continuing.
| You will also need to remove the ".old" suffix
| from /var/postgresql/6666/global/pg_control.old.

Please check the old cluster.
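A hedged sketch of that recovery step, demonstrated on a throwaway mock layout rather than a live cluster (the real data directory in this thread is /var/postgresql/6666):

```shell
# Mock a data directory layout like the one pg_upgrade left behind.
OLDDATA=$(mktemp -d)
mkdir -p "$OLDDATA/global"
touch "$OLDDATA/global/pg_control.old"

# The actual fix: drop the ".old" suffix that pg_upgrade added.
mv "$OLDDATA/global/pg_control.old" "$OLDDATA/global/pg_control"
ls "$OLDDATA/global"

# Then start with the *old* binaries on the old port, e.g.:
#   /opt/pgsql-8.3.11-int/bin/pg_ctl -D /var/postgresql/6666 -o "-p 6666" start
```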

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

There was a bug in how pg_upgrade worked in pre-9.0.4 --- could it
have been that?

It was definitely done using 9.0.4.

Good.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#5 hubert depesz lubaczewski
In reply to: Bruce Momjian (#4)
Re: [GENERAL] pg_upgrade problem

On Thu, Aug 25, 2011 at 04:43:02PM -0400, Bruce Momjian wrote:

Please check the old cluster.

Sure:

=# SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';
reltoastrelid
---------------
82510395
71637071
(2 rows)

=# SELECT oid::regclass, reltoastrelid FROM pg_class WHERE relname = 'actions';
oid | reltoastrelid
---------------+---------------
xxxxx.actions | 82510395
yyyyy.actions | 71637071
(2 rows)

=# select oid, relfilenode from pg_class where oid in (SELECT reltoastrelid FROM pg_class WHERE relname = 'actions');
oid | relfilenode
----------+-------------
82510395 | 82510395
71637071 | 71637071
(2 rows)

=# select oid from pg_database where datname = current_database();
oid
----------
71635381
(1 row)

$ ls -l 6666/base/71635381/{71637071,82510395}
-rw------- 1 postgres postgres 0 2009-10-12 06:49 6666/base/71635381/71637071
-rw------- 1 postgres postgres 0 2010-08-19 14:02 6666/base/71635381/82510395

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

There was a bug in how pg_upgrade worked in pre-9.0.4 --- could it
have been that?

It was definitely done using 9.0.4.

Good.

Not sure if it's good, since it was after the clog error was fixed, and
I still got it :/

but anyway - the problem with 71637071 is more important now.

Best regards,

depesz

#6 Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#5)
Re: [GENERAL] pg_upgrade problem

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

---------------------------------------------------------------------------

hubert depesz lubaczewski wrote:

On Thu, Aug 25, 2011 at 04:43:02PM -0400, Bruce Momjian wrote:

Please check the old cluster.

Sure:

=# SELECT reltoastrelid FROM pg_class WHERE relname = 'actions';
reltoastrelid
---------------
82510395
71637071
(2 rows)

=# SELECT oid::regclass, reltoastrelid FROM pg_class WHERE relname = 'actions';
oid | reltoastrelid
---------------+---------------
xxxxx.actions | 82510395
yyyyy.actions | 71637071
(2 rows)

=# select oid, relfilenode from pg_class where oid in (SELECT reltoastrelid FROM pg_class WHERE relname = 'actions');
oid | relfilenode
----------+-------------
82510395 | 82510395
71637071 | 71637071
(2 rows)

=# select oid from pg_database where datname = current_database();
oid
----------
71635381
(1 row)

$ ls -l 6666/base/71635381/{71637071,82510395}
-rw------- 1 postgres postgres 0 2009-10-12 06:49 6666/base/71635381/71637071
-rw------- 1 postgres postgres 0 2010-08-19 14:02 6666/base/71635381/82510395

One more thing - one of my earlier tests actually made it through
pg_upgrade, but when running vacuumdb -az on the newly started 9.0.4, I got an
error about a missing transaction/clog file - I don't remember exactly what it
was, though.

There was a bug in how pg_upgrade worked in pre-9.0.4 --- could it
have been that?

It was definitely done using 9.0.4.

Good.

Not sure if it's good, since it was after the clog error was fixed, and
I still got it :/

but anyway - the problem with 71637071 is more important now.

Best regards,

depesz

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

Attachments:

/rtmp/pg_upgrade.9.0 (text/x-diff, +47 -47)
/rtmp/pg_upgrade.9.1 (text/x-diff, +35 -35)
#7 hubert depesz lubaczewski
In reply to: Bruce Momjian (#6)
Re: [GENERAL] pg_upgrade problem

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

Best regards,

depesz

#8 hubert depesz lubaczewski
In reply to: hubert depesz lubaczewski (#7)
Re: [GENERAL] pg_upgrade problem

On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The failure looks very similar to the one I had on 9.0.4.

After a long vacuum I got:
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

Unfortunately at the moment, I no longer have the old (8.3) setup, but I do
have the 9.0.X and will be happy to provide any info you might need to help me
debug/fix the problem.

Best regards,

depesz

#9 hubert depesz lubaczewski
In reply to: hubert depesz lubaczewski (#8)
Re: [GENERAL] pg_upgrade problem

On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The failure looks very similar to the one I had on 9.0.4.

After a long vacuum I got:
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

Unfortunately at the moment, I no longer have the old (8.3) setup, but I do
have the 9.0.X and will be happy to provide any info you might need to help me
debug/fix the problem.

this pg_toast is related to table "transactions", which was vacuumed
like this:

INFO: vacuuming "public.transactions"
INFO: index "transaction_id_pkey" now contains 50141303 row versions in 144437 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.08s/0.13u sec elapsed 173.04 sec.
INFO: index "transactions_creation_tsz_idx" now contains 50141303 row versions in 162634 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.19s/0.23u sec elapsed 77.45 sec.
INFO: index "fki_transactions_xxxxxxxxxx_fkey" now contains 50141303 row versions in 163466 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.13s/0.29u sec elapsed 65.45 sec.
INFO: index "fki_transactions_xxxxxxxx_fkey" now contains 50141303 row versions in 146528 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.15s/0.24u sec elapsed 50.28 sec.
INFO: index "fki_transactions_xxxxxxxxxxxxx_fkey" now contains 50141303 row versions in 190914 pages
DETAIL: 0 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU 1.49s/0.17u sec elapsed 67.95 sec.
INFO: index "transactions_xxxxxxxxxxxxxxxxxxxxxxxxxx_id" now contains 50141303 row versions in 164669 pages
DETAIL: 0 index row versions were removed.
2 index pages have been deleted, 0 are currently reusable.
CPU 1.36s/0.18u sec elapsed 62.83 sec.
INFO: "transactions": found 0 removable, 39644831 nonremovable row versions in 5978240 out of 7312036 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 8209452 unused item pointers.
0 pages are entirely empty.
CPU 75.75s/18.57u sec elapsed 9268.19 sec.
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

Interestingly.

In the old dir there is a pg_clog directory with files:
0AC0 .. 0DAF (including 0CC6, size 262144)
but the new pg_clog has only:
0D2F .. 0DB0

File content - nearly all files that exist in both places are the same, with the exception of the 2 newest ones in the new datadir:
3c5122f3e80851735c19522065a2d12a 0DAF
8651fc2b9fa3d27cfb5b496165cead68 0DB0

0DB0 doesn't exist in old, and 0DAF has different md5sum: 7d48996c762d6a10f8eda88ae766c5dd

one more thing. I did select count(*) from transactions and it worked.

that's about it. I can probably copy over files from the old datadir to the new
one (in pg_clog/), and will be happy to do it, but I'll wait for your call -
retrying with copied files might destroy some evidence.
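A hedged sketch of that copy, done non-destructively (only segments the new cluster lacks are copied, so no evidence is overwritten). Demonstrated on a mock layout; the real directories in this thread are /var/postgresql/6666/pg_clog and /var/postgresql/6666-9.0/pg_clog:

```shell
# Mock two pg_clog directories echoing the thread: the old one has 0CC6,
# the new one does not.
OLD=$(mktemp -d); NEW=$(mktemp -d)
touch "$OLD/0CC6" "$OLD/0DAF" "$NEW/0DAF" "$NEW/0DB0"

# Copy only the segments missing from the new directory.
for f in "$OLD"/*; do
    b=$(basename "$f")
    if [ ! -e "$NEW/$b" ]; then
        echo "missing from new: $b"
        cp "$f" "$NEW/"
    fi
done
ls "$NEW"
```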

Best regards,

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/

#10 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: hubert depesz lubaczewski (#9)
Re: [GENERAL] pg_upgrade problem

Excerpts from hubert depesz lubaczewski's message of Mon Aug 29 14:49:24 -0300 2011:

On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The failure looks very similar to the one I had on 9.0.4.

After a long vacuum I got:
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

I don't understand the pg_upgrade code here. It is setting the
datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,

/* set pg_class.relfrozenxid */
PQclear(executeQueryOrDie(conn,
"UPDATE pg_catalog.pg_class "
"SET relfrozenxid = '%u' "
/* only heap and TOAST are vacuumed */
"WHERE relkind IN ('r', 't')",
old_cluster.controldata.chkpnt_nxtxid));

but I don't see why this is safe. I mean, surely the previous
vacuum might have been a lot earlier than that. Are these values reset
to more correct values (i.e. older ones) later somehow? My question is,
why isn't the new cluster completely screwed?

I wonder if pg_upgrade shouldn't be doing the conservative thing here,
which AFAICT would be to set all frozenxid values as furthest in the
past as possible (without causing a shutdown-due-to-wraparound, and
maybe without causing autovacuum to enter emergency mode either).
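A hedged way to eyeball whether the carried-over frozenxid values are sane after an upgrade (the port and the psql invocation are assumptions based on this thread's setup):

```shell
# List the relations whose relfrozenxid is furthest in the past, to see
# what pg_upgrade actually carried over.
Q="SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind IN ('r', 't')   -- only heap and TOAST are vacuumed
ORDER BY age(relfrozenxid) DESC
LIMIT 10;"
echo "$Q"
# Hypothetical usage against the new cluster:
#   psql -p 4329 -U postgres -c "$Q" dbname
```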

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#11 daveg
daveg@sonic.net
In reply to: hubert depesz lubaczewski (#9)
Re: [GENERAL] pg_upgrade problem

On Mon, Aug 29, 2011 at 07:49:24PM +0200, hubert depesz lubaczewski wrote:

On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

Interestingly.

In the old dir there is a pg_clog directory with files:
0AC0 .. 0DAF (including 0CC6, size 262144)
but the new pg_clog has only:
0D2F .. 0DB0

File content - nearly all files that exist in both places are the same, with the exception of the 2 newest ones in the new datadir:
3c5122f3e80851735c19522065a2d12a 0DAF
8651fc2b9fa3d27cfb5b496165cead68 0DB0

0DB0 doesn't exist in old, and 0DAF has different md5sum: 7d48996c762d6a10f8eda88ae766c5dd

one more thing. I did select count(*) from transactions and it worked.

that's about it. I can probably copy over files from the old datadir to the new
one (in pg_clog/), and will be happy to do it, but I'll wait for your call -
retrying with copied files might destroy some evidence.

I had this same thing happen this past Saturday, and my client had to
restore the whole 2+ TB instance from the previous day's pg_dumps.
I had been thinking that perhaps I did something wrong in setting up or
running the upgrade, but had not found it yet. Now that I see Hubert has
the same problem, it is starting to look like pg_upgrade can eat all your
data.

After running pg_upgrade apparently successfully and analyzing all the
tables, we restarted the production workload and started getting errors:

2011-08-27 04:18:34.015 12337 c06 postgres ERROR: could not access status of transaction 2923961093
2011-08-27 04:18:34.015 12337 c06 postgres DETAIL: Could not open file "pg_clog/0AE4": No such file or directory.
2011-08-27 04:18:34.015 12337 c06 postgres STATEMENT: analyze public.b_pxx;

On examination, the pg_clog directory contained only two files, timestamped
after the startup of the new cluster with 9.0.4. Other hosts that upgraded
successfully had numerous files in pg_clog dating back a few days. So it
appears that all the clog files went missing during the upgrade somehow.

This happened upgrading from 8.4.7 to 9.0.4, with a brief session in between
at 8.4.8. We have upgraded several hosts to 9.0.4 successfully before.

-dg

--
David Gould daveg@sonic.net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.

#12 Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#7)
Re: [GENERAL] pg_upgrade problem

hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas.
Users can either wait for 9.1 RC2 or Final, or use the patch I posted.
The bug is not in 9.0.4 and will not be in 9.0.5.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#13 Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#10)
Re: [GENERAL] pg_upgrade problem

Alvaro Herrera wrote:

Excerpts from hubert depesz lubaczewski's message of Mon Aug 29 14:49:24 -0300 2011:

On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The failure looks very similar to the one I had on 9.0.4.

After a long vacuum I got:
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

I don't understand the pg_upgrade code here. It is setting the
datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,

/* set pg_class.relfrozenxid */
PQclear(executeQueryOrDie(conn,
"UPDATE pg_catalog.pg_class "
"SET relfrozenxid = '%u' "
/* only heap and TOAST are vacuumed */
"WHERE relkind IN ('r', 't')",
old_cluster.controldata.chkpnt_nxtxid));

but I don't see why this is safe. I mean, surely the previous
vacuum might have been a lot earlier than that. Are these values reset
to more correct values (i.e. older ones) later somehow? My question is,
why isn't the new cluster completely screwed?

Have you looked at my pg_upgrade presentation?

http://momjian.us/main/presentations/features.html#pg_upgrade

This query happens after we have done a VACUUM FREEZE on an empty
cluster.

pg_dump --binary-upgrade will dump out the proper relfrozenxid values for
every object that gets its file system files copied or linked.

I wonder if pg_upgrade shouldn't be doing the conservative thing here,
which AFAICT would be to set all frozenxid values as furthest in the
past as possible (without causing a shutdown-due-to-wraparound, and
maybe without causing autovacuum to enter emergency mode either).

I already get complaints about requiring an "analyze" run after the
upgrade --- this would make it much worse. In fact I have to look into
upgrading optimizer statistics someday.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#14 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#12)
Re: [GENERAL] pg_upgrade problem

On Wed, Aug 31, 2011 at 12:16 PM, Bruce Momjian <bruce@momjian.us> wrote:

hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful.  I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables.  (The bug is not in any released version of pg_upgrade.)  The
attached, applied patches should fix it for you.  I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas.
Users can either wait for 9.1 RC2 or Final, or use the patch I posted.
The bug is not in 9.0.4 and will not be in 9.0.5.

Based on subsequent discussion on this thread, it sounds like
something is still broken.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#15hubert depesz lubaczewski
In reply to: Bruce Momjian (#12)
hackers, general
Re: [GENERAL] pg_upgrade problem

On Wed, Aug 31, 2011 at 12:16:03PM -0400, Bruce Momjian wrote:

hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas.
Users can either wait for 9.1 RC2 or Final, or use the patch I posted.
The bug is not in 9.0.4 and will not be in 9.0.5.

I assume you mean the bug that caused pg_upgrade to fail.

But there still is (existing in 9.0.4 too) bug which causes vacuum to
fail.

Best regards,

depesz

#16Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#9)
hackers, general
Re: [GENERAL] pg_upgrade problem

hubert depesz lubaczewski wrote:

INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

Interestingly.

In old dir there is pg_clog directory with files:
0AC0 .. 0DAF (including 0CC6, size 262144)
but new pg_clog has only:
0D2F .. 0DB0

File content - nearly all files that exist in both places are the same, with the exception of the 2 newest ones in the new datadir:
3c5122f3e80851735c19522065a2d12a 0DAF
8651fc2b9fa3d27cfb5b496165cead68 0DB0

0DB0 doesn't exist in old, and 0DAF has different md5sum: 7d48996c762d6a10f8eda88ae766c5dd

one more thing. I did select count(*) from transactions and it worked.

Count(*) worked because it didn't access any of the long/toasted values.

that's about it. I can probably copy over files from old datadir to new (in
pg_clog/), and will be happy to do it, but I'll wait for your call - retrying
with copied files might destroy some evidence.

You can safely copy over any of the clog files that exist in the old
cluster but not in the new one, but another vacuum is likely to remove
those files again. :-(

This sure sounds like a variation on the pg_upgrade/toast bug we fixed
in 9.0.4:

http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix

Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
tables involved?

FYI, this is what pg_dump --binary-upgrade does to preserve the
relfrozenxids:

-- For binary upgrade, set heap's relfrozenxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '702'
WHERE oid = 'test'::pg_catalog.regclass;

-- For binary upgrade, set toast's relfrozenxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '702'
WHERE oid = '16434';

We also preserve the pg_class oids with:

-- For binary upgrade, must preserve pg_class oids
SELECT binary_upgrade.set_next_heap_pg_class_oid('16431'::pg_catalog.oid);
SELECT binary_upgrade.set_next_toast_pg_class_oid('16434'::pg_catalog.oid);
SELECT binary_upgrade.set_next_index_pg_class_oid('16436'::pg_catalog.oid);

The question is whether this is working, and if not, why not?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#17Bruce Momjian
bruce@momjian.us
In reply to: hubert depesz lubaczewski (#15)
hackers, general
Re: [GENERAL] pg_upgrade problem

hubert depesz lubaczewski wrote:

On Wed, Aug 31, 2011 at 12:16:03PM -0400, Bruce Momjian wrote:

hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas.
Users can either wait for 9.1 RC2 or Final, or use the patch I posted.
The bug is not in 9.0.4 and will not be in 9.0.5.

I assume you mean the bug that caused pg_upgrade to fail.

Yes.

But there still is (existing in 9.0.4 too) bug which causes vacuum to
fail.

Yes. We need to find the cause of that new bug.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#18Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#13)
hackers, general
Re: [GENERAL] pg_upgrade problem

Excerpts from Bruce Momjian's message of Wed Aug 31 13:23:07 -0300 2011:

Alvaro Herrera wrote:

Excerpts from hubert depesz lubaczewski's message of Mon Aug 29 14:49:24 -0300 2011:

On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:

On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:

OK, this was very helpful. I found out that there is a bug in current
9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
tables. (The bug is not in any released version of pg_upgrade.) The
attached, applied patches should fix it for you. I assume you are
running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The fail looks very similar to the one I had on 9.0.4.

After long vacuum I got:
INFO: vacuuming "pg_toast.pg_toast_106668498"
vacuumdb: vacuuming of database "etsy_v2" failed: ERROR: could not access status of transaction 3429738606
DETAIL: Could not open file "pg_clog/0CC6": No such file or directory.

I don't understand the pg_upgrade code here. It is setting the
datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,

/* set pg_class.relfrozenxid */
PQclear(executeQueryOrDie(conn,
"UPDATE pg_catalog.pg_class "
"SET relfrozenxid = '%u' "
/* only heap and TOAST are vacuumed */
"WHERE relkind IN ('r', 't')",
old_cluster.controldata.chkpnt_nxtxid));

but I don't see why this is safe. I mean, surely the previous
vacuum might have been a lot earlier than that. Are these values reset
to more correct values (i.e. older ones) later somehow? My question is,
why isn't the new cluster completely screwed?

Have you looked at my pg_upgrade presentation?

http://momjian.us/main/presentations/features.html#pg_upgrade

I just did, but it doesn't explain this in much detail. (In any case I
don't think we should be relying on a PDF presentation to explain the
inner pg_upgrade details. I think we should rely more on the
IMPLEMENTATION file than on your PDF ... amusingly, that file doesn't
mention the frozenxids.)

This query happens after we have done a VACUUM FREEZE on an empty
cluster.

Oh, so it only affects the databases that initdb created, right?
The other ones are not even created yet.

pg_dump --binary-upgrade will dump out the proper relfrozen xids for
every object that gets its file system files copied or linked.

Okay. I assume that between the moment you copy the pg_clog files from
the old server, and the moment you do the UPDATEs on pg_class and
pg_database, there is no chance for vacuum to run and remove clog
segments.

Still, it seems to me that this coding makes Min(datfrozenxid) go
backwards, and that's bad news.

I wonder if pg_upgrade shouldn't be doing the conservative thing here,
which AFAICT would be to set all frozenxid values as far in the
past as possible (without causing a shutdown-due-to-wraparound, and
maybe without causing autovacuum to enter emergency mode either).

I already get complaints about requiring an "analyze" run after the
upgrade --- this would make it much worse. In fact I have to look into
upgrading optimizer statistics someday.

Why would it make it worse at all? It doesn't look to me like it
would affect that in any way. The only thing it does is tell the system
to keep clog segments around.

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#19Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#18)
hackers, general
Re: [GENERAL] pg_upgrade problem

Alvaro Herrera wrote:

I don't understand the pg_upgrade code here. It is setting the
datfrozenxid and relfrozenxid values to the latest checkpoint's NextXID,

/* set pg_class.relfrozenxid */
PQclear(executeQueryOrDie(conn,
"UPDATE pg_catalog.pg_class "
"SET relfrozenxid = '%u' "
/* only heap and TOAST are vacuumed */
"WHERE relkind IN ('r', 't')",
old_cluster.controldata.chkpnt_nxtxid));

but I don't see why this is safe. I mean, surely the previous
vacuum might have been a lot earlier than that. Are these values reset
to more correct values (i.e. older ones) later somehow? My question is,
why isn't the new cluster completely screwed?

Have you looked at my pg_upgrade presentation?

http://momjian.us/main/presentations/features.html#pg_upgrade

I just did, but it doesn't explain this in much detail. (In any case I
don't think we should be relying on a PDF presentation to explain the
inner pg_upgrade details. I think we should rely more on the
IMPLEMENTATION file than on your PDF ... amusingly, that file doesn't
mention the frozenxids.)

This query happens after we have done a VACUUM FREEZE on an empty
cluster.

Oh, so it only affects the databases that initdb created, right?
The other ones are not even created yet.

Right.

pg_dump --binary-upgrade will dump out the proper relfrozen xids for
every object that gets its file system files copied or linked.

Okay. I assume that between the moment you copy the pg_clog files from
the old server, and the moment you do the UPDATEs on pg_class and
pg_database, there is no chance for vacuum to run and remove clog
segments.

Right, we disable it, and had a long discussion about it. We actually
start the server with:

"-c autovacuum=off -c autovacuum_freeze_max_age=2000000000",

Still, it seems to me that this coding makes Min(datfrozenxid) go
backwards, and that's bad news.

Yes, it is odd, but I don't see another option. Remember the problem
with xid wrap-around --- we really are defining two different xid eras,
and have to freeze to make that possible.
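Some background on the "two different xid eras" point: ordinary (non-frozen) 32-bit XIDs are compared on a circle, so each xid treats the 2^31 ids behind it as "past" and the 2^31 ahead as "future". A rough Python sketch of that rule (modeled loosely on the server's TransactionIdPrecedes; the real function also special-cases permanent XIDs):

```python
def xid_precedes(a, b):
    """True if 32-bit xid a logically precedes xid b.

    The difference is reduced to a signed 32-bit value, so comparison
    wraps around modulo 2^32 rather than being a plain integer compare.
    """
    diff = (a - b) & 0xFFFFFFFF
    if diff >= 0x80000000:          # interpret as negative signed 32-bit
        diff -= 0x100000000
    return diff < 0
```

This is why values frozen before the upgrade must be treated specially: without freezing, xids from the old era could compare as "future" relative to the new cluster's counter.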

I wonder if pg_upgrade shouldn't be doing the conservative thing here,
which AFAICT would be to set all frozenxid values as far in the
past as possible (without causing a shutdown-due-to-wraparound, and
maybe without causing autovacuum to enter emergency mode either).

I already get complaints about requiring an "analyze" run after the
upgrade --- this would make it much worse. In fact I have to look into
upgrading optimizer statistics someday.

Why would it make it worse at all? It doesn't look to me like it
would affect that in any way. The only thing it does is tell the system
to keep clog segments around.

It will cause excessive vacuum freezing to happen on startup, I assume.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#20hubert depesz lubaczewski
In reply to: Bruce Momjian (#16)
hackers, general
Re: [GENERAL] pg_upgrade problem

On Wed, Aug 31, 2011 at 01:23:05PM -0400, Bruce Momjian wrote:

Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
tables involved?

Sure:

=# select oid::regclass, relfrozenxid from pg_class where relname in ('transactions', 'pg_toast_106668498');
oid | relfrozenxid
-----------------------------+--------------
pg_toast.pg_toast_106668498 | 3673553926
transactions | 3623560321
(2 rows)
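For what it's worth, those values line up with the missing-file error. Assuming default build settings (8 kB pages, 2 clog status bits per transaction, 32 pages per SLRU segment, so 1,048,576 transactions per file), both relfrozenxids map to segments the new cluster still has (0D7F and 0DAF), while the failing xid 3429738606 maps to 0CC6, which vacuum already truncated away; that suggests a tuple older than the recorded relfrozenxid was still on disk. A quick sanity check:

```python
def clog_segment(xid, blcksz=8192, pages_per_segment=32, bits_per_xact=2):
    """Name of the pg_clog segment file holding the status of `xid`."""
    xacts_per_page = blcksz * 8 // bits_per_xact            # 32768
    xacts_per_segment = xacts_per_page * pages_per_segment  # 1048576
    return format(xid // xacts_per_segment, '04X')

print(clog_segment(3673553926))  # toast relfrozenxid -> 0DAF (still present)
print(clog_segment(3623560321))  # heap relfrozenxid  -> 0D7F (still present)
print(clog_segment(3429738606))  # failing xid        -> 0CC6 (already removed)
```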

Best regards,

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/

Remaining messages in the thread (headers only):

#21 Bruce Momjian, in reply to daveg (#11)
#22 Lou Picciano, in reply to Bruce Momjian (#21)
#23 Bruce Momjian, in reply to hubert depesz lubaczewski (#20)
#24 Bruce Momjian, in reply to daveg (#11)
#25 Bruce Momjian, in reply to Lou Picciano (#22)
#26 Bruce Momjian, in reply to Bruce Momjian (#25)
#27 Lou Picciano, in reply to Bruce Momjian (#25)
#28 hubert depesz lubaczewski, in reply to Bruce Momjian (#23)
#29 hubert depesz lubaczewski, in reply to hubert depesz lubaczewski (#28)
#30 Bruce Momjian, in reply to hubert depesz lubaczewski (#29)
#31 hubert depesz lubaczewski, in reply to hubert depesz lubaczewski (#29)
#32 Bruce Momjian, in reply to Bruce Momjian (#23)
#33 Bruce Momjian, in reply to hubert depesz lubaczewski (#31)
#34 hubert depesz lubaczewski, in reply to Bruce Momjian (#33)
#35 Bruce Momjian, in reply to hubert depesz lubaczewski (#34)
#36 Tom Lane, in reply to hubert depesz lubaczewski (#34)
#37 hubert depesz lubaczewski, in reply to Bruce Momjian (#35)
#38 Bruce Momjian, in reply to hubert depesz lubaczewski (#37)
#39 hubert depesz lubaczewski, in reply to Bruce Momjian (#38)
#40 Bruce Momjian, in reply to hubert depesz lubaczewski (#39)
#41 Bruce Momjian, in reply to hubert depesz lubaczewski (#37)
#42 Tom Lane, in reply to Bruce Momjian (#41)
#43 daveg, in reply to Bruce Momjian (#24)
#44 Bruce Momjian, in reply to daveg (#43)
#45 daveg, in reply to Bruce Momjian (#44)
#46 Bruce Momjian, in reply to daveg (#45)
#47 Peter Eisentraut, in reply to Bruce Momjian (#40)
#48 hubert depesz lubaczewski, in reply to Tom Lane (#42)
#49 hubert depesz lubaczewski, in reply to hubert depesz lubaczewski (#48)
#50 Tom Lane, in reply to hubert depesz lubaczewski (#49)
#51 Bruce Momjian, in reply to Tom Lane (#50)
#52, in reply to Bruce Momjian (#51)
#53 Bruce Momjian, in reply to Bruce Momjian (#51)