Make deparsing of column defaults faster

Started by Jeff Janes over 7 years ago, 10 messages
#1 Jeff Janes
jeff.janes@gmail.com
1 attachment(s)

The user docs say about column defaults: "The value is any variable-free
expression (subqueries and cross-references to other columns in the current
table are not allowed)"

And also say about pg_get_expr "If the expression might contain Vars,
specify the OID of the relation they refer to as the second parameter; if
no Vars are expected, zero is sufficient"

Since defaults can't contain Vars, this patch converts the second parameter
to zero in places where pg_get_expr is invoked on column defaults. Doing
this drops the time needed to run `pg_dump -s` on a 1600 column table with
defaults on all columns by a factor of 40. So pg_upgrade users with such
tables should see a big win (cc Justin).

I also made the change to the other places in the source which I could
identify with this issue. \d is now much faster for such tables as well,
but I didn't test the performance of the others.

There are surely more opportunities to speed up pg_get_expr for things
other than column defaults, but I wanted to submit this for now.

I have not tested the pg_upgrade using source versions <8.4, because I
can't get anything that old to "make check" correctly, so I have nothing to
test on.

Cheers,

Jeff

Attachments:

default_deparse_novar.patch (application/octet-stream)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 78b0f43..90dbf52 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -4467,7 +4467,7 @@ postgresImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid)
 								   "  attname, "
 								   "  format_type(atttypid, atttypmod), "
 								   "  attnotnull, "
-								   "  pg_get_expr(adbin, adrelid), "
+								   "  pg_get_expr(adbin, 0), "
 								   "  collname, "
 								   "  collnsp.nspname "
 								   "FROM pg_class c "
@@ -4488,7 +4488,7 @@ postgresImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid)
 								   "  attname, "
 								   "  format_type(atttypid, atttypmod), "
 								   "  attnotnull, "
-								   "  pg_get_expr(adbin, adrelid), "
+								   "  pg_get_expr(adbin, 0), "
 								   "  NULL, NULL "
 								   "FROM pg_class c "
 								   "  JOIN pg_namespace n ON "
diff --git a/src/backend/catalog/information_schema.sql b/src/backend/catalog/information_schema.sql
index f4e69f4..23b552c 100644
--- a/src/backend/catalog/information_schema.sql
+++ b/src/backend/catalog/information_schema.sql
@@ -299,7 +299,7 @@ CREATE VIEW attributes AS
            CAST(c.relname AS sql_identifier) AS udt_name,
            CAST(a.attname AS sql_identifier) AS attribute_name,
            CAST(a.attnum AS cardinal_number) AS ordinal_position,
-           CAST(pg_get_expr(ad.adbin, ad.adrelid) AS character_data) AS attribute_default,
+           CAST(pg_get_expr(ad.adbin, 0) AS character_data) AS attribute_default,
            CAST(CASE WHEN a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) THEN 'NO' ELSE 'YES' END
              AS yes_or_no)
              AS is_nullable, -- This column was apparently removed between SQL:2003 and SQL:2008.
@@ -656,7 +656,7 @@ CREATE VIEW columns AS
            CAST(c.relname AS sql_identifier) AS table_name,
            CAST(a.attname AS sql_identifier) AS column_name,
            CAST(a.attnum AS cardinal_number) AS ordinal_position,
-           CAST(pg_get_expr(ad.adbin, ad.adrelid) AS character_data) AS column_default,
+           CAST(pg_get_expr(ad.adbin, 0) AS character_data) AS column_default,
            CAST(CASE WHEN a.attnotnull OR (t.typtype = 'd' AND t.typnotnull) THEN 'NO' ELSE 'YES' END
              AS yes_or_no)
              AS is_nullable,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..f1db915 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8331,11 +8331,23 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 						  tbinfo->dobj.namespace->dobj.name,
 						  tbinfo->dobj.name);
 
-			printfPQExpBuffer(q, "SELECT tableoid, oid, adnum, "
-							  "pg_catalog.pg_get_expr(adbin, adrelid) AS adsrc "
-							  "FROM pg_catalog.pg_attrdef "
-							  "WHERE adrelid = '%u'::pg_catalog.oid",
-							  tbinfo->dobj.catId.oid);
+			if (fout->remoteVersion >= 80400)
+			{
+				printfPQExpBuffer(q, "SELECT tableoid, oid, adnum, "
+								  "pg_catalog.pg_get_expr(adbin, 0) AS adsrc "
+								  "FROM pg_catalog.pg_attrdef "
+								  "WHERE adrelid = '%u'::pg_catalog.oid",
+								  tbinfo->dobj.catId.oid);
+			}
+			else
+			{
+				/* Before 8.4, pg_get_expr does not allow 0 for its second arg */
+				printfPQExpBuffer(q, "SELECT tableoid, oid, adnum, "
+								  "pg_catalog.pg_get_expr(adbin, 'pg_catalog.pg_type'::pg_catalog.regclass) AS adsrc "
+								  "FROM pg_catalog.pg_attrdef "
+								  "WHERE adrelid = '%u'::pg_catalog.oid",
+								  tbinfo->dobj.catId.oid);
+			}
 
 			res = ExecuteSqlQuery(fout, q->data, PGRES_TUPLES_OK);
 
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index e5b3c1e..5a57db5 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1728,7 +1728,7 @@ describeOneTableDetails(const char *schemaname,
 	 */
 	printfPQExpBuffer(&buf, "SELECT a.attname,");
 	appendPQExpBufferStr(&buf, "\n  pg_catalog.format_type(a.atttypid, a.atttypmod),"
-						 "\n  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)"
+						 "\n  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, 0) for 128)"
 						 "\n   FROM pg_catalog.pg_attrdef d"
 						 "\n   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),"
 						 "\n  a.attnotnull, a.attnum,");
#2 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Jeff Janes (#1)
Re: Make deparsing of column defaults faster

On 6/4/18 20:55, Jeff Janes wrote:

The user docs say about column defaults: "The value is any variable-free
expression (subqueries and cross-references to other columns in the
current table are not allowed)"

And also say about pg_get_expr "If the expression might contain Vars,
specify the OID of the relation they refer to as the second parameter;
if no Vars are expected, zero is sufficient"

Since defaults can't contain Vars, this patch converts the second
parameter to zero in places where pg_get_expr is invoked on column
defaults.

My in-progress generated columns patch removes that assumption (since a
generated column generally refers to another column of the same table).

Doing this drops the time needed to run `pg_dump -s` on a
1600 column table with defaults on all columns by a factor of 40.  So
pg_upgrade users with such tables should see a big win (cc Justin).

That's impressive but also a bit disturbing. Where does this
performance change come from? Can we harness it in different ways?

I have not tested the pg_upgrade using source versions <8.4, because I
can't get anything that old to "make check" correctly, so I have nothing
to test on.

If you have trouble getting older PostgreSQL versions to run, try
building with -O0.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#3 Jeff Janes
jeff.janes@gmail.com
In reply to: Peter Eisentraut (#2)
Re: Make deparsing of column defaults faster

On Mon, Jun 4, 2018 at 10:00 PM, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:

On 6/4/18 20:55, Jeff Janes wrote:

The user docs say about column defaults: "The value is any variable-free
expression (subqueries and cross-references to other columns in the
current table are not allowed)"

And also say about pg_get_expr "If the expression might contain Vars,
specify the OID of the relation they refer to as the second parameter;
if no Vars are expected, zero is sufficient"

Since defaults can't contain Vars, this patch converts the second
parameter to zero in places where pg_get_expr is invoked on column
defaults.

My in-progress generated columns patch removes that assumption (since a
generated column generally refers to another column of the same table).

Will your patch put the generated columns data into pg_attrdef, or create a
new catalog to hold them? I guess it would make sense to re-use the
catalog since you can't have both a default value and be a generated column.

Doing this drops the time needed to run `pg_dump -s` on a
1600 column table with defaults on all columns by a factor of 40. So
pg_upgrade users with such tables should see a big win (cc Justin).

That's impressive but also a bit disturbing. Where does this
performance change come from? Can we harness it in different ways?

The most fundamental way would be to change make_colname_unique to use a
hash table, rather than an N^2 algorithm. It would probably be similar to
Tom's commit 8004953b5a2449c26c4e "Speed up ruleutils' name de-duplication
code" but he does warn in the commit message that his method would be
trickier to use for column aliases than table aliases. That would speed up
a lot more cases than just DEFAULTS (check constraints, maybe triggers),
but would be harder to do.

Even once we fix this, it still wouldn't make sense to create a deparse
context that can never be needed. Perhaps the call to deparse_context_for
could be made lazy, so that it is delayed until a Var is actually found in
the expression node tree, rather than being done preemptively. Then all
cases without Vars would be improved, not just cases where it could be
proved by static code analysis that Vars are not possible.
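A sketch of that lazy construction, again as freestanding C with invented names (LazyContext, and build_context_for standing in for deparse_context_for): the context is built only the first time the deparser actually encounters a Var, so Var-free expressions never pay for it.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for deparse_context_for(); in the real code this
 * is the expensive call we want to avoid for Var-free expressions. */
static void *
build_context_for(unsigned relid)
{
	static int	dummy;

	(void) relid;
	return &dummy;
}

typedef struct LazyContext
{
	unsigned	relid;			/* relation the context would describe */
	void	   *context;		/* NULL until the first Var forces a build */
	int			builds;			/* instrumentation for the sketch */
} LazyContext;

/* The deparser would call this only upon meeting a Var node. */
static void *
lazy_get_context(LazyContext *lc)
{
	if (lc->context == NULL)
	{
		lc->context = build_context_for(lc->relid);
		lc->builds++;
	}
	return lc->context;
}
```

If no Var ever appears, lazy_get_context is never called and no context is built; repeated Vars reuse the single built context.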

When setting up the deparse context for a single table, the column names
already need to be unique (I think). So another possibility is to pass a
flag down from set_simple_column_names to set_relation_column_names to
indicate no unique checking is needed.

Since pg_dump calls pg_get_expr once over and over again on the same table
consecutively, perhaps we could cache the column alias assignments in a
single-entry cache, so if it is called on the same table as last time it
just re-uses the aliases from last time. I am not planning on working on
that; I don't know where such a cache could be stored such that it is freed
and invalidated at the appropriate times.

Cheers,

Jeff

#4 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Jeff Janes (#3)
1 attachment(s)
Re: Make deparsing of column defaults faster

On 29.06.18 05:15, Jeff Janes wrote:

Since pg_dump calls pg_get_expr once over and over again on the same
table consecutively, perhaps we could cache the column alias assignments
in a single-entry cache, so if it is called on the same table as last
time it just re-uses the aliases from last time.  I am not planning on
working on that; I don't know where such a cache could be stored such
that it is freed and invalidated at the appropriate times.

I looked into that. deparse_context_for() is actually not that
expensive on its own, well below one second, but it gets somewhat
expensive when you call it 1600 times for one table. So to address that
case, we can cache the deparse context between calls in the fn_extra
field of pg_get_expr. The attached patch does that. This makes the
pg_dump -s times pretty much constant even with 1600 columns with
defaults. psql \d should benefit similarly. I haven't seen any other
cases where we'd expect hundreds of related objects to deparse. (Do
people have hundreds of policies per table?)

(I suppose you could create scenarios with very many such tables to make
the overhead visible again.)

How realistic is this use case? Is it worth it?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Cache-pg_get_expr-deparse-context-between-calls.patch (text/plain)
From 15621a2c8bbff2f72c2df8f7afd24910a51289c9 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter_e@gmx.net>
Date: Thu, 5 Jul 2018 16:27:44 +0200
Subject: [PATCH] Cache pg_get_expr deparse context between calls

---
 src/backend/utils/adt/ruleutils.c | 49 ++++++++++++++++++++++++++-----
 1 file changed, 41 insertions(+), 8 deletions(-)

diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 065238b0fe..aa73bb35d6 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -327,7 +327,7 @@ static char *pg_get_partkeydef_worker(Oid relid, int prettyFlags,
 						 bool attrsOnly, bool missing_ok);
 static char *pg_get_constraintdef_worker(Oid constraintId, bool fullCommand,
 							int prettyFlags, bool missing_ok);
-static text *pg_get_expr_worker(text *expr, Oid relid, const char *relname,
+static text *pg_get_expr_worker(FunctionCallInfo fcinfo, text *expr, Oid relid, const char *relname,
 				   int prettyFlags);
 static int print_function_arguments(StringInfo buf, HeapTuple proctup,
 						 bool print_table_args, bool print_defaults);
@@ -2292,7 +2292,7 @@ pg_get_expr(PG_FUNCTION_ARGS)
 	else
 		relname = NULL;
 
-	PG_RETURN_TEXT_P(pg_get_expr_worker(expr, relid, relname, prettyFlags));
+	PG_RETURN_TEXT_P(pg_get_expr_worker(fcinfo, expr, relid, relname, prettyFlags));
 }
 
 Datum
@@ -2317,14 +2317,20 @@ pg_get_expr_ext(PG_FUNCTION_ARGS)
 	else
 		relname = NULL;
 
-	PG_RETURN_TEXT_P(pg_get_expr_worker(expr, relid, relname, prettyFlags));
+	PG_RETURN_TEXT_P(pg_get_expr_worker(fcinfo, expr, relid, relname, prettyFlags));
 }
 
+struct cached_context
+{
+	Oid		relid;
+	List   *context;
+};
+
 static text *
-pg_get_expr_worker(text *expr, Oid relid, const char *relname, int prettyFlags)
+pg_get_expr_worker(FunctionCallInfo fcinfo, text *expr, Oid relid, const char *relname, int prettyFlags)
 {
 	Node	   *node;
-	List	   *context;
+	List	   *context = NIL;
 	char	   *exprstr;
 	char	   *str;
 
@@ -2338,9 +2344,36 @@ pg_get_expr_worker(text *expr, Oid relid, const char *relname, int prettyFlags)
 
 	/* Prepare deparse context if needed */
 	if (OidIsValid(relid))
-		context = deparse_context_for(relname, relid);
-	else
-		context = NIL;
+	{
+		struct cached_context *cc;
+
+		if (fcinfo->flinfo && fcinfo->flinfo->fn_extra)
+		{
+			cc = fcinfo->flinfo->fn_extra;
+			if (cc->relid == relid)
+				context = cc->context;
+		}
+
+		if (!context)
+		{
+			MemoryContext oldcontext = NULL;
+			struct cached_context *cc;
+
+			if (fcinfo->flinfo)
+				oldcontext = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
+
+			context = deparse_context_for(relname, relid);
+
+			if (fcinfo->flinfo)
+			{
+				cc = palloc(sizeof(*cc));
+				cc->relid = relid;
+				cc->context = context;
+				fcinfo->flinfo->fn_extra = cc;
+				MemoryContextSwitchTo(oldcontext);
+			}
+		}
+	}
 
 	/* Deparse */
 	str = deparse_expression_pretty(node, context, false, false,

base-commit: f61988d160c1af8c1ed495e5c547726e88a45036
-- 
2.18.0

#5 Justin Pryzby
pryzby@telsasoft.com
In reply to: Peter Eisentraut (#4)
Re: Make deparsing of column defaults faster

On Mon, Jun 04, 2018 at 10:00:53PM -0400, Peter Eisentraut wrote:

On 6/4/18 20:55, Jeff Janes wrote:

Since defaults can't contain Vars, this patch converts the second
parameter to zero in places where pg_get_expr is invoked on column
defaults.

My in-progress generated columns patch removes that assumption (since a
generated column generally refers to another column of the same table).

On Thu, Jul 05, 2018 at 04:45:07PM +0200, Peter Eisentraut wrote:

On 29.06.18 05:15, Jeff Janes wrote:

Since pg_dump calls pg_get_expr once over and over again on the same
table consecutively, perhaps we could cache the column alias assignments
in a single-entry cache, so if it is called on the same table as last
time it just re-uses the aliases from last time. I am not planning on

I looked into that. deparse_context_for() is actually not that
expensive on its own, well below one second, but it gets somewhat
expensive when you call it 1600 times for one table. So to address that
case, we can cache the deparse context between calls in the fn_extra
field of pg_get_expr. The attached patch does that. This makes the
pg_dump -s times pretty much constant even with 1600 columns with
defaults.

I checked on one customer running PG10.4, for which pg_dump takes 8 minutes to
pg_dump -s.

I imported existing schema to PG12dev (which itself took 25min) and compared:
patched: 2m33.616s
unpatched: 7m19.578s

Note that I've reduced the number of child tables in this DB recently (by
repartitioning tables from daily to monthly granularity), thereby reducing the
number of columns of the largest tables by a factor of 30, and reducing the
size of pg_dump -s to 51MB from 120MB (6 months ago). I expect this patch
would've saved even more before the cleanup.

How realistic is this use case? Is it worth it?

Note that this affects pg_upgrade, which is how this issue originally came up [0].
(But I believe pg_upgrade likes to call pg_dump from the old server version, so
pg_upgrade to v11 couldn't benefit unless this was included in PG10.5).

[pryzbyj@database postgresql]$ grep -c 'SET DEFAULT' /srv/cdrperfbackup/ts/2018-07-04/pg_dump-section=pre-data
183915

Justin

[0]: /messages/by-id/CAMkU=1x-e+maqefhM1yMeSiJ8J9Z+SJHgW7c9bqo3E3JMG4iJA@mail.gmail.com

#6 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Justin Pryzby (#5)
Re: Make deparsing of column defaults faster

On 05.07.18 18:58, Justin Pryzby wrote:

I checked on one customer running PG10.4, for which pg_dump takes 8 minutes to
pg_dump -s.

I imported existing schema to PG12dev (which itself took 25min) and compared:
patched: 2m33.616s
unpatched: 7m19.578s

Which patch is that, Jeff's or mine?

Note that this affects pg_upgrade, which is how this issue originally came up [0].
(But I believe pg_upgrade likes to call pg_dump from the old server version, so
pg_upgrade to v11 couldn't benefit unless this was included in PG10.5).

pg_upgrade calls the new pg_dump.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#7 Jeff Janes
jeff.janes@gmail.com
In reply to: Peter Eisentraut (#4)
Re: Make deparsing of column defaults faster

On Thu, Jul 5, 2018 at 10:45 AM, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:

On 29.06.18 05:15, Jeff Janes wrote:

Since pg_dump calls pg_get_expr once over and over again on the same
table consecutively, perhaps we could cache the column alias assignments
in a single-entry cache, so if it is called on the same table as last
time it just re-uses the aliases from last time. I am not planning on
working on that; I don't know where such a cache could be stored such
that it is freed and invalidated at the appropriate times.

I looked into that. deparse_context_for() is actually not that
expensive on its own, well below one second, but it gets somewhat
expensive when you call it 1600 times for one table. So to address that
case, we can cache the deparse context between calls in the fn_extra
field of pg_get_expr. The attached patch does that. This makes the
pg_dump -s times pretty much constant even with 1600 columns with
defaults. psql \d should benefit similarly. I haven't seen any other
cases where we'd expect hundreds of related objects to deparse. (Do
people have hundreds of policies per table?)

One case that your patch doesn't improve (neither does my posted one) is
check constraints. To fix that, pg_get_constraintdef_worker would also
need to grow a cache. I don't know how often people put check
constraints on most of the columns of a table. Some people like their NOT
NULL constraints to be named, not implicit.

But from the bigger picture of making pg_upgrade faster, a major issue is
that while pg_dump -s gets faster for the column default case, the restore
of that dump is still slow (again, my posted patch also doesn't fix that).
In that case it is deparse_context_for called from StoreAttrDefault which
is slow.

(I suppose you could create scenarios with very many such tables to make

the overhead visible again.)

With partitioning, I think anything which happens once is likely to happen
many times. But I don't see any extra improvement from applying my patch
over yours (the bottleneck is shifted off to pg_dump itself), and yours is
definitely cleaner and slightly more general. I think that yours would be
worthwhile, even if not the last word on the subject.

Cheers,

Jeff

#8 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Jeff Janes (#7)
Re: Make deparsing of column defaults faster

On 07/07/2018 20:07, Jeff Janes wrote:

One case that your patch doesn't improve (neither does my posted one) is
check constraints.  To fix that, pg_get_constraintdef_worker would also
need to grow a cache.  I don't know how often people put check
constraints on most of the columns of a table.  Some people like their
NOT NULL constraints to be named, not implicit.

But from the bigger picture of making pg_upgrade faster, a major issue
is that while pg_dump -s gets faster for the column default case, the
restore of that dump is still slow (again, my posted patch also doesn't
fix that).  In that case it is deparse_context_for called from
StoreAttrDefault which is slow.

Any thoughts on how to proceed here? It seems there is more work to do
to cover all the issues with dumping and restoring tables with many
columns. Since the original report was in the context of pg_upgrade, we
should surely address at least the pg_restore slowness.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#9 Jeff Janes
jeff.janes@gmail.com
In reply to: Peter Eisentraut (#8)
Re: Make deparsing of column defaults faster

On Mon, Jul 30, 2018 at 7:03 AM, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:

On 07/07/2018 20:07, Jeff Janes wrote:

One case that your patch doesn't improve (neither does my posted one) is
check constraints. To fix that, pg_get_constraintdef_worker would also
need to grow a cache. I don't know how often people put check
constraints on most of the columns of a table. Some people like their
NOT NULL constraints to be named, not implicit.

But from the bigger picture of making pg_upgrade faster, a major issue
is that while pg_dump -s gets faster for the column default case, the
restore of that dump is still slow (again, my posted patch also doesn't
fix that). In that case it is deparse_context_for called from
StoreAttrDefault which is slow.

Any thoughts on how to proceed here? It seems there is more work to do
to cover all the issues with dumping and restoring tables with many
columns. Since the original report was in the context of pg_upgrade, we
should surely address at least the pg_restore slowness.

I'll work on solving the problem using a hash table at the lowest level
(making column names unique), for a future commit fest. That should drop
it from N^3 to N^2, which since N can't go above 1600 should be good enough.

So we can set this to rejected, as that will be an entirely different
approach.

Your caching patch might be worthwhile on its own, though.

Cheers,

Jeff

#10 Peter Eisentraut
peter.eisentraut@2ndquadrant.com
In reply to: Jeff Janes (#9)
Re: Make deparsing of column defaults faster

On 30/07/2018 13:51, Jeff Janes wrote:

Any thoughts on how to proceed here?  It seems there is more work to do
to cover all the issues with dumping and restoring tables with many
columns.  Since the original report was in the context of pg_upgrade, we
should surely address at least the pg_restore slowness.

I'll work on solving the problem using a hash table at the lowest
level (making column names unique), for a future commit fest.  That
should drop it from N^3 to N^2, which since N can't go above 1600 should
be good enough.

So we can set this to rejected, as that will be an entirely different
approach.  

Your caching patch might be worthwhile on its own, though.

I'm going to set this thread as returned with feedback until we have a
more complete solution.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services