Convert MAX_SAOP_ARRAY_SIZE to new guc

Started by James Coleman about 7 years ago · 11 messages
#1James Coleman
jtc331@gmail.com
1 attachment(s)

Summary:
Create new guc array_optimization_size_limit and use it to replace
MAX_SAOP_ARRAY_SIZE in predtest.c.

Among other things this allows tuning when `col IN (1,2,3)` style
expressions can be matched against partial indexes.

It also fixes the comment:
"XXX is it worth exposing this as a GUC knob?"

Status:
The attached patch applies cleanly to master, builds without error,
and passes tests locally.

Thanks,
James Coleman

Attachments:

array_optimization_size_limit_guc-v1.patch (application/octet-stream)
commit bbf3d7570b638a630b3d79a279fa4a2d572ebdf3
Author: jcoleman <jtc331@gmail.com>
Date:   Fri Nov 9 19:39:49 2018 +0000

    Convert MAX_SAOP_ARRAY_SIZE into guc

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b8e32d765b..24a3353501 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4379,6 +4379,23 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 
      <variablelist>
 
+     <varlistentry id="guc-array-optimization-size-limit" xreflabel="array_optimization_size_limit">
+      <term><varname>array_optimization_size_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>array_optimization_size_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Sets the array size limit beyond which predicate optimization is not used.
+        The optimizer evaluates scalar array expressions to determine if they can
+        be treated as AND or OR clauses. This optimization proving is only performed
+        if the array contains at most this many items.
+        The default is <literal>100</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="guc-default-statistics-target" xreflabel="default_statistics_target">
       <term><varname>default_statistics_target</varname> (<type>integer</type>)
       <indexterm>
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index 446207de30..cba6608ce2 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -33,9 +33,8 @@
  * likely to require O(N^2) time, and more often than not fail anyway.
  * So we set an arbitrary limit on the number of array elements that
  * we will allow to be treated as an AND or OR clause.
- * XXX is it worth exposing this as a GUC knob?
  */
-#define MAX_SAOP_ARRAY_SIZE		100
+int array_optimization_size_limit = ARRAY_OPTIMIZATION_SIZE_LIMIT;
 
 /*
  * To avoid redundant coding in predicate_implied_by_recurse and
@@ -812,11 +811,11 @@ predicate_refuted_by_recurse(Node *clause, Node *predicate,
  * If the expression is classified as AND- or OR-type, then *info is filled
  * in with the functions needed to iterate over its components.
  *
- * This function also implements enforcement of MAX_SAOP_ARRAY_SIZE: if a
+ * This function also implements enforcement of array_optimization_size_limit: if a
  * ScalarArrayOpExpr's array has too many elements, we just classify it as an
  * atom.  (This will result in its being passed as-is to the simple_clause
  * functions, which will fail to prove anything about it.)	Note that we
- * cannot just stop after considering MAX_SAOP_ARRAY_SIZE elements; in general
+ * cannot just stop after considering array_optimization_size_limit elements; in general
  * that would result in wrong proofs, rather than failing to prove anything.
  */
 static PredClass
@@ -874,7 +873,7 @@ predicate_classify(Node *clause, PredIterInfo info)
 
 			arrayval = DatumGetArrayTypeP(((Const *) arraynode)->constvalue);
 			nelems = ArrayGetNItems(ARR_NDIM(arrayval), ARR_DIMS(arrayval));
-			if (nelems <= MAX_SAOP_ARRAY_SIZE)
+			if (nelems <= array_optimization_size_limit)
 			{
 				info->startup_fn = arrayconst_startup_fn;
 				info->next_fn = arrayconst_next_fn;
@@ -884,7 +883,7 @@ predicate_classify(Node *clause, PredIterInfo info)
 		}
 		else if (arraynode && IsA(arraynode, ArrayExpr) &&
 				 !((ArrayExpr *) arraynode)->multidims &&
-				 list_length(((ArrayExpr *) arraynode)->elements) <= MAX_SAOP_ARRAY_SIZE)
+				 list_length(((ArrayExpr *) arraynode)->elements) <= array_optimization_size_limit)
 		{
 			info->startup_fn = arrayexpr_startup_fn;
 			info->next_fn = arrayexpr_next_fn;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index ce54828fbb..f1bc483b2e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -52,6 +52,7 @@
 #include "optimizer/geqo.h"
 #include "optimizer/paths.h"
 #include "optimizer/planmain.h"
+#include "optimizer/predtest.h"
 #include "parser/parse_expr.h"
 #include "parser/parse_type.h"
 #include "parser/parser.h"
@@ -3064,6 +3065,20 @@ static struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"array_optimization_size_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+			gettext_noop("Sets the array size limit beyond which predicate "
+						 "optimization is not used."),
+			gettext_noop("The optimizer evaluates scalar array expressions "
+						 "to determine if they can be treated as AND or OR clauses. "
+						 "This optimization proving is only performed if the array "
+						 "contains at most this many items.")
+		},
+		&array_optimization_size_limit,
+		ARRAY_OPTIMIZATION_SIZE_LIMIT, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, 0, 0, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 4e61bc6521..58d911950e 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -344,6 +344,7 @@
 
 # - Other Planner Options -
 
+#array_optimization_size_limit = 100
 #default_statistics_target = 100	# range 1-10000
 #constraint_exclusion = partition	# on, off, or partition
 #cursor_tuple_fraction = 0.1		# range 0.0-1.0
diff --git a/src/include/optimizer/predtest.h b/src/include/optimizer/predtest.h
index 69d87ea5c5..e8f6b91ef2 100644
--- a/src/include/optimizer/predtest.h
+++ b/src/include/optimizer/predtest.h
@@ -16,6 +16,8 @@
 
 #include "nodes/primnodes.h"
 
+#define ARRAY_OPTIMIZATION_SIZE_LIMIT		100
+extern PGDLLIMPORT int array_optimization_size_limit;
 
 extern bool predicate_implied_by(List *predicate_list, List *clause_list,
 					 bool weak);
diff --git a/src/test/modules/test_predtest/expected/test_predtest.out b/src/test/modules/test_predtest/expected/test_predtest.out
index 5574e03204..bca47107a9 100644
--- a/src/test/modules/test_predtest/expected/test_predtest.out
+++ b/src/test/modules/test_predtest/expected/test_predtest.out
@@ -767,6 +767,22 @@ w_i_holds         | t
 s_r_holds         | f
 w_r_holds         | f
 
+set array_optimization_size_limit to 1;
+select * from test_predtest($$
+select x <= 3, x in (1,3)
+from integers
+$$);
+-[ RECORD 1 ]-----+--
+strong_implied_by | f
+weak_implied_by   | f
+strong_refuted_by | f
+weak_refuted_by   | f
+s_i_holds         | t
+w_i_holds         | t
+s_r_holds         | f
+w_r_holds         | f
+
+set array_optimization_size_limit to default;
 select * from test_predtest($$
 select x <= 5, x in (1,3,5,7)
 from integers
diff --git a/src/test/modules/test_predtest/sql/test_predtest.sql b/src/test/modules/test_predtest/sql/test_predtest.sql
index 2734735843..39c6d383ad 100644
--- a/src/test/modules/test_predtest/sql/test_predtest.sql
+++ b/src/test/modules/test_predtest/sql/test_predtest.sql
@@ -301,6 +301,13 @@ select x <= 5, x in (1,3,5)
 from integers
 $$);
 
+set array_optimization_size_limit to 1;
+select * from test_predtest($$
+select x <= 3, x in (1,3)
+from integers
+$$);
+set array_optimization_size_limit to default;
+
 select * from test_predtest($$
 select x <= 5, x in (1,3,5,7)
 from integers
#2James Coleman
jtc331@gmail.com
In reply to: James Coleman (#1)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

Note: the original email from David went to my spam folder, and it also
didn't show up on the archives (I assume caught by a spam filter there
also?)

Thanks for taking this on!

As far as you can tell, is the default correct at 100?

I'm not sure what a good way of measuring it would be (that is, what all
the possible cases are). I did try very simple SELECT * FROM t WHERE i IN
(...) style queries with increasing size and was able to see increased
planning time, but nothing staggering (going from 1000 to 2000 increased
from ~1.5ms to 2.5ms planning time, in an admittedly very unscientific
test).

I think it's reasonable to leave the default at 100 for now. You could make
an argument for increasing it since the limit currently affects whether
scalar array ops can use partial indexes with "foo is not null" conditions,
but I think that's better solved more holistically, as I've attempted to do
in
/messages/by-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com

What are some issues that might arise if it's set too low/too high?

Too low would result in queries being planned unsatisfactorily (i.e.,
scalar array ops switching from partial index scans to seq scans), and
setting it too high could significantly increase planning time.

#3Paul Ramsey
pramsey@cleverelephant.ca
In reply to: James Coleman (#1)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

On Fri, Nov 9, 2018 at 1:32 PM James Coleman <jtc331@gmail.com> wrote:

Summary:
Create new guc array_optimization_size_limit and use it to replace
MAX_SAOP_ARRAY_SIZE in predtest.c.

Status:
The attached patch applies cleanly to master, builds without error,
and passes tests locally.

Confirmed that it applies and builds cleanly and regresses without error
in my environment (osx/clang).

My main comment is that the description of the purpose of the GUC doesn't
help me understand when or why I might want to alter it from the default
value. If nobody is going to alter it, because nobody understands it, it
might as well remain a compile-time constant.

+       <para>
+        Sets the array size limit beyond which predicate optimization is
not used.
+        The optimizer evaluates scalar array expressions to determine if
they can
+        be treated as AND or OR clauses. This optimization proving is only
performed
+        if the array contains at most this many items.
+        The default is <literal>100</literal>.
+       </para>

If I lower the value, what problem or use case do I solve? If I increase
it, what do I solve? What gets faster or slower at different settings of
the value? The description doesn't mention using the "IN" SQL clause which
is the use case the parameter targets. I'd suggest alternate wording, but
I'm actually still not 100% sure how a larger value would change the
behaviour of "IN" in the presence of large numbers of values?

P.

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Paul Ramsey (#3)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

Paul Ramsey <pramsey@cleverelephant.ca> writes:

On Fri, Nov 9, 2018 at 1:32 PM James Coleman <jtc331@gmail.com> wrote:

Create new guc array_optimization_size_limit and use it to replace
MAX_SAOP_ARRAY_SIZE in predtest.c.

My main comment is that the description of the purpose of the GUC doesn't
help me understand when or why I might want to alter it from the default
value. If nobody is going to alter it, because nobody understands it, it
might as well remain a compile-time constant.

Yeah, that's sort of my reaction as well. I also feel like this is a
mighty special case to expose as a separate GUC. There are other magic
effort-limiting constants elsewhere in the planner --- we just added a
new one in e3f005d97, for instance --- and I can't get very excited about
exposing and trying to document them individually. We also have a lot
of existing exposed knobs like join_collapse_limit and the various geqo
parameters, which basically nobody knows how to use, a precedent that
isn't encouraging for adding more.

There have been occasional discussions of inventing a master "planner
effort" control knob, with values say 1..10 [1], and allowing that one
thing to control all these decisions, as well as other things we might do
in the future that would cause increased planning time that might or might
not get paid back. I'd rather see somebody put effort into designing a
coherent feature like that than figure out how to document finer-grained
knobs.

regards, tom lane

[1]: ... but my inner Spinal Tap fan wants it to go to 11.

#5James Coleman
jtc331@gmail.com
In reply to: Tom Lane (#4)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

My main comment is that the description of the purpose of the GUC doesn't
help me understand when or why I might want to alter it from the default
value. If nobody is going to alter it, because nobody understands it, it
might as well remain a compile-time constant.

Yeah, that's sort of my reaction as well. I also feel like this is a
mighty special case to expose as a separate GUC. There are other magic
effort-limiting constants elsewhere in the planner --- we just added a
new one in e3f005d97, for instance --- and I can't get very excited about
exposing and trying to document them individually. We also have a lot
of existing exposed knobs like join_collapse_limit and the various geqo
parameters, which basically nobody knows how to use, a precedent that
isn't encouraging for adding more.

I'd be happy to yank this in favor of my holistic solution to this
problem I posted recently on the mailing list [1].

Assuming we go that route, I'd propose we still yank the existing todo
comment about turning it into a GUC.

[1]: /messages/by-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com

#6Simon Riggs
simon@2ndquadrant.com
In reply to: James Coleman (#5)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

On Fri, 16 Nov 2018 at 14:00, James Coleman <jtc331@gmail.com> wrote:

My main comment is that the description of the purpose of the GUC doesn't
help me understand when or why I might want to alter it from the default
value. If nobody is going to alter it, because nobody understands it, it
might as well remain a compile-time constant.

Yeah, that's sort of my reaction as well. I also feel like this is a
mighty special case to expose as a separate GUC. There are other magic
effort-limiting constants elsewhere in the planner --- we just added a
new one in e3f005d97, for instance --- and I can't get very excited about
exposing and trying to document them individually. We also have a lot
of existing exposed knobs like join_collapse_limit and the various geqo
parameters, which basically nobody knows how to use, a precedent that
isn't encouraging for adding more.

I'd be happy to yank this in favor of my holistic solution to this
problem I posted recently on the mailing list [1].

[1]: /messages/by-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com

Not precisely sure what you mean - are you saying that we can just have an
overall test for NOT NULL, which thereby avoids the need to expand the
array and therefore dispenses with the GUC completely?

Having indexes defined using WHERE NOT NULL is a very important use case.

Assuming we go that route, I'd propose we still yank the existing todo
comment about turning it into a GUC.

Agreed

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#7Tom Lane
tgl@sss.pgh.pa.us
In reply to: Simon Riggs (#6)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

Simon Riggs <simon@2ndquadrant.com> writes:

On Fri, 16 Nov 2018 at 14:00, James Coleman <jtc331@gmail.com> wrote:

Yeah, that's sort of my reaction as well. I also feel like this is a
mighty special case to expose as a separate GUC. There are other magic
effort-limiting constants elsewhere in the planner --- we just added a
new one in e3f005d97, for instance --- and I can't get very excited about
exposing and trying to document them individually. We also have a lot
of existing exposed knobs like join_collapse_limit and the various geqo
parameters, which basically nobody knows how to use, a precedent that
isn't encouraging for adding more.

I'd be happy to yank this in favor of my holistic solution to this
problem I posted recently on the mailing list [1].
[1] /messages/by-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com

Not precisely sure what you mean - are you saying that we can just have an
overall test for NOT NULL, which thereby avoids the need to expand the
array and therefore dispenses with the GUC completely?

No, he's saying that other thing solves his particular problem.

We certainly have seen other cases where people wished they could adjust
MAX_SAOP_ARRAY_SIZE. I'm just not excited about exposing a GUC that does
exactly that one thing. I'd rather have some more-generic knob that's
not so tightly tied to implementation details.

regards, tom lane

#8James Coleman
jtc331@gmail.com
In reply to: Simon Riggs (#6)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

I'd be happy to yank this in favor of my holistic solution to this
problem I posted recently on the mailing list [1].

[1] /messages/by-id/CAAaqYe8yKSvzbyu8w-dThRs9aTFMwrFxn_BkTYeXgjqe3CbNjg@mail.gmail.com

Not precisely sure what you mean - are you saying that we can just have an overall test for NOT NULL, which thereby avoids the need to expand the array and therefore dispenses with the GUC completely?

Having indexes defined using WHERE NOT NULL is a very important use case.

I don't think we can avoid expanding the array for other cases (for
example, being able to infer "foo < 5" from "foo IN (1,2,3,4)"). If
we wanted to keep that inference without expanding the array we'd have
to (at minimum, I think) duplicate a lot of the existing inference
logic, but I haven't investigated it much.

So keeping the GUC could allow someone to tune how large an array can
be and still guarantee inferences like "foo < 5". But I'm not sure
that is as valuable; at least I haven't run into cases where I've
noticed a need for it.

My patch only addresses the IS NOT NULL inference precisely for the
reason you state: for many queries (unless you tack an explicit
"foo IS NOT NULL" onto the query) the planner decides it can't use
WHERE NOT NULL indexes, because it can't currently infer the
correctness of that for large arrays.

#9Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#4)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

On Thu, Nov 15, 2018 at 5:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

There have been occasional discussions of inventing a master "planner
effort" control knob, with values say 1..10 [1], and allowing that one
thing to control all these decisions, as well as other things we might do
in the future that would cause increased planning time that might or might
not get paid back. I'd rather see somebody put effort into designing a
coherent feature like that than figure out how to document finer-grained
knobs.

FWIW, I find it hard to believe this will make users very happy. I
think it'll just lead to people complaining that they can't get
planner optimization A without paying the cost of planner optimization
B. The stuff that people have proposed grouping under a planner
effort knob is all pretty much corner case behavior, so a lot of
people won't get any benefit at all from turning up the knob, and
among those that do, there will probably be one specific behavior that
they want to enable, so the optimal value will be just high enough to
enable that behavior, and then they'll wonder why they can't just
enable that one thing.

I do think it would make users happy if you could make it a nice,
smooth curve: turning up the planner strength increases planning time
at a predictable rate and decreases execution time at a predictable
rate. But I really doubt it's possible to create something that works
that way.

[1] ... but my inner Spinal Tap fan wants it to go to 11.

+1 for that, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#10Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#9)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

Robert Haas <robertmhaas@gmail.com> writes:

On Thu, Nov 15, 2018 at 5:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

There have been occasional discussions of inventing a master "planner
effort" control knob, with values say 1..10 [1], and allowing that one
thing to control all these decisions, as well as other things we might do
in the future that would cause increased planning time that might or might
not get paid back. I'd rather see somebody put effort into designing a
coherent feature like that than figure out how to document finer-grained
knobs.

FWIW, I find it hard to believe this will make users very happy. I
think it'll just lead to people complaining that they can't get
planner optimization A without paying the cost of planner optimization
B.

I don't think so, because right now they (a) can't get either
optimization, and/or (b) don't know what either one does or
how to invoke it.

Also (c) exposing such knobs creates backwards-compatibility problems for
us any time we want to change the associated behavior, which is hardly
an unlikely wish considering that mostly these are kluges by definition.
(Which contributes to the documentation problem Paul already noted.)

A planner-effort knob would be really easy to understand, I think,
and we'd not be tied to any particular details about what it does.

The stuff that people have proposed grouping under a planner
effort knob is all pretty much corner case behavior,

One of the first things I'd replace with such a knob is
join_collapse_limit/from_collapse_limit, which is by no stretch
of the imagination a corner case.

I do think it would make users happy if you could make it a nice,
smooth curve: turning up the planner strength increases planning time
at a predictable rate and decreases execution time at a predictable
rate. But I really doubt it's possible to create something that works
that way.

Yeah, it's unlikely that it'd be linear on any particular query.
There would be jumps in the planner runtime whenever relevant
thresholds were exceeded.

[1] ... but my inner Spinal Tap fan wants it to go to 11.

+1 for that, though.

It's tempting to imagine that 10 means "highest reasonable effort
limits" and then 11 disengages all limits. The practical use of
that would be if you wanted to see whether the planner *could*
produce the plan you wanted and was just not trying hard enough.

regards, tom lane

#11Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#10)
Re: Convert MAX_SAOP_ARRAY_SIZE to new guc

On Fri, Nov 16, 2018 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

I don't think so, because right now they (a) can't get either
optimization, and/or (b) don't know what either one does or
how to invoke it.

Sure. But as soon as they know that, they're just going to try to
figure out how to get the thing they want without the stuff they don't
want.

A planner-effort knob would be really easy to understand, I think,
and we'd not be tied to any particular details about what it does.

That's just wishful thinking. People will want to know what it does,
they'll want that to be documented, and they complain if it changes
from release to release.

I mean, you've often taken the position that people will notice and/or
care deeply if our *C* interfaces change from release to release, and
even moreso in a minor release. I think you overstate that danger,
but it must be admitted that the danger is not zero. GUCs, unlike C
functions, are unarguably part of the exposed interface, and the
danger there is considerably more, at least IMHO.

One of the first things I'd replace with such a knob is
join_collapse_limit/from_collapse_limit, which is by no stretch
of the imagination a corner case.

True. So then you'll have people who can't get sufficiently-high
collapse limits without enabling a bunch of other stuff they don't
care about, or on the other hand have to raise the collapse limits
higher than makes sense for them to get the other optimizations that
they want. They also won't be able to use the hack where you set
join_collapse_limit=1 to force a join ordering any more. And the
mapping of the 1..infinity collapse limit space onto the 1..10 planner
effort space is going to be basically totally arbitrary. There is no
hope at all that you're going to pick values that everyone likes.

I think it might be reasonable to set various individual GUCs to
values that mean "use the autoconfigure default" and then provide a
planner-strength GUC that varies that default. But I believe that
depriving people of the ability to control the settings individually
is bound to produce complaints, both for things where we already
expose them (like the collapse limits) and for things where we don't
(like $SUBJECT).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company