Re: Status of plperl inter-sp calling
On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote:
On Dec 30, 2009, at 2:54 PM, Tim Bunce wrote:
That much works currently. Behind the scenes, when a stored procedure is
loaded into plperl the code ref for the perl sub is stored in a cache.
Effectively just
$cache{$name}[$nargs] = $coderef;
An SP::AUTOLOAD sub intercepts any SP::* call and effectively does
lookup_sp($name, \@_)->(@_);
For SPs that are already loaded lookup_sp returns $cache{$name}[$nargs]
so the overhead of the call is very small.
Definite benefit, there. How does it handle the difference between
IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions?
It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be
(documented as being) ignored in this context.
Supporting STRICT probably wouldn't be too hard.
And what does it do if the function called is not actually a Perl function?
(See "fallback-to-SQL" two paragraphs below)
For SPs that are not cached, lookup_sp returns a code ref of a closure
that will invoke $name with the args in @_ via
spi_exec_query("select * from $name($encoded_args)");
The fallback-to-SQL behaviour neatly handles non-cached SPs (forcing
them to be loaded and thus cached), and inter-language calling (both
plperl<->plperl and other PLs).
Is there a way for such a function to be cached? If not, I'm not sure
where cached functions come from.
The act of calling the function via spi_exec_query will load it, and
thereby cache it in the perl interpreter as a side effect (if the
language is the same: e.g., plperlu->plperlu).
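To make that mechanism concrete, here's a minimal sketch of the idea (the names
follow the description above; the real patch differs in detail, and it assumes
plperl's spi_exec_query(), quote_nullable() and encode_array_constructor() are
reachable as main:: functions):

    package SP;
    use strict;
    use warnings;

    # Populated by plperl as functions are loaded:
    #   $cache{$name}[$nargs] = $coderef;
    our %cache;

    our $AUTOLOAD;

    sub AUTOLOAD {
        (my $name = $AUTOLOAD) =~ s/.*:://;
        return if $name eq 'DESTROY';
        return lookup_sp($name, \@_)->(@_);
    }

    sub lookup_sp {
        my ($name, $args) = @_;
        my $nargs = @$args;

        # Fast path: the plperl sub is already loaded and cached.
        return $cache{$name}[$nargs] if $cache{$name}[$nargs];

        # Fallback: call it via SQL; if it's plperl this also loads
        # and caches it as a side effect.
        return sub {
            my $encoded_args = join ', ', map {
                ref $_ eq 'ARRAY' ? ::encode_array_constructor($_)
                                  : ::quote_nullable($_)
            } @_;
            return ::spi_exec_query("select * from $name($encoded_args)");
        };
    }

    1;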
Limitations:
* It's not meant to handle type polymorphism, only the number of args.
Well, spi_exec_query() handles the type polymorphism. So might it be
possible to call SP::function() and have it not use a cached query?
That way, one gets the benefit of polymorphism. Maybe there's a SP
package that does caching, and an SPI package that does not? (Better
named, though.)
The underlying issue here is perl's lack of strong typing.
See http://search.cpan.org/~mlehmann/JSON-XS-2.26/XS.pm#PERL_-%3E_JSON
especially the "simple scalars" section and "used as string" example.
As far as I can see there's no way for perl to support the kind of
rich type polymorphism that PostgreSQL offers via the kind of "make it
look like a perl function call" interface that we're discussing.
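A quick way to see the problem, using the "used as string" example from the
JSON::XS docs linked above (assuming JSON::XS is installed):

    use strict;
    use warnings;
    use JSON::XS;

    my $value = 5;
    print encode_json([$value]), "\n";   # [5]   - encoded as a number

    print $value, "\n";                  # merely *using* the scalar as a string...
    print encode_json([$value]), "\n";   # ["5"] - ...and now it encodes as a string

So the caller's intent about types simply isn't recorded anywhere that a
SP::foo(...) call could recover it.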
[I can envisage a more complex interface where you ask for a code ref to
a sub with a specific type signature and then use that code ref to make the
call. Ah, I've just had a better idea but it needs a little more thought.
I'll send another email later.]
* When invoked via SQL, because the SP isn't cached, all non-ref args
are expressed as strings via quote_nullable(). Any array refs
are encoded as ARRAY[...] via encode_array_constructor().
Hrm. Why not use spi_prepare() and let spi_exec_prepared() handle the quoting?
No reason, assuming spi_exec_prepared handles array refs properly
[I was just doing "simplest thing that could possibly work" at this stage]
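For illustration, the fallback could be built on a memoized prepared plan along
these lines - a sketch only, with a made-up helper name, and note it needs the
argument type names up front, which is exactly what the SP::* interface can't know:

    # Hypothetical: assumes plperl's spi_prepare()/spi_exec_prepared()
    # are visible where this runs (e.g. inside a plperl function body).
    my %plan;
    my $call_via_prepared = sub {
        my ($name, $argtypes, @args) = @_;   # e.g. ('my_func', ['int4','text'], 42, 'x')

        my $key = join '/', $name, @$argtypes;
        $plan{$key} ||= spi_prepare(
            "select * from $name(" . join(', ', map { '$' . $_ } 1 .. @$argtypes) . ')',
            @$argtypes,
        );
        return spi_exec_prepared($plan{$key}, @args);
    };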
I don't see either of those as significant issues: "If you need more
control for a particular SP then don't use SP::* to call that SP."
If there was a non-cached version that was essentially just sugar for
the SPI stuff, I think that would be more predictable, no? I'm not
saying there shouldn't be a cached interface, just that it should not
be the first choice when using polymorphic functions and non-PL/Perl
functions.
So you're suggesting SP::foo(...) _always_ executes foo(...) via a bunch
of spi_* calls. Umm. I thought performance was a major driving factor.
Sounds like you're more keen on syntactic sugar.
Tim.
On Jan 5, 2010, at 12:59 PM, Tim Bunce wrote:
So you're suggesting SP::foo(...) _always_ executes foo(...) via a bunch
of spi_* calls. Umm. I thought performance was a major driving factor.
Sounds like you're more keen on syntactic sugar.
I'm saying do both. Make the cached version the one that will be used most often, but make available a second version that doesn't cache so that you get the sugar and the polymorphic dispatch. Such would only have to be used in cases where there is more than one function that takes the same number of arguments. The rest of the time -- most of the time, that is -- one can use the cached version.
Best,
David
On Tue, Jan 05, 2010 at 01:05:40PM -0800, David E. Wheeler wrote:
On Jan 5, 2010, at 12:59 PM, Tim Bunce wrote:
So you're suggesting SP::foo(...) _always_ executes foo(...) via a bunch
of spi_* calls. Umm. I thought performance was a major driving factor.
Sounds like you're more keen on syntactic sugar.
I'm saying do both. Make the cached version the one that will be used
most often, but make available a second version that doesn't cache so
that you get the sugar and the polymorphic dispatch. Such would only
have to be used in cases where there is more than one function that
takes the same number of arguments. The rest of the time -- most of
the time, that is -- one can use the cached version.
I think I have a best-of-both solution. E-mail to follow...
Tim.
Tim Bunce <Tim.Bunce@pobox.com> writes:
On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote:
Definite benefit, there. How does it handle the difference between
IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions?
It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be
(documented as being) ignored in this context.
Just for the record, I think that would be a seriously bad idea.
There is a semantic difference there (having to do with snapshot
management), and ignoring it would mean that a function could behave
subtly differently depending on how it was called. It's the kind of
thing that would be a nightmare to debug, too, because you'd never
see a problem except when the right sort of race condition occurred
with another transaction.
I see downthread that you seem to have an approach without this gotcha,
so that's fine, but I wanted to make it clear that you can't just ignore
volatility.
regards, tom lane
On Tue, Jan 05, 2010 at 06:54:36PM -0500, Tom Lane wrote:
Tim Bunce <Tim.Bunce@pobox.com> writes:
On Thu, Dec 31, 2009 at 09:47:24AM -0800, David E. Wheeler wrote:
Definite benefit, there. How does it handle the difference between
IMMUTABLE | STABLE | VOLATILE, as well as STRICT functions?
It doesn't at the moment. I think IMMUTABLE, STABLE and VOLATILE can be
(documented as being) ignored in this context.
Just for the record, I think that would be a seriously bad idea.
There is a semantic difference there (having to do with snapshot
management), and ignoring it would mean that a function could behave
subtly differently depending on how it was called. It's the kind of
thing that would be a nightmare to debug, too, because you'd never
see a problem except when the right sort of race condition occurred
with another transaction.
I see downthread that you seem to have an approach without this gotcha,
so that's fine, but I wanted to make it clear that you can't just ignore
volatility.
Ok, thanks Tom.
For my own benefit, being a PostgreSQL novice, could you expand a little?
For example, given two stored procedures, A and V, where V is marked
VOLATILE and both are plperl. How would having A call V directly, within
the plperl interpreter, cause problems?
Tim.
Tim Bunce <Tim.Bunce@pobox.com> writes:
For my own benefit, being a PostgreSQL novice, could you expand a little?
For example, given two stored procedures, A and V, where V is marked
VOLATILE and both are plperl. How would having A call V directly, within
the plperl interpreter, cause problems?
That case is fine. The problem would be in calling, say, VOLATILE from
STABLE. Any SPI queries executed inside the VOLATILE function would
need to be handled under read-write not read-only rules.
Now it's perhaps possible for you to track that yourself and make sure
to call SPI with the right arguments for the type of function you're
currently in, even if you didn't get to it via the front door. But
that's a far cry from "ignoring" the volatility property. It seems
nontrivial to do if you try to set things up so that no plperl code is
executed during the transition from one function to another.
regards, tom lane
On Wed, Jan 6, 2010 at 9:46 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Tim Bunce <Tim.Bunce@pobox.com> writes:
For my own benefit, being a PostgreSQL novice, could you expand a little?
For example, given two stored procedures, A and V, where V is marked
VOLATILE and both are plperl. How would having A call V directly, within
the plperl interpreter, cause problems?
That case is fine. The problem would be in calling, say, VOLATILE from
STABLE. Any SPI queries executed inside the VOLATILE function would
need to be handled under read-write not read-only rules.
Now it's perhaps possible for you to track that yourself and make sure
to call SPI with the right arguments for the type of function you're
currently in, even if you didn't get to it via the front door. But
that's a far cry from "ignoring" the volatility property. It seems
nontrivial to do if you try to set things up so that no plperl code is
executed during the transition from one function to another.
I think it's becoming clear that it's hopeless to make this work in a
way that is parallel to what will happen if you call these functions
via a real SPI call. Even if Tim were able to reimplement all of our
semantics in terms of what is immutable, stable, volatile,
overloading, default arguments, variadic arguments, etc., I am fairly
certain that we do not wish to maintain a Perl reimplementation of all
of our calling conventions which will then have to be updated every
time someone adds a new bit of magic to the core code.
I think what we should do is either (1) implement a poor man's caching
that doesn't try to cope with any of these issues, and document that
you get what you pay for or (2) reject this idea in its entirety.
Trying to reimplement all of our normal function call semantics in a
caching layer does not seem very sane.
...Robert
Robert Haas <robertmhaas@gmail.com> writes:
I think what we should do is either (1) implement a poor man's caching
that doesn't try to cope with any of these issues, and document that
you get what you pay for or (2) reject this idea in its entirety.
Trying to reimplement all of our normal function call semantics in a
caching layer does not seem very sane.
What about (3) implementing the caching layer in the core code so that
any caller benefits from it? I guess the size of the project is not the
same though.
Regards,
--
dim
Tom Lane wrote:
Tim Bunce <Tim.Bunce@pobox.com> writes:
For my own benefit, being a PostgreSQL novice, could you expand a little?
For example, given two stored procedures, A and V, where V is marked
VOLATILE and both are plperl. How would having A call V directly, within
the plperl interpreter, cause problems?
That case is fine. The problem would be in calling, say, VOLATILE from
STABLE. Any SPI queries executed inside the VOLATILE function would
need to be handled under read-write not read-only rules.
Now it's perhaps possible for you to track that yourself and make sure
to call SPI with the right arguments for the type of function you're
currently in, even if you didn't get to it via the front door. But
that's a far cry from "ignoring" the volatility property. It seems
nontrivial to do if you try to set things up so that no plperl code is
executed during the transition from one function to another.
I don't understand that phrase "call SPI with the right arguments for
the type of function you're currently in". What calls that we make from
plperl code would have different arguments depending on the volatility
of the function? If a cached plan is going to behave differently, I'd be
inclined to say that we should only allow direct inter-sp calling to
volatile functions from volatile functions - if I understand you right,
the only problem could be caused by calling in this direction; a
volatile function calling a stable function would not cause a problem.
That is surely the most likely case anyway. I at least rarely create
non-volatile plperl functions, apart from an occasional immutable
function that probably shouldn't be calling SPI anyway.
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
I don't understand that phrase "call SPI with the right arguments for
the type of function you're currently in". What calls that we make from
plperl code would have different arguments depending on the volatility
of the function?
eg, in plperl_spi_exec,
spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
limit);
If a cached plan is going to behave differently, I'd be
inclined to say that we should only allow direct inter-sp calling to
volatile functions from volatile functions - if U understand you right
the only problem could be caused by calling in this direction, a
volatile function calling a stable function would not cause a problem.
The other way is just as wrong.
regards, tom lane
Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
I don't understand that phrase "call SPI with the right arguments for
the type of function you're currently in". What calls that we make from
plperl code would have different arguments depending on the volatility
of the function?
eg, in plperl_spi_exec,
spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
limit);
OK, but won't that automatically supply the value from the function
called from postgres, which will be the right thing? i.e. if postgres
calls S which direct-calls V which calls SPI_execute(), the value of
current_call_data->prodesc->fn_readonly in the call above will be
supplied from S, not V, since S will be at the top of the plperl call stack.
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
Tom Lane wrote:
spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OK, but won't that automatically supply the value from the function
called from postgres, which will be the right thing?
My point was that that is exactly the wrong thing. If I have a function
declared stable, it must not suddenly start behaving as volatile because
it was called from a volatile function. Nor vice versa.
Now as I mentioned upthread, there might be other ways to get the
correct value of the readonly parameter. One that comes to mind is
to somehow attach it to the spi call "at compile time", whatever that
means in the perl world. But you can't just have it be determined by
the outermost active function call.
regards, tom lane
Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Tom Lane wrote:
spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OK, but won't that automatically supply the value from the function
called from postgres, which will be the right thing?
My point was that that is exactly the wrong thing. If I have a function
declared stable, it must not suddenly start behaving as volatile because
it was called from a volatile function. Nor vice versa.
Now as I mentioned upthread, there might be other ways to get the
correct value of the readonly parameter. One that comes to mind is
to somehow attach it to the spi call "at compile time", whatever that
means in the perl world. But you can't just have it be determined by
the outermost active function call.
OK.
Well, no doubt Tim might have better ideas, but the only way I can think
of is to attach a readonly attribute (see perldoc attributes) to the
function and then pass that back in the SPI call (not sure how easy it
is to get the caller's attributes in C code). Unless we come up with a
neatish way I'd be a bit inclined to agree with Robert that this is all
looking too complex.
Next question: what do we do if a direct-called function calls
return_next()? That at least must surely take effect in the caller's
context - the callee won't have anywhere to stash the results at all.
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
Next question: what do we do if a direct-called function calls
return_next()? That at least must surely take effect in the caller's
context - the callee won't have anywhere to stash the results at all.
Whatever do you mean by "take effect in the caller's context"? I surely
hope it's not "return the row to the caller's caller, who likely isn't
expecting anything of the kind".
I suspect Tim will just answer that he isn't going to try to
short-circuit the call path for set-returning functions.
regards, tom lane
Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Next question: what do we do if a direct-called function calls
return_next()? That at least must surely take effect in the caller's
context - the callee won't have anywhere to stash the results at all.
Whatever do you mean by "take effect in the caller's context"? I surely
hope it's not "return the row to the caller's caller, who likely isn't
expecting anything of the kind".
I suspect Tim will just answer that he isn't going to try to
short-circuit the call path for set-returning functions.
FYI, I am excited PL/Perl is getting a good review and cleaning by Tim.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
On Wed, Jan 06, 2010 at 11:14:38AM -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Tom Lane wrote:
spi_rv = SPI_execute(query, current_call_data->prodesc->fn_readonly,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OK, but won't that automatically supply the value from the function
called from postgres, which will be the right thing?
My point was that that is exactly the wrong thing. If I have a function
declared stable, it must not suddenly start behaving as volatile because
it was called from a volatile function. Nor vice versa.
Now as I mentioned upthread, there might be other ways to get the
correct value of the readonly parameter. One that comes to mind is
to somehow attach it to the spi call "at compile time", whatever that
means in the perl world. But you can't just have it be determined by
the outermost active function call.
If you want 'a perl compile time hook', those are called attributes.
http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pm
You can define attributes to affect how a given syntax compiles in perl.
my $var :foo;
or
sub bar :foo;
The subroutine or variable is compiled in a way defined by the
':foo' attribute.
This might be a clean way around the type dispatch issues
as well. One could include the invocant type information in the
perl declaration.
sub sp_something :pg_sp('bigint bigint');
sp_something ("12",0);
Anyway, that looks like a nice interface to me...
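For anyone curious, here's a rough sketch of how the compile-time half of that
could be wired up - the :pg_sp name and everything else here is hypothetical; it
just shows the attribute text being captured via MODIFY_CODE_ATTRIBUTES:

    package SP;
    use strict;
    use warnings;

    # Signature text captured at compile time, keyed by the stringified code ref.
    my %signature_for;

    sub MODIFY_CODE_ATTRIBUTES {
        my ($package, $coderef, @attrs) = @_;
        my @unhandled;
        for my $attr (@attrs) {
            if ($attr =~ /^pg_sp\(\s*'?(.*?)'?\s*\)$/s) {
                $signature_for{$coderef} = $1;   # e.g. 'bigint bigint'
            }
            else {
                push @unhandled, $attr;          # perl rejects anything we don't claim
            }
        }
        return @unhandled;
    }

    sub signature_of { return $signature_for{ $_[0] } }

    1;

    # The package declaring the sub has to be able to find this handler, e.g.:
    #
    #   package main;
    #   our @ISA = ('SP');
    #   sub sp_something :pg_sp('bigint bigint') { ... }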
Although, I don't understand the Pg internals problem faced here
so ... I'm not sure my suggestion is helpful.
Garick
regards, tom lane
Garick Hamlin <ghamlin@isc.upenn.edu> writes:
If you want 'a perl compile time hook', those are called attributes.
http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pm
Hm ... first question that comes to mind is how far back does that work?
The comments on that page about this or that part of it being still
experimental aren't very comforting either...
regards, tom lane
Tom Lane wrote:
Garick Hamlin <ghamlin@isc.upenn.edu> writes:
If you want 'a perl compile time hook', those are called attributes.
http://search.cpan.org/~dapm/perl-5.10.1/lib/attributes.pmHm ... first question that comes to mind is how far back does that work?
The comments on that page about this or that part of it being still
experimental aren't very comforting either...
That's a case of out of date docco more than anything else, AFAIK. It's
been there at least since 5.6.2 (which is the earliest source I have on
hand).
cheers
andrew
On Jan 6, 2010, at 11:27 AM, Andrew Dunstan wrote:
That's a case of out of date docco more than anything else, AFAIK. It's been there at least since 5.6.2 (which is the earliest source I have on hand).
Which likely also means 5.6.1 and quite possibly 5.6.0. I don't recommend anything earlier than 5.6.2, though, frankly, and 5.8.9 is a better choice. 5.10.1 better still. Is there a documented required minimum version for PL/Perl?
Best,
David
"David E. Wheeler" <david@kineticode.com> writes:
Which likely also means 5.6.1 and quite possibly 5.6.0. I don't recommend anything earlier than 5.6.2, though, frankly, and 5.8.9 is a better choice. 5.10.1 better still. Is there a documented required minimum version for PL/Perl?
One of the things on my to-do list for today is to make configure reject
Perl versions less than $TBD. I thought we had agreed a week or so back
that 5.8 was the lowest safe version because of utf8 and other
considerations.
regards, tom lane
On Jan 6, 2010, at 12:20 PM, Tom Lane wrote:
One of the things on my to-do list for today is to make configure reject
Perl versions less than $TBD. I thought we had agreed a week or so back
that 5.8 was the lowest safe version because of utf8 and other
considerations.
+1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better.
David
On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote:
On Jan 6, 2010, at 12:20 PM, Tom Lane wrote:
One of the things on my to-do list for today is to make configure reject
Perl versions less than $TBD. I thought we had agreed a week or so back
that 5.8 was the lowest safe version because of utf8 and other
considerations.
+1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better.
I think we said 5.8.1 at the time, but 5.8.3 sounds good to me.
There would be _very_ few places using < 5.8.6.
Tim.
On Wed, Jan 06, 2010 at 11:41:46AM -0500, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
Next question: what do we do if a direct-called function calls
return_next()? That at least must surely take effect in the caller's
context - the callee won't have anywhere to stash the results at all.
Whatever do you mean by "take effect in the caller's context"? I surely
hope it's not "return the row to the caller's caller, who likely isn't
expecting anything of the kind".
I suspect Tim will just answer that he isn't going to try to
short-circuit the call path for set-returning functions.
For 8.5 I don't think I'll even attempt direct inter-plperl-calls.
I'll just do a nicely-sugared wrapper around spi_exec_prepared().
Either via import, as I outlined earlier, or Garick Hamlin's suggestion
of attributes - which is certainly worth exploring.
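Purely to make the shape of that concrete, an import-based wrapper might end up
looking roughly like this (the names and calling convention are illustrative,
not the actual API, and it assumes spi_prepare()/spi_exec_prepared() live in main::):

    package SP;
    use strict;
    use warnings;

    # SP->import( get_customer => ['int4'] ) installs a get_customer() sub
    # into the caller which prepares the query once and then always goes
    # through spi_exec_prepared(), so quoting is handled by the plan.
    sub import {
        my ($class, %spec) = @_;
        my $caller = caller;

        while (my ($name, $argtypes) = each %spec) {
            my $placeholders = join ', ', map { '$' . $_ } 1 .. @$argtypes;
            my $plan;    # prepared lazily, then reused

            no strict 'refs';
            *{"${caller}::${name}"} = sub {
                $plan ||= ::spi_prepare("select * from $name($placeholders)",
                                        @$argtypes);
                return ::spi_exec_prepared($plan, @_);
            };
        }
        return;
    }

    1;

    # Inside a plperl function one might then write:
    #   SP->import( get_customer => ['int4'] );
    #   my $rv = get_customer(42);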
Tim.
On Jan 6, 2010, at 3:31 PM, Tim Bunce wrote:
For 8.5 I don't think I'll even attempt direct inter-plperl-calls.
I'll just do a nicely-sugared wrapper around spi_exec_prepared().
Either via import, as I outlined earlier, or Garick Hamlin's suggestion
of attributes - which is certainly worth exploring.
If it's just the sugar, then in addition to the export, which is a great idea, I'd still like to have the AUTOLOAD solution, since there may be a bunch of different functions and I might not want to import them all.
Best,
David
Tim Bunce <Tim.Bunce@pobox.com> writes:
On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote:
On Jan 6, 2010, at 12:20 PM, Tom Lane wrote:
One of the things on my to-do list for today is to make configure reject
Perl versions less than $TBD. I thought we had agreed a week or so back
that 5.8 was the lowest safe version because of utf8 and other
considerations.
+1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better.
I think we said 5.8.1 at the time, but 5.8.3 sounds good to me.
There would be _very_ few places using < 5.8.6.
I went with 5.8 as the cutoff, for a couple of reasons: we're not in
the business of telling people they ought to be up-to-date, but only of
rejecting versions that demonstrably fail badly; and I found out that
older versions of awk are not sufficiently competent with && and || to
code a more complex test properly :-(. A version check that doesn't
actually do what it claims to is worse than useless, and old buggy awk
is exactly what you'd expect to find on a box with old buggy perl.
(It's also worth noting that the perl version seen at configure time
is not necessarily that seen at runtime, anyway, so there's not a lot
of point in getting too finicky here.)
regards, tom lane
On Jan 6, 2010, at 5:46 PM, Tom Lane wrote:
I went with 5.8 as the cutoff, for a couple of reasons: we're not in
the business of telling people they ought to be up-to-date, but only of
rejecting versions that demonstrably fail badly; and I found out that
older versions of awk are not sufficiently competent with && and || to
code a more complex test properly :-(. A version check that doesn't
actually do what it claims to is worse than useless, and old buggy awk
is exactly what you'd expect to find on a box with old buggy perl.
Yes, but even a buggy old Perl is quite competent with && and ||. Why use awk to test the version of Perl when you have this other nice utility to do the job?
(It's also worth noting that the perl version seen at configure time
is not necessarily that seen at runtime, anyway, so there's not a lot
of point in getting too finicky here.)
Fair enough.
Best,
David
On Wed, Jan 06, 2010 at 08:46:11PM -0500, Tom Lane wrote:
Tim Bunce <Tim.Bunce@pobox.com> writes:
On Wed, Jan 06, 2010 at 01:45:45PM -0800, David E. Wheeler wrote:
On Jan 6, 2010, at 12:20 PM, Tom Lane wrote:
One of the things on my to-do list for today is to make configure reject
Perl versions less than $TBD. I thought we had agreed a week or so back
that 5.8 was the lowest safe version because of utf8 and other
considerations.
+1, and 5.8.3 at a minimum for utf8 stuff, 5.8.8 much much better.
I think we said 5.8.1 at the time, but 5.8.3 sounds good to me.
There would be _very_ few places using < 5.8.6.
I went with 5.8 as the cutoff, for a couple of reasons: we're not in
the business of telling people they ought to be up-to-date, but only of
rejecting versions that demonstrably fail badly;
I think 5.8.0 will fail badly, possibly demonstrably but more likely in
subtle ways relating to utf8 that are hard to debug.
and I found out that
older versions of awk are not sufficiently competent with && and || to
code a more complex test properly :-(. A version check that doesn't
actually do what it claims to is worse than useless, and old buggy awk
is exactly what you'd expect to find on a box with old buggy perl.
Either of these approaches should work back to perl 5.0...
perl -we 'use 5.008001' 2>/dev/null && echo ok
or
perl -we 'exit($] < 5.008001)' && echo ok
(It's also worth noting that the perl version seen at configure time
is not necessarily that seen at runtime, anyway, so there's not a lot
of point in getting too finicky here.)
A simple
use 5.008001;
at the start of src/pl/plperl/plc_perlboot.pl would address that.
I believe Andrew is planning to commit my plperl refactor patch soon.
He could add it then, or I could add it to my feature patch (which I
plan to reissue soon, with very minor changes, and add to commitfest).
Tim.