Changed SRF in targetlist handling
Hi,
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF-containing targetlists (reusing Result possibly?).
That'd allow us to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist, which is fairly ugly.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
If we accept bigger semantical changes, I'm inclined to instead just get
rid of targetlist SRFs in total; they're really weird and not needed
anymore.
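For illustration (not from the original mail), the two spellings at issue for the simple single-SRF case are:

```sql
-- SRF in the target list (the form whose removal is being discussed):
SELECT generate_series(1, 3) AS g;

-- The equivalent SRF-in-FROM spelling users would rewrite to:
SELECT g FROM generate_series(1, 3) AS g;
```

When a single SRF is the only targetlist entry, both return the same rows.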
One issue with removing targetlist SRFs is that they're currently
considerably faster than SRFs in FROM:
tpch[14693][1]=# COPY (SELECT * FROM generate_series(1, 10000000)) TO '/dev/null';
COPY 10000000
Time: 2217.167 ms
tpch[14693][1]=# COPY (SELECT generate_series(1, 10000000)) TO '/dev/null';
COPY 10000000
Time: 1355.929 ms
tpch[14693][1]=#
I'm not too concerned about that, and we could probably fix it by
removing forced materialization from the relevant code path.
Comments?
Greetings,
Andres Freund
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 23 May 2016 at 08:53, Andres Freund <andres@anarazel.de> wrote:
Hi,
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
If we accept bigger semantical changes, I'm inclined to instead just get
rid of targetlist SRFs in total; they're really weird and not needed
anymore.
One issue with removing targetlist SRFs is that they're currently
considerably faster than SRFs in FROM:
tpch[14693][1]=# COPY (SELECT * FROM generate_series(1, 10000000)) TO '/dev/null';
COPY 10000000
Time: 2217.167 ms
tpch[14693][1]=# COPY (SELECT generate_series(1, 10000000)) TO '/dev/null';
COPY 10000000
Time: 1355.929 ms
tpch[14693][1]=#
I'm not too concerned about that, and we could probably fix it by
removing forced materialization from the relevant code path.
Comments?
SRFs-in-tlist are a lot faster for lockstep iteration etc. They're also
much simpler to write, though if the result rowcount differs
unexpectedly between the functions you get exciting and unexpected
behaviour.
WITH ORDINALITY provides what I think is the last of the functionality
needed to replace SRFs-in-tlist, but at a syntactic complexity and
performance cost. The following example demonstrates that, though it
doesn't do anything that needs LATERAL etc. I'm aware the two forms aren't
semantically identical if the rowcounts differ.
craig=> EXPLAIN ANALYZE SELECT generate_series(1,1000000) x, generate_series(1,1000000) y;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Result  (cost=0.00..5.01 rows=1000 width=0) (actual time=0.024..92.845 rows=1000000 loops=1)
 Planning time: 0.039 ms
 Execution time: 123.123 ms
(3 rows)
Time: 123.719 ms
craig=> EXPLAIN ANALYZE SELECT x, y FROM generate_series(1,1000000) WITH ORDINALITY AS x(i, n) INNER JOIN generate_series(1,1000000) WITH ORDINALITY AS y(i, n) ON (x.n = y.n);
                                                         QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=0.01..97.50 rows=5000 width=64) (actual time=179.863..938.375 rows=1000000 loops=1)
   Merge Cond: (x.n = y.n)
   ->  Function Scan on generate_series x  (cost=0.00..10.00 rows=1000 width=40) (actual time=108.813..303.690 rows=1000000 loops=1)
   ->  Materialize  (cost=0.00..12.50 rows=1000 width=40) (actual time=71.043..372.880 rows=1000000 loops=1)
         ->  Function Scan on generate_series y  (cost=0.00..10.00 rows=1000 width=40) (actual time=71.039..266.209 rows=1000000 loops=1)
 Planning time: 0.184 ms
 Execution time: 970.744 ms
(7 rows)
Time: 971.706 ms
I get the impression the with-ordinality case could perform just as well if
the optimiser recognised a join on the ordinality column and iterated the
functions in lockstep to populate the result row directly. Though that
could perform _worse_ if the function is computationally costly and
benefits significantly from the CPU cache, where we're better off
materializing it or at least executing it in chunks/batches...
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
tl;dr
Semantic changes to SRF-in-target-list processing are undesirable when the
feature is all but deprecated.
I'd accept a refactoring that trades a performance gain for unaffected
queries against a reasonable performance hit for the afflicted ones.
Preamble...
Most recent thread that I can recall seeing on the topic - and where I
believe the rewrite idea was first presented.
/messages/by-id/25750.1458767514@sss.pgh.pa.us
On Sun, May 22, 2016 at 8:53 PM, Andres Freund <andres@anarazel.de> wrote:
Hi,
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Conceptually I'm all for minimizing the impact on queries of this form.
It seems to be the most likely to get written and committed and the least
likely to cause unforeseen issues.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
[...]
If we accept bigger semantical changes, I'm inclined to instead just get
rid of targetlist SRFs in total; they're really weird and not needed
anymore.
I cannot see these, in isolation, being a good option. Nonetheless, I
don't think any semantic change should happen until 9.2 is no longer
supported. I'd be inclined to take a similar approach as with
standard_conforming_strings (minus the execution GUC, just the warning one),
with whatever after-the-fact lessons learned taken into account.
It's worth considering query rewrite and making the construct forbidden as a
joint goal.
For something like a canonical version of this, especially for
composite-returning SRF:
WITH func_call AS (
  SELECT func(tbl.col)
  FROM tbl
)
SELECT (func_call.func).*
FROM func_call;
If we can rewrite the CTE portion into a lateral - with the exact same
semantics (specifically, returning the single-column composite) - then in
the rewritten query the select-list SRF would no longer be present and no
error would be thrown.
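As a sketch of what such a rewrite might produce - hypothetical, and not
semantically identical in every case (e.g. when func() returns zero rows
for some input; func and tbl are the placeholder names used above):

```sql
-- Possible lateral form of the CTE above:
SELECT f.*
FROM tbl
CROSS JOIN LATERAL func(tbl.col) AS f;
```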
For situations where a rewrite cannot be made to behave properly we leave
the construct alone and let the query raise an error.
In considering what I just wrote I'm not particularly enamored with
it...hence my overall conclusion. Can't say I hate it, and after re-reading
the aforementioned thread I'm inclined to like it for cases where, for
instance, we are susceptible to an LCM evaluation.
David J.
Andres Freund <andres@anarazel.de> writes:
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
Yeah, this has been an annoyance for a long time.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
Yes, this was discussed on-list awhile back (I see David found a reference
already). I think it's feasible, although we'd first have to agree
whether we want to remain bug-compatible with the old
least-common-multiple-of-the-periods behavior. I would vote for not,
but it's certainly a debatable thing.
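For reference, the least-common-multiple behavior in question shows up when
two targetlist SRFs have different periods; under the historical semantics
being discussed here, both cycle until LCM of the periods is reached:

```sql
-- Periods 2 and 3 give LCM(2,3) = 6 rows under the old behavior:
SELECT generate_series(1, 2) AS a, generate_series(1, 3) AS b;
-- historically: (1,1),(2,2),(1,3),(2,1),(1,2),(2,3)
```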
If we accept bigger semantical changes, I'm inclined to instead just get
rid of targetlist SRFs in total; they're really weird and not needed
anymore.
This seems a bridge too far to me. It's just way too common to do
"select generate_series(1,n)". We could tell people they have to
rewrite to "select * from generate_series(1,n)", but it would be far
more polite to do that for them.
One issue with removing targetlist SRFs is that they're currently
considerably faster than SRFs in FROM:
I suspect that depends greatly on your test case. But in any case
we could put more effort into optimizing nodeFunctionscan.
regards, tom lane
On Mon, May 23, 2016 at 01:10:29PM -0400, Tom Lane wrote:
This seems a bridge too far to me. It's just way too common to do
"select generate_series(1,n)". We could tell people they have to
rewrite to "select * from generate_series(1,n)", but it would be far
more polite to do that for them.
How about making "TABLE generate_series(1,n)" work? It's even
shorter in exchange for some cognitive load.
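For context on existing syntax (an illustration, using a hypothetical table
name): TABLE is currently only shorthand for selecting from a named relation,
which is why the proposal would be a grammar extension.

```sql
-- Standard: equivalent to SELECT * FROM my_table
TABLE my_table;

-- The proposal would additionally allow something like:
-- TABLE generate_series(1, 10);   -- not valid SQL today
```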
Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
David Fetter <david@fetter.org> writes:
On Mon, May 23, 2016 at 01:10:29PM -0400, Tom Lane wrote:
This seems a bridge too far to me. It's just way too common to do
"select generate_series(1,n)". We could tell people they have to
rewrite to "select * from generate_series(1,n)", but it would be far
more polite to do that for them.
How about making "TABLE generate_series(1,n)" work? It's even
shorter in exchange for some cognitive load.
No thanks --- the word after TABLE ought to be a table name, not some
arbitrary expression. That's way too much mess to save one keystroke.
regards, tom lane
On Mon, May 23, 2016 at 01:36:57PM -0400, Tom Lane wrote:
David Fetter <david@fetter.org> writes:
On Mon, May 23, 2016 at 01:10:29PM -0400, Tom Lane wrote:
This seems a bridge too far to me. It's just way too common to do
"select generate_series(1,n)". We could tell people they have to
rewrite to "select * from generate_series(1,n)", but it would be far
more polite to do that for them.
How about making "TABLE generate_series(1,n)" work? It's even
shorter in exchange for some cognitive load.
No thanks --- the word after TABLE ought to be a table name, not some
arbitrary expression. That's way too much mess to save one keystroke.
It's not just about saving a keystroke. This change would go with
removing the ability to do SRFs in the target list of a SELECT
query.
Cheers,
David.
David Fetter <david@fetter.org> writes:
On Mon, May 23, 2016 at 01:36:57PM -0400, Tom Lane wrote:
David Fetter <david@fetter.org> writes:
How about making "TABLE generate_series(1,n)" work? It's even
shorter in exchange for some cognitive load.
No thanks --- the word after TABLE ought to be a table name, not some
arbitrary expression. That's way too much mess to save one keystroke.
It's not just about saving a keystroke. This change would go with
removing the ability to do SRFs in the target list of a SELECT
query.
I guess you did not understand that I was rejecting doing that.
Telling people they have to modify existing code that does this and
works fine is exactly what I felt we can't do. We might be able to
blow off complicated cases, but I think simpler cases are too common
in the field.
I'm on board with fixing things so that the *implementation* doesn't
support SRF-in-tlist. But we can't just remove it from the language.
regards, tom lane
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
Yeah, this has been an annoyance for a long time.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
Yes, this was discussed on-list awhile back (I see David found a reference
already). I think it's feasible, although we'd first have to agree
whether we want to remain bug-compatible with the old
least-common-multiple-of-the-periods behavior. I would vote for not,
but it's certainly a debatable thing.
+1 on removing LCM.
The behavior of multiple targetlist SRFs is so bizarre that it's
hard to believe anyone would reasonably expect it to work that
way. Agree also that casual, sane usage of target-list SRFs via
generate_series(), unnest(), etc. is exceptionally common...better
not to break those cases without a better justification than code
simplicity.
merlin
On Mon, May 23, 2016 at 1:44 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:36:57PM -0400, Tom Lane wrote:
David Fetter <david@fetter.org> writes:
On Mon, May 23, 2016 at 01:10:29PM -0400, Tom Lane wrote:
This seems a bridge too far to me. It's just way too common to do
"select generate_series(1,n)". We could tell people they have to
rewrite to "select * from generate_series(1,n)", but it would be far
more polite to do that for them.
How about making "TABLE generate_series(1,n)" work? It's even
shorter in exchange for some cognitive load.
No thanks --- the word after TABLE ought to be a table name, not some
arbitrary expression. That's way too much mess to save one keystroke.
It's not just about saving a keystroke. This change would go with
removing the ability to do SRFs in the target list of a SELECT
query.
If you want to make an argument for doing this regardless of the target
list SRF change, by all means do so - but it does absolutely nothing to
mitigate the breakage that would result if we choose this path.
David J.
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.
Yeah, this has been an annoyance for a long time.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
Yes, this was discussed on-list awhile back (I see David found a reference
already). I think it's feasible, although we'd first have to agree
whether we want to remain bug-compatible with the old
least-common-multiple-of-the-periods behavior. I would vote for not,
but it's certainly a debatable thing.
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
Cheers,
David.
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
[...]
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example. With more than a little forbearance, let's just say I don't
agree.
merlin
On 05/23/2016 12:39 PM, Merlin Moncure wrote:
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
[...]
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example. With more than a little forbearance, let's just say I don't
agree.
I'm not necessarily saying that we should totally remove target list
SRFs, but I will point out it has been deprecated ever since SRFs were
first introduced:
http://www.postgresql.org/docs/7.3/static/xfunc-sql.html
"Currently, functions returning sets may also be called in the target
list of a SELECT query. For each row that the SELECT generates by
itself, the function returning set is invoked, and an output row is
generated for each element of the function's result set. Note,
however, that this capability is deprecated and may be removed in
future releases."
I would be in favor of rewriting it to a LATERAL, but IIUC that would not
be entirely backwards compatible either.
I'll also note that, unless I missed something, we also have to consider
that the capability to pipeline results is still only available in the
target list.
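The pipelining point can be illustrated with a query that stops early; a
sketch, relying on the forced-materialization behavior described upthread:

```sql
-- Target-list SRF: rows can be returned as the function produces them,
-- so the LIMIT stops the scan early.
SELECT generate_series(1, 10000000) LIMIT 10;

-- FROM-clause SRF: at the time of this thread, the function result is
-- collected into a tuplestore first, so the LIMIT does not prevent
-- generating all ten million rows.
SELECT * FROM generate_series(1, 10000000) LIMIT 10;
```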
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
On Mon, May 23, 2016 at 02:39:54PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
[...]
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example.
Yes.
Making SRFs in target lists throw an error is a thing that will be
pretty straightforward to deal with in extant code bases, whatever
size of pain in the neck it might be. The line of code that caused
the error would be very clear, and the fix would be very obvious.
Making their behavior different in some way that throws no warnings is
guaranteed to cause subtle and hard to track bugs in extant code
bases. We lost not a few existing users when we caused similar
knock-ons in 8.3 by removing automated casts to text.
I am no longer advocating for removing the functionality. I am just
pointing out that the knock-on effects of changing the functionality
may well cause more pain than the ones from removing it entirely.
With more than a little forbearance, let's just say I don't agree.
If you'd be so kind as to explain your reasons, I think we'd all
benefit.
Cheers,
David.
Merlin Moncure <mmoncure@gmail.com> writes:
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example. With more than a little forbearance, let's just say I don't
agree.
We should consider single and multiple SRFs in a targetlist as distinct
use-cases; only the latter has got weird properties.
There are several things we could potentially do with multiple SRFs in
the same targetlist. In increasing order of backwards compatibility and
effort required:
1. Throw error if there's more than one SRF.
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
3. Rewrite into some other construct that still ends up as a FunctionScan
RTE node, but has the old LCM behavior if the SRFs produce different
numbers of rows. (Perhaps we would not need to expose this construct
as something directly SQL-visible.)
It's certainly arguable that the common use-cases for SRF-in-tlist
don't have more than one SRF per tlist, and thus that implementing #1
would be an appropriate amount of effort. It's worth noting also that
the LCM behavior has been repeatedly reported as a bug, and therefore
that if we do #3 we'll be expending very substantial effort to be
literally bug-compatible with ancient behavior that no one in the
current development group thinks is well-designed. As far as #2 goes,
it would have the advantage that code depending on the same-number-of-
rows case would continue to work as before. David has a point that it
would silently break application code that's actually depending on the
LCM behavior, but how much of that is there likely to be, really?
[ reflects a bit... ] I guess there is room for an option 2-and-a-half:
2.5. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...), but decorate
the FunctionScan RTE to tell the executor to throw an error if the SRFs
don't all return the same number of rows, rather than silently
null-padding. This would have the same behavior as before for the sane
case, and would be very not-silent about cases where apps actually invoked
the LCM behavior. Again, we wouldn't necessarily have to expose such an
option at the SQL level. (Though it strikes me that such a restriction
could have value in its own right, analogous to the STRICT options that
we've invented in some other places to allow insisting on the expected
numbers of rows being returned. ROWS FROM STRICT (srf1(), srf2(), ...),
anybody?)
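The rewrite in options 2 and 2.5 targets the existing ROWS FROM syntax,
which already runs multiple SRFs in lockstep and NULL-pads the shorter
result instead of LCM-cycling:

```sql
-- ROWS FROM runs the functions in lockstep; the shorter result is
-- NULL-padded, here yielding 3 rows: (1,1),(2,2),(NULL,3)
SELECT a, b
FROM ROWS FROM (generate_series(1, 2), generate_series(1, 3)) AS t(a, b);
```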
regards, tom lane
On Mon, May 23, 2016 at 4:05 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 02:39:54PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
On Mon, May 23, 2016 at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in thetarget
list.
Yeah, this has been an annoyance for a long time.
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.
Yes, this was discussed on-list awhile back (I see David found a
reference already). I think it's feasible, although we'd first have to
agree whether we want to remain bug-compatible with the old
least-common-multiple-of-the-periods behavior. I would vote for not,
but it's certainly a debatable thing.
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example.
Yes.
Making SRFs in target lists throw an error is a thing that will be
pretty straightforward to deal with in extant code bases, whatever
size of pain in the neck it might be. The line of code that caused
the error would be very clear, and the fix would be very obvious.
Making their behavior different in some way that throws no warnings is
guaranteed to cause subtle and hard to track bugs in extant code
bases.
I'm advocating that if a presently allowed SRF-in-target-list is allowed
to remain it executes using the same semantics it has today. In all other
cases, including LCM, if the present behavior is undesirable to maintain we
throw an error. I'd hope that such an error can be written in such a way
as to name the offending function or functions.
If the user of a complex query doesn't want to expend the effort to locate
the specific instance of SRF that is in violation they will still have the
option to rewrite all of their uses in that particular query.
David J.
Joe Conway <mail@joeconway.com> writes:
I would be in favor of rewriting it to a LATERAL, but that would not be
backwards compatible entirely either IIUC.
It could be made so, I think, but it may be more trouble than it's worth;
see my previous message.
I'll also note that, unless I missed something, we also have to consider
that the capability to pipeline results is still only available in the
target list.
Yes, we would definitely want to improve nodeFunctionscan.c to perform
better for ValuePerCall SRFs. But that has value independently of this.
regards, tom lane
Tom Lane wrote:
Joe Conway <mail@joeconway.com> writes:
I'll also note that, unless I missed something, we also have to consider
that the capability to pipeline results is still only available in the
target list.
Yes, we would definitely want to improve nodeFunctionscan.c to perform
better for ValuePerCall SRFs. But that has value independently of this.
Ah, so that's what "pipeline results" mean! I hadn't gotten that. I
agree; Abhijit had a patch or a plan for this, a long time ago ...
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Merlin Moncure <mmoncure@gmail.com> writes:
On Mon, May 23, 2016 at 2:13 PM, David Fetter <david@fetter.org> wrote:
On Mon, May 23, 2016 at 01:28:11PM -0500, Merlin Moncure wrote:
+1 on removing LCM.
As a green field project, that would make total sense. As a thing
decades in, it's not clear to me that that would break less stuff or
break it worse than simply disallowing SRFs in the target list, which
has been rejected on bugward-compatibility grounds. I suspect it
would be even worse because disallowing SRFs in target lists would at
least be obvious and localized when it broke code.
If I'm reading this correctly, it sounds to me like you are making the
case that removing target list SRF completely would somehow cause less
breakage than say, rewriting it to a LATERAL based implementation for
example. With more than a little forbearance, let's just say I don't
agree.
We should consider single and multiple SRFs in a targetlist as distinct
use-cases; only the latter has got weird properties.
There are several things we could potentially do with multiple SRFs in
the same targetlist. In increasing order of backwards compatibility and
effort required:
1. Throw error if there's more than one SRF.
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
3. Rewrite into some other construct that still ends up as a FunctionScan
RTE node, but has the old LCM behavior if the SRFs produce different
numbers of rows. (Perhaps we would not need to expose this construct
as something directly SQL-visible.)
It's certainly arguable that the common use-cases for SRF-in-tlist
don't have more than one SRF per tlist, and thus that implementing #1
would be an appropriate amount of effort. It's worth noting also that
the LCM behavior has been repeatedly reported as a bug, and therefore
that if we do #3 we'll be expending very substantial effort to be
literally bug-compatible with ancient behavior that no one in the
current development group thinks is well-designed. As far as #2 goes,
it would have the advantage that code depending on the same-number-of-
rows case would continue to work as before. David has a point that it
would silently break application code that's actually depending on the
LCM behavior, but how much of that is there likely to be, really?
[ reflects a bit... ] I guess there is room for an option 2-and-a-half:
2.5. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...), but decorate
the FunctionScan RTE to tell the executor to throw an error if the SRFs
don't all return the same number of rows, rather than silently
null-padding. This would have the same behavior as before for the sane
case, and would be very not-silent about cases where apps actually invoked
the LCM behavior. Again, we wouldn't necessarily have to expose such an
option at the SQL level. (Though it strikes me that such a restriction
could have value in its own right, analogous to the STRICT options that
we've invented in some other places to allow insisting on the expected
numbers of rows being returned. ROWS FROM STRICT (srf1(), srf2(), ...),
anybody?)
I'd let the engineers decide between 1, 2.5, and 3 - but if we accomplish
our goals while implementing #3 I'd say that would be the best outcome for
the community as whole.
We don't have the luxury of providing a safe-usage mode where people
writing new queries get the error but pre-existing queries are considered
OK. We will have to rely upon education and deal with the occasional bug
report but our long-time customers, even if only a minority would be
affected, will appreciate the effort taken to not break code that has been
working for a long time.
The minority is likely small enough to at least make options 1 and 2.5
viable though I'd think we make an effort to avoid #1.
David J.
On Mon, May 23, 2016 at 4:24 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
Tom Lane wrote:
Joe Conway <mail@joeconway.com> writes:
I'll also note that, unless I missed something, we also have to consider
that the capability to pipeline results is still only available in the
target list.
Yes, we would definitely want to improve nodeFunctionscan.c to perform
better for ValuePerCall SRFs. But that has value independently of this.
Ah, so that's what "pipeline results" mean! I hadn't gotten that. I
agree; Abhijit had a patch or a plan for this, a long time ago ...
Is this sidebar strictly an implementation detail, not user visible?
David J.
"David G. Johnston" <david.g.johnston@gmail.com> writes:
On Mon, May 23, 2016 at 4:24 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
Ah, so that's what "pipeline results" mean! I hadn't gotten that. I
agree; Abhijit had a patch or a plan for this, a long time ago ...
Is this sidebar strictly an implementation detail, not user visible?
Hmm. It could be visible in the sense that the execution of multiple
functions in one ROWS FROM() construct could be interleaved, while
(I think) the current implementation runs each one to completion
serially. But if you're writing code that assumes that, I think you
should not be very surprised when we break it. In any case, that
would not affect the proposed translation for SRFs-in-tlist, since
those have that behavior today.
regards, tom lane
On Mon, May 23, 2016 at 4:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"David G. Johnston" <david.g.johnston@gmail.com> writes:
On Mon, May 23, 2016 at 4:24 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
Ah, so that's what "pipeline results" mean! I hadn't gotten that. I
agree; Abhijit had a patch or a plan for this, a long time ago ...
Is this sidebar strictly an implementation detail, not user visible?
Hmm. It could be visible in the sense that the execution of multiple
functions in one ROWS FROM() construct could be interleaved, while
(I think) the current implementation runs each one to completion
serially. But if you're writing code that assumes that, I think you
should not be very surprised when we break it. In any case, that
would not affect the proposed translation for SRFs-in-tlist, since
those have that behavior today.
Thanks
Sounds like "zipper results" would be a better term for it...but, yes, if
that's the general context it falls into implementation from my
perspective.
But then I don't get Joe's point - if it's an implementation detail why
should it matter if rewriting the SRF-in-tlist to be laterals changes
execution from a serial to an interleaved implementation. Plus, Joe's
claim: "the capability to pipeline results is still only available in the
target list", and yours above are at odds since you claim the rewritten
behavior is the same today. Is there a disconnect in knowledge or are you
talking about different things?
David J.
On 05/23/2016 02:37 PM, David G. Johnston wrote:
But then I don't get Joe's point - if it's an implementation detail why
should it matter if rewriting the SRF-in-tlist to be laterals changes
execution from a serial to an interleaved implementation. Plus, Joe's
claim: "the capability to pipeline results is still only available in
the target list", and yours above are at odds since you claim the
rewritten behavior is the same today. Is there a disconnect in
knowledge or are you talking about different things?
Unless there have been recent changes which I missed, ValuePerCall SRFs
are still run to completion in one go, when executed in the FROM clause,
but they project one-row-at-a-time in the target list. If your SRF
returns many-many rows, the problem with the former case is that the
entire thing has to be materialized in memory.
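Joe's distinction can be sketched abstractly with a Python generator as a stand-in for a ValuePerCall SRF (illustrative only, not the executor's code): targetlist-style consumption pulls one row at a time, while FROM-clause-style consumption materializes the whole result first.

```python
def value_per_call(n):
    """Stand-in for a ValuePerCall SRF: rows are produced one per request."""
    for i in range(1, n + 1):
        yield i

# Targetlist-style consumption: pull one row at a time, constant memory.
gen = value_per_call(10_000_000)
first_row = next(gen)  # no other rows have been computed yet

# FROM-clause-style consumption (nodeFunctionscan today): run the SRF to
# completion and materialize everything before the first row is returned.
materialized = list(value_per_call(5))
print(first_row, materialized)  # 1 [1, 2, 3, 4, 5]
```

The memory cost Joe describes corresponds to the `list(...)` call: for a ten-million-row SRF, the materializing path holds every row at once.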
Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
On 2016-05-23 13:10:29 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
I don't think it necessarily has to. ISTM that if we add a version of
ExecProject()/ExecTargetList() that continues returning multiple rows,
we can confine the knowledge of set-returning evaluation to the one type
of expression we allow to return multiple rows. That'd require a bit of
ugliness to implement stuff like
SELECT generate_series(1, 2)::text, generate_series(1, 2) * 5;
etc. It seems we'd basically have to do one projection step for the
SRFs, and then another for the rest. I'm inclined to think that's
acceptable to get rid of a lot of the related ugliness.
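The two-step split Andres sketches could be modeled roughly like this (an illustrative Python sketch; the function name and shapes are hypothetical, not executor code): step one expands the SRFs into lockstep rows, step two applies the scalar expressions wrapped around each SRF to every row.

```python
def two_step_projection(srf_columns, scalar_fns):
    """Model of a split projection: first expand the SRFs into lockstep
    rows, then apply each wrapping scalar expression to its column."""
    srf_rows = zip(*srf_columns)  # step 1: SRF projection
    # step 2: scalar projection over each produced row
    return [tuple(f(v) for f, v in zip(scalar_fns, row)) for row in srf_rows]

# Models: SELECT generate_series(1, 2)::text, generate_series(1, 2) * 5;
result = two_step_projection(
    [range(1, 3), range(1, 3)],      # the SRF outputs
    [str, lambda x: x * 5],          # ::text cast and * 5 wrapper
)
print(result)  # [('1', 5), ('2', 10)]
```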
One issue with removing targetlist SRFs is that they're currently
considerably faster than SRFs in FROM:
I suspect that depends greatly on your test case. But in any case
we could put more effort into optimizing nodeFunctionscan.
I doubt you'll find cases where it's significantly the other way round
for percall SRFs. The fundamental issue is that targetlist SRFs don't
have to spill to a tuplestore, whereas nodeFunctionscan ones have to
(even if they're percall).
Andres Freund <andres@anarazel.de> writes:
On 2016-05-23 13:10:29 -0400, Tom Lane wrote:
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
I don't think it necessarily has to. ISTM that if we add a version of
ExecProject()/ExecTargetList() that continues returning multiple rows,
we can confine the knowledge of set-returning evaluation to the one type
of expression we allow to return multiple rows. That'd require a bit of
ugliness to implement
stuff like
SELECT generate_series(1, 2)::text, generate_series(1, 2) * 5;
etc. It seems we'd basically have to do one projection step for the
SRFs, and then another for the rest. I'm inclined to think that's
acceptable to get rid of a lot of the related ugliness.
[ shrug... ] That seems like it's morally equivalent to (but uglier than)
what I wanted to do, which is to teach the planner to rewrite the query to
put the SRFs into a lateral FROM item. Splitting the tlist into two
levels will work out to be exactly the same rewriting problem.
regards, tom lane
On 2016-05-25 15:02:23 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-05-23 13:10:29 -0400, Tom Lane wrote:
Would that not lead to, in effect, duplicating all of execQual.c? The new
executor node would still have to be prepared to process all expression
node types.
I don't think it necessarily has to. ISTM that if we add a version of
ExecProject()/ExecTargetList() that continues returning multiple rows,
we can confine the knowledge of set-returning evaluation to the one type
of expression we allow to return multiple rows. That'd require a bit of
ugliness to implement stuff like
SELECT generate_series(1, 2)::text, generate_series(1, 2) * 5;
etc. It seems we'd basically have to do one projection step for the
SRFs, and then another for the rest. I'm inclined to think that's
acceptable to get rid of a lot of the related ugliness.
[ shrug... ] That seems like it's morally equivalent to (but uglier than)
what I wanted to do, which is to teach the planner to rewrite the query to
put the SRFs into a lateral FROM item. Splitting the tlist into two
levels will work out to be exactly the same rewriting problem.
I think that depends on how bug-compatible we want to be. It seems
harder to get the (rather odd!) lockstep iteration behaviour between two
SRFs with the LATERAL approach?
tpch[6098][1]=# SELECT generate_series(1, 3), generate_series(1,3);
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 3 │
└─────────────────┴─────────────────┘
(3 rows)
tpch[6098][1]=# SELECT generate_series(1, 3), generate_series(1,4);
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 3 │
│ 1 │ 4 │
│ 2 │ 1 │
│ 3 │ 2 │
│ 1 │ 3 │
│ 2 │ 4 │
│ 3 │ 1 │
│ 1 │ 2 │
│ 2 │ 3 │
│ 3 │ 4 │
└─────────────────┴─────────────────┘
(12 rows)
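The lockstep/LCM behaviour above can be modeled outside the database. A small Python sketch of the semantics (not the executor's actual implementation): each column cycles from its start when it runs out, and the result ends after the least common multiple of the row counts, when all columns run out together.

```python
from math import lcm  # Python 3.9+

def tlist_srfs(*columns):
    """Model of historical SRF-in-targetlist behavior: each column cycles
    independently; the query ends after lcm(len(c1), len(c2), ...) rows."""
    period = lcm(*(len(col) for col in columns))
    return [tuple(col[i % len(col)] for col in columns) for i in range(period)]

print(tlist_srfs([1, 2, 3], [1, 2, 3]))     # 3 lockstep rows
print(tlist_srfs([1, 2, 3], [1, 2, 3, 4]))  # 12 rows, as in the output above
```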
Regards,
Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-05-25 15:02:23 -0400, Tom Lane wrote:
[ shrug... ] That seems like it's morally equivalent to (but uglier than)
what I wanted to do, which is to teach the planner to rewrite the query to
put the SRFs into a lateral FROM item. Splitting the tlist into two
levels will work out to be exactly the same rewriting problem.
I think that depends on how bug-compatible we want to be. It seems
harder to get the (rather odd!) lockstep iteration behaviour between two
SRFs with the LATERAL approach?
We could certainly make a variant behavior in nodeFunctionscan.c that
emulates that, if we feel that being exactly bug-compatible on the point
is actually what we want. I'm dubious about that though, not least
because I don't think *anyone* actually believes that that behavior isn't
broken. Did you read my upthread message suggesting assorted compromise
choices?
regards, tom lane
On 2016-05-25 15:20:03 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-05-25 15:02:23 -0400, Tom Lane wrote:
[ shrug... ] That seems like it's morally equivalent to (but uglier than)
what I wanted to do, which is to teach the planner to rewrite the query to
put the SRFs into a lateral FROM item. Splitting the tlist into two
levels will work out to be exactly the same rewriting problem.
I think that depends on how bug-compatible we want to be. It seems
harder to get the (rather odd!) lockstep iteration behaviour between two
SRFs with the LATERAL approach?
We could certainly make a variant behavior in nodeFunctionscan.c that
emulates that, if we feel that being exactly bug-compatible on the point
is actually what we want. I'm dubious about that though, not least
because I don't think *anyone* actually believes that that behavior isn't
broken. Did you read my upthread message suggesting assorted compromise
choices?
You mean /messages/by-id/21076.1464034513@sss.pgh.pa.us ?
If so, yes.
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where it doesn't.
I guess you're not planning to work on this?
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2016-05-25 15:20:03 -0400, Tom Lane wrote:
We could certainly make a variant behavior in nodeFunctionscan.c that
emulates that, if we feel that being exactly bug-compatible on the point
is actually what we want. I'm dubious about that though, not least
because I don't think *anyone* actually believes that that behavior isn't
broken. Did you read my upthread message suggesting assorted compromise
choices?
You mean /messages/by-id/21076.1464034513@sss.pgh.pa.us ?
If so, yes.
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where it doesn't.
2.5 would be okay with me.
I guess you're not planning to work on this?
Well, not right now, as it's clearly too late for 9.6. I might hack on
it later if nobody beats me to it.
regards, tom lane
On Wed, May 25, 2016 at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-05-25 15:20:03 -0400, Tom Lane wrote:
We could certainly make a variant behavior in nodeFunctionscan.c that
emulates that, if we feel that being exactly bug-compatible on the point
is actually what we want. I'm dubious about that though, not least
because I don't think *anyone* actually believes that that behavior isn't
broken. Did you read my upthread message suggesting assorted compromise
choices?
You mean /messages/by-id/21076.1464034513@sss.pgh.pa.us ?
If so, yes.
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where it doesn't.
2.5 would be okay with me.
I guess you're not planning to work on this?
Well, not right now, as it's clearly too late for 9.6. I might hack on
it later if nobody beats me to it.
Curious if this approach will also rewrite:
select generate_series(1,generate_series(1,3)) s;
...into
select s from generate_series(1,3) x cross join lateral generate_series(1,x) s;
another interesting case today is:
create sequence s;
select generate_series(1,nextval('s')), generate_series(1,nextval('s'));
this statement never terminates. a lateral rewrite of this query
would always terminate with much better defined and well understood
behaviors -- this is good.
merlin
Merlin Moncure <mmoncure@gmail.com> writes:
On Wed, May 25, 2016 at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where it doesn't.
2.5 would be okay with me.
Curious if this approach will also rewrite:
select generate_series(1,generate_series(1,3)) s;
...into
select s from generate_series(1,3) x cross join lateral generate_series(1,x) s;
Yeah, that would be the idea.
another interesting case today is:
create sequence s;
select generate_series(1,nextval('s')), generate_series(1,nextval('s'));
this statement never terminates. a lateral rewrite of this query
would always terminate with much better defined and well understood
behaviors -- this is good.
Interesting example demonstrating that 100% bug compatibility is not
possible. But as you say, most people would probably prefer the other
behavior anyhow.
regards, tom lane
On Friday, June 3, 2016, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Merlin Moncure <mmoncure@gmail.com> writes:
On Wed, May 25, 2016 at 3:55 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where it doesn't.
2.5 would be okay with me.
Curious if this approach will also rewrite:
select generate_series(1,generate_series(1,3)) s;
...into
select s from generate_series(1,3) x cross join lateral generate_series(1,x) s;
Yeah, that would be the idea.
Ok... It's only a single srf as far as the outer query is concerned so
while it is odd the behavior is well defined and can be transformed while
giving the same result.
another interesting case today is:
create sequence s;
select generate_series(1,nextval('s')), generate_series(1,nextval('s'));
this statement never terminates. a lateral rewrite of this query
would always terminate with much better defined and well understood
behaviors -- this is good.
Interesting example demonstrating that 100% bug compatibility is not
possible. But as you say, most people would probably prefer the other
behavior anyhow.
If taking the 2.5 approach this one would fail as opposed to being
rewritten.
This could be an exception to the policy in #3 and would be ok in #2. It
would fail in #1.
Given the apparent general consensus for 2.5 and the lack of working field
versions of this form, the error seems like a no-brainer.
David J.
"David G. Johnston" <david.g.johnston@gmail.com> writes:
On Friday, June 3, 2016, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Merlin Moncure <mmoncure@gmail.com> writes:
another interesting case today is:
create sequence s;
select generate_series(1,nextval('s')), generate_series(1,nextval('s'));
If taking the 2.5 approach this one would fail as opposed to being
rewritten.
Well, it'd be rewritten and then would fail at runtime because of the SRF
calls not producing the same number of rows. But even option #3 would not
be strictly bug-compatible because it would (I imagine) evaluate the
arguments of each SRF only once. The reason this case doesn't terminate
in the current implementation is that it re-evaluates the SRF arguments
each time we start a SRF over. That's just weird ...
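Tom's explanation can be simulated: if the limit argument is re-evaluated with a fresh nextval() on every restart, the two SRFs' periods keep growing past each other and they never exhaust simultaneously. A rough Python model of the described semantics (a sketch, not real executor code; the function name is illustrative):

```python
from itertools import count

def run_lockstep(max_rows):
    """Models SELECT generate_series(1,nextval('s')),
    generate_series(1,nextval('s')): two lockstep SRFs whose limit is
    re-evaluated (a fresh nextval) whenever that SRF restarts. Returns
    the row count if both ever exhaust together, else None after
    max_rows rows."""
    nextval = count(1)  # stand-in for the sequence 's'
    a_limit, b_limit = next(nextval), next(nextval)
    a_pos = b_pos = rows = 0
    while rows < max_rows:
        a_pos, b_pos, rows = a_pos + 1, b_pos + 1, rows + 1
        done_a, done_b = a_pos == a_limit, b_pos == b_limit
        if done_a and done_b:
            return rows  # both exhausted together: the query would end
        if done_a:
            a_limit, a_pos = next(nextval), 0  # restart re-evaluates nextval()
        if done_b:
            b_limit, b_pos = next(nextval), 0
    return None  # no simultaneous exhaustion within the cap

print(run_lockstep(100_000))  # None: the lockstep never lines up
```

Within any cap the model keeps running, matching Merlin's observation that the real query never terminates.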
regards, tom lane
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
We should consider single and multiple SRFs in a targetlist as distinct
use-cases; only the latter has got weird properties.
There are several things we could potentially do with multiple SRFs in
the same targetlist. In increasing order of backwards compatibility and
effort required:
1. Throw error if there's more than one SRF.
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
The rewrite you propose here seems to NULL-pad rows after the first
SRF is exhausted:
rhaas=# select * from dual, lateral rows from (generate_series(1,3),
generate_series(1,4));
x | generate_series | generate_series
-------+-----------------+-----------------
dummy | 1 | 1
dummy | 2 | 2
dummy | 3 | 3
dummy | | 4
(4 rows)
...whereas with a separate LATERAL clause for each row you get this:
rhaas=# select * from dual, lateral rows from (generate_series(1,3))
a, lateral rows from (generate_series(1,4)) b;
x | a | b
-------+---+---
dummy | 1 | 1
dummy | 1 | 2
dummy | 1 | 3
dummy | 1 | 4
dummy | 2 | 1
dummy | 2 | 2
dummy | 2 | 3
dummy | 2 | 4
dummy | 3 | 1
dummy | 3 | 2
dummy | 3 | 3
dummy | 3 | 4
(12 rows)
The latter is how I'd expect SRF-in-targetlist to work.
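The two candidate semantics in Robert's examples can be modeled in miniature (Python stand-ins, illustrative names, not executor code): a single ROWS FROM(...) item zips the SRFs and NULL-pads the shorter one, while separate LATERAL items produce the full cross product.

```python
from itertools import product, zip_longest

def rows_from(*srfs):
    """LATERAL ROWS FROM (f(), g()): lockstep rows, padding exhausted
    SRFs with NULL (None here)."""
    return list(zip_longest(*srfs, fillvalue=None))

def separate_laterals(*srfs):
    """One LATERAL item per SRF: the full cross product."""
    return list(product(*srfs))

print(rows_from([1, 2, 3], [1, 2, 3, 4]))       # 4 rows; last is (None, 4)
print(len(separate_laterals([1, 2, 3], [1, 2, 3, 4])))  # 12
```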
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
The rewrite you propose here seems to NULL-pad rows after the first
SRF is exhausted:
Yes. That's why I said it's not compatible if the SRFs don't all return
the same number of rows. It seems like a reasonable definition to me
though, certainly much more reasonable than the current run-until-LCM
behavior.
The latter is how I'd expect SRF-in-targetlist to work.
That's not even close to how it works now. It would break *every*
existing application that has multiple SRFs in the tlist, not just
the ones whose SRFs return different numbers of rows. And I'm not
convinced that it's a more useful behavior.
regards, tom lane
On Mon, Jun 6, 2016 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
The rewrite you propose here seems to NULL-pad rows after the first
SRF is exhausted:
Yes. That's why I said it's not compatible if the SRFs don't all return
the same number of rows. It seems like a reasonable definition to me
though, certainly much more reasonable than the current run-until-LCM
behavior.
IOW, this is why, in this mode, the query has to fail.
The latter is how I'd expect SRF-in-targetlist to work.
That's not even close to how it works now. It would break *every*
existing application that has multiple SRFs in the tlist, not just
the ones whose SRFs return different numbers of rows. And I'm not
convinced that it's a more useful behavior.
To clarify, the present behavior is basically a combination of both of
Robert's results.
If the SRFs return the same number of rows the first (zippered) result is
returned without any NULL padding.
If the SRFs return a different number of rows the LCM behavior kicks in and
you get Robert's second result.
SELECT generate_series(1, 4), generate_series(1, 4) ORDER BY 1, 2;
is the same as
SELECT * FROM ROWS FROM ( generate_series(1, 4), generate_series(1, 4) );
BUT
SELECT generate_series(1, 3), generate_series(1, 4) ORDER BY 1, 2;
is the same as
SELECT * FROM ROWS FROM (generate_series(1, 3)) a, LATERAL ROWS FROM
(generate_series(1, 4)) b;
Tom's 2.5 proposal basically says we make the former equivalence succeed
and have the latter one fail.
The rewrite would be unaware of the cardinality of the SRF and so it cannot
conditionally rewrite the query. One of the two must be chosen and the
incompatible behavior turned into an error.
David J.
On 06/06/16 18:30, David G. Johnston wrote:
To clarify, the present behavior is basically a combination of both of
Robert's results.
If the SRFs return the same number of rows the first (zippered) result
is returned without any NULL padding.
If the SRFs return a different number of rows the LCM behavior kicks in
and you get Robert's second result.
No.
SELECT generate_series(1, 4), generate_series(1, 4) ORDER BY 1, 2;
is the same as
SELECT * FROM ROWS FROM ( generate_series(1, 4), generate_series(1, 4) );
BUT
SELECT generate_series(1, 3), generate_series(1, 4) ORDER BY 1, 2;
is the same as
SELECT * FROM ROWS FROM (generate_series(1, 3)) a, LATERAL ROWS FROM
(generate_series(1, 4)) b;
What would you do with:
SELECT generate_series(1, 3), generate_series(1, 6);
?
Tom's 2.5 proposal basically says we make the former equivalence succeed
and have the latter one fail.
The rewrite would be unaware of the cardinality of the SRF and so it
cannot conditionally rewrite the query. One of the two must be chosen
and the incompatible behavior turned into an error.
--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
"David G. Johnston" <david.g.johnston@gmail.com> writes:
If the SRFs return a different number of rows the LCM behavior kicks in and
you get Robert's second result.
Only if the periods of the SRFs are relatively prime. That is, neither of
his examples demonstrate the full weirdness of the current behavior; for
that, you need periods that are multiples of each other. For instance:
SELECT generate_series(1, 2), generate_series(1, 4);
generate_series | generate_series
-----------------+-----------------
1 | 1
2 | 2
1 | 3
2 | 4
(4 rows)
That doesn't comport with any behavior available from LATERAL.
regards, tom lane
On Mon, Jun 6, 2016 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
Oh. I assumed that was the expected behavior. But, ah, what do I know?
The rewrite you propose here seems to NULL-pad rows after the first
SRF is exhausted:
Yes. That's why I said it's not compatible if the SRFs don't all return
the same number of rows. It seems like a reasonable definition to me
though, certainly much more reasonable than the current run-until-LCM
behavior.
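Tom's proposed semantics here - run in lockstep, NULL-padding once a shorter SRF is exhausted - map directly onto Python's `itertools.zip_longest` (an illustrative sketch, not PostgreSQL code; `None` stands in for SQL NULL):

```python
from itertools import zip_longest

def rows_from(*srfs):
    """Model LATERAL ROWS FROM (srf1(), srf2(), ...): the SRFs run in
    lockstep, and shorter ones are NULL-padded (None here)."""
    return list(zip_longest(*srfs, fillvalue=None))

print(rows_from([1, 2, 3], [1, 2, 3, 4]))
# [(1, 1), (2, 2), (3, 3), (None, 4)]
```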
I can't argue with that.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
On Mon, Jun 6, 2016 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
Oh. I assumed that was the expected behavior. But, ah, what do I know?
Lots, I assume -- but in this case, probably next to nothing, just like
most of us, because what sane person or application would be really
relying on the wacko historical behavior, in order to generate some
collective knowledge? However, I think that it is possible that
someone, somewhere has two SRFs-in-targetlist that return the same
number of rows and that the current implementation works fine for them;
if we redefine it to work differently, we would break their application
silently, which seems a worse problem than breaking it noisily while
providing an easy way forward (which is to move SRFs to the FROM list).
My vote is to raise an error in the case of more than one SRF in targetlist.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Jun 6, 2016 at 2:31 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
Robert Haas wrote:
On Mon, Jun 6, 2016 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, May 23, 2016 at 4:15 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
2. Rewrite into LATERAL ROWS FROM (srf1(), srf2(), ...). This would
have the same behavior as before if the SRFs all return the same number
of rows, and otherwise would behave differently.
I thought the idea was to rewrite it as LATERAL ROWS FROM (srf1()),
LATERAL ROWS FROM (srf2()), ...
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
Oh. I assumed that was the expected behavior. But, ah, what do I know?
Lots, I assume -- but in this case, probably next to nothing, just like
most of us, because what sane person or application would be really
relying on the wacko historical behavior, in order to generate some
collective knowledge? However, I think that it is possible that
someone, somewhere has two SRFs-in-targetlist that return the same
number of rows and that the current implementation works fine for them;
if we redefine it to work differently, we would break their application
silently, which seems a worse problem than breaking it noisily while
providing an easy way forward (which is to move SRFs to the FROM list).
My vote is to raise an error in the case of more than one SRF in
targetlist.
As long as someone is willing to put in the effort we can make a subset of
these multiple-SRFs-in-targetlist queries work without any change in the
tabular output, though the processing mechanism might change. Your vote
is essentially #1 up-thread, which seems the most draconian. Assuming a
viable option 2.5 or 3 solution is presented, would you vote against it
being committed? If so I'd like to understand why. I see #1 as basically
OK only if there are technical barriers we cannot overcome - including
performance.
Link to the definition of the various options Tom proposed:
/messages/by-id/21076.1464034513@sss.pgh.pa.us
David J.
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
Robert Haas wrote:
On Mon, Jun 6, 2016 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
No, because then you get the cross-product of multiple SRFs, not the
run-in-lockstep behavior.
Oh. I assumed that was the expected behavior. But, ah, what do I know?
Lots, I assume -- but in this case, probably next to nothing, just like
most of us, because what sane person or application would be really
relying on the wacko historical behavior, in order to generate some
collective knowledge? However, I think that it is possible that
someone, somewhere has two SRFs-in-targetlist that return the same
number of rows and that the current implementation works fine for them;
Yes. Run-in-lockstep is an extremely useful behavior, so much so that
we made a LATERAL variant for it. I do not see a reason to break such
cases in the targetlist.
My vote is to raise an error in the case of more than one SRF in targetlist.
Note that that risks breaking cases that the user does not think are "more
than one SRF". Consider this example using a regression-test table:
regression=# create function foo() returns setof int8_tbl as
regression-# 'select * from int8_tbl' language sql;
CREATE FUNCTION
regression=# select foo();
foo
--------------------------------------
(123,456)
(123,4567890123456789)
(4567890123456789,123)
(4567890123456789,4567890123456789)
(4567890123456789,-4567890123456789)
(5 rows)
regression=# explain verbose select foo();
QUERY PLAN
----------------------------------------------
Result (cost=0.00..5.25 rows=1000 width=32)
Output: foo()
(2 rows)
regression=# select (foo()).*;
q1 | q2
------------------+-------------------
123 | 456
123 | 4567890123456789
4567890123456789 | 123
4567890123456789 | 4567890123456789
4567890123456789 | -4567890123456789
(5 rows)
regression=# explain verbose select (foo()).*;
QUERY PLAN
----------------------------------------------
Result (cost=0.00..5.50 rows=1000 width=16)
Output: (foo()).q1, (foo()).q2
(2 rows)
The reason we can get away with this simplistic treatment of
composite-returning SRFs is precisely the run-in-lockstep behavior.
Otherwise the second query would have returned 25 rows.
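The row counts Tom describes can be sketched in Python (a toy model only; `foo` here is a stand-in generator with made-up values, not the int8_tbl function above):

```python
from itertools import product

def foo():
    """Stand-in for a composite-returning SRF yielding (q1, q2) rows."""
    yield from [(123, 456), (123, 42), (7, 123), (7, 42), (9, -9)]

# Lockstep evaluation: the i-th q1 pairs with the i-th q2, so
# (foo()).q1, (foo()).q2 yields one output row per input row:
lockstep = list(zip((q1 for q1, _ in foo()), (q2 for _, q2 in foo())))

# Without lockstep, the two independent foo() evaluations would
# cross-join, yielding 5 * 5 = 25 rows:
crossed = list(product([q1 for q1, _ in foo()], [q2 for _, q2 in foo()]))

print(len(lockstep), len(crossed))
# 5 25
```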
Now, if we decide to try to rewrite tlist SRFs as LATERAL, it would likely
behoove us to do that rewrite before expanding * not after, so that we can
eliminate the multiple evaluation of foo() that happens currently. (That
makes it a parser problem not a planner problem.) And maybe we should
rewrite non-SRF composite-returning functions this way too, because people
have definitely complained about the extra evaluations in that context.
But my point here is that lockstep evaluation does have practical use
when the SRFs are iterating over matching collections of generated rows.
And that seems like a pretty common use-case.
regards, tom lane
On Mon, Jun 6, 2016 at 2:53 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Now, if we decide to try to rewrite tlist SRFs as LATERAL, it would likely
behoove us to do that rewrite before expanding * not after, so that we can
eliminate the multiple evaluation of foo() that happens currently. (That
makes it a parser problem not a planner problem.) And maybe we should
rewrite non-SRF composite-returning functions this way too, because people
have definitely complained about the extra evaluations in that context.
But my point here is that lockstep evaluation does have practical use
when the SRFs are iterating over matching collections of generated rows.
And that seems like a pretty common use-case.
Yeah, OK. I'm not terribly opposed to going that way. I think the
current behavior sucks badly enough - both because the semantics are
bizarre and because it complicates the whole executor for a niche
feature - that it's worth taking a backward compatibility hit to
change it. I guess I'd prefer #2 to #2.5, #2.5 to #3, and #3 to #1.
I really don't like #1 much - I think I'd almost rather do nothing.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
... I guess I'd prefer #2 to #2.5, #2.5 to #3, and #3 to #1.
I really don't like #1 much - I think I'd almost rather do nothing.
FWIW, that's about my evaluation of the alternatives as well. I fear
that #1 would get a lot of pushback. If we think that something like
"LATERAL ROWS FROM STRICT" is worth having on its own merits, then
doing #2.5 seems worthwhile to me, but otherwise I'm just as happy
with #2. David J. seems to feel that throwing an error (as in #2.5)
rather than silently behaving incompatibly (as in #2) is important,
but I'm not convinced. In a green field I think we'd prefer #2 over
#2.5, so I'd rather go that direction.
regards, tom lane
On Mon, Jun 6, 2016 at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
... I guess I'd prefer #2 to #2.5, #2.5 to #3, and #3 to #1.
I really don't like #1 much - I think I'd almost rather do nothing.
FWIW, that's about my evaluation of the alternatives as well. I fear
that #1 would get a lot of pushback. If we think that something like
"LATERAL ROWS FROM STRICT" is worth having on its own merits, then
doing #2.5 seems worthwhile to me, but otherwise I'm just as happy
with #2. David J. seems to feel that throwing an error (as in #2.5)
rather than silently behaving incompatibly (as in #2) is important,
but I'm not convinced. In a green field I think we'd prefer #2 over
#2.5, so I'd rather go that direction.
I suspect the decision to error or not is a one or two line change in
whatever form the final patch takes. It seems like approach #2 is
acceptable on a theoretical level, which implies there is no desire to make
the existing LCM behavior available post-patch.
Assuming it is simple, everyone will have a chance to make their
opinion known on whether the 2.0 or 2.5 variation is preferable for the
final commit. If a decision needs to be made sooner due to a design
decision I'd hope the author of the patch would make that known so we can
bring this to resolution at that point instead.
David J.
On Mon, Jun 6, 2016 at 3:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
... I guess I'd prefer #2 to #2.5, #2.5 to #3, and #3 to #1.
I really don't like #1 much - I think I'd almost rather do nothing.
FWIW, that's about my evaluation of the alternatives as well. I fear
that #1 would get a lot of pushback. If we think that something like
"LATERAL ROWS FROM STRICT" is worth having on its own merits, then
doing #2.5 seems worthwhile to me, but otherwise I'm just as happy
with #2. David J. seems to feel that throwing an error (as in #2.5)
rather than silently behaving incompatibly (as in #2) is important,
but I'm not convinced. In a green field I think we'd prefer #2 over
#2.5, so I'd rather go that direction.
Same here. That behavior is actually potentially quite useful, right?
Like, you might want to rely on the NULL-extension thing, if it were
documented as behavior you can count on?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-05-25 16:55:23 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-05-25 15:20:03 -0400, Tom Lane wrote:
We could certainly make a variant behavior in nodeFunctionscan.c that
emulates that, if we feel that being exactly bug-compatible on the point
is actually what we want. I'm dubious about that though, not least
because I don't think *anyone* actually believes that that behavior isn't
broken. Did you read my upthread message suggesting assorted compromise
choices?
You mean /messages/by-id/21076.1464034513@sss.pgh.pa.us ?
If so, yes.
If we go with rewriting this into LATERAL, I'd vote for 2.5 (trailed by
option 1), that'd keep most of the functionality, and would break
visibly rather than invisibly in the cases where not.
2.5 would be okay with me.
I guess you're not planning to work on this?
Well, not right now, as it's clearly too late for 9.6. I might hack on
it later if nobody beats me to it.
FWIW, as it's blocking my plans for executor related rework (expression
evaluation, batch processing) I started to hack on this.
I've an implementation that
1) turns all targetlist SRFs (tSRF from now on) into ROWS FROM
expressions. If there are tSRFs in the argument of a tSRF, those become
a separate, lateral, ROWS FROM expression.
2) If grouping/window functions are present, the entire query is wrapped
in a subquery RTE, except for the set-returning function. All
referenced Var|Aggref|GroupingFunc|WindowFunc|Param nodes in the
original targetlist are made to reference that subquery, which gets a
TargetEntry for them.
3) If sortClause does *not* reference any tSRFs the sorting is evaluated
in a subquery, to preserve the output ordering of SRFs in queries
like
SELECT id, generate_series(1,3) FROM (VALUES(1),(2)) d(id) ORDER BY id DESC;
If, in contrast, sortClause does reference the tSRF output, it's
evaluated in the outer query.
This seems to generally work, and allows removing considerable amounts
of code.
So far I have one problem without an easy solution: Historically queries
like
=# SELECT id, generate_series(1,2) FROM (VALUES(1),(2)) few(id);
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 2 │ 2 │
└────┴─────────────────┘
have preserved the SRF output ordering. But by turning the SRF into a
ROWS FROM, there's no guarantee that the cross join between "few" and
generate_series(1,2) above is implemented in that order. I.e. we can get
something like
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
└────┴─────────────────┘
because it's implemented as
┌──────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN │
├──────────────────────────────────────────────────────────────────────────────┤
│ Nested Loop (cost=0.00..35.03 rows=2000 width=8) │
│ -> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=4) │
│ -> Materialize (cost=0.00..0.04 rows=2 width=4) │
│ -> Values Scan on "*VALUES*" (cost=0.00..0.03 rows=2 width=4) │
└──────────────────────────────────────────────────────────────────────────────┘
Right now I see no easy and nice-ish way to constrain that.
Besides that I'm structurally wondering whether turning the original
query into a subquery is the right thing to do. It requires some kind of
ugly munching of Query->*, and has the above problem. One alternative
would be to instead perform the necessary magic in grouping_planner(),
by "manually" adding nestloop joins before/after create_ordered_paths()
(depending on SRFs being referenced in the sort clause). That'd create
plans we'd not have created so far, by layering NestLoop and
FunctionScan nodes above the normal query - that'd allow us to easily
force the ordering of SRF evaluation.
If we go the subquery route, I'm wondering about where to tackle the
restructuring. So far I'm doing it very early in subquery_planner() -
otherwise the aggregation/sorting/... behaviour is easier to handle.
Perhaps doing it in standard_planner() itself would be better though.
An alternative approach would be to do this during parse-analysis, but I
think that might end up being confusing, because stored rules would
suddenly have a noticeably different structure, and it'd tie us a lot
more to the details of that transformation than I'd like.
Besides the removal of the least-common-multiple behaviour of tSRF queries,
there are some other consequences that using function scans has:
Previously if a tSRF was never evaluated, it didn't cause the number of
rows to increase. E.g.
SELECT id, COALESCE(1, generate_series(1,2)) FROM (VALUES(1),(2)) few(id);
only produced two rows. But using joins means that a simple
implementation using ROWS FROM returns four rows. We could try to
inject sufficient join conditions in that type of query, to prune down
the number of rows again, but I really don't want to go there - it's
kinda hard in the general case...
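The corner case can be modeled in Python (a toy sketch of the two semantics; the values are invented for illustration):

```python
few = [1, 2]  # the two rows of VALUES(1),(2)
srf = [1, 2]  # what generate_series(1,2) would emit

# Historical behavior: COALESCE(1, srf) short-circuits, the SRF is
# never run, so the row count stays at len(few) = 2.
old = [(id_, 1) for id_ in few]

# Naive ROWS FROM rewrite: the SRF becomes a joined relation and is
# expanded regardless, so we get len(few) * len(srf) = 4 rows.
new = [(id_, 1) for id_ in few for _ in srf]

print(len(old), len(new))
# 2 4
```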
Comments?
Regards,
Andres
Andres Freund <andres@anarazel.de> writes:
I've an implementation that
1) turns all targetlist SRFs (tSRF from now on) into ROWS FROM
expressions. If there are tSRFs in the argument of a tSRF, those become
a separate, lateral, ROWS FROM expression.
2) If grouping/window functions are present, the entire query is wrapped
in a subquery RTE, except for the set-returning function. All
referenced Var|Aggref|GroupingFunc|WindowFunc|Param nodes in the
original targetlist are made to reference that subquery, which gets a
TargetEntry for them.
FWIW, I'd be inclined to do the subquery RTE all the time, adding some
optimization fence to ensure it doesn't get folded back. That fixes
your problem here:
So far I have one problem without an easy solution: Historically queries
like
=# SELECT id, generate_series(1,2) FROM (VALUES(1),(2)) few(id);
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 2 │ 2 │
└────┴─────────────────┘
have preserved the SRF output ordering. But by turning the SRF into a
ROWS FROM, there's no guarantee that the cross join between "few" and
generate_series(1,2) above is implemented in that order.
Besides that I'm structurally wondering whether turning the original
query into a subquery is the right thing to do. It requires some kind of
ugly munching of Query->*, and has the above problem.
It does not seem like it should be that hard, certainly no worse than
subquery pullup. Want to show code?
An alternative approach would be to do this during parse-analysis, but I
think that might end up being confusing, because stored rules would
suddenly have a noticeably different structure, and it'd tie us a lot
more to the details of that transformation than I'd like.
-1 on that; we do not want this transformation visible in stored rules.
Besides the removal of the least-common-multiple behaviour of tSRF queries,
there are some other consequences that using function scans has:
Previously if a tSRF was never evaluated, it didn't cause the number of
rows to increase. E.g.
SELECT id, COALESCE(1, generate_series(1,2)) FROM (VALUES(1),(2)) few(id);
only produced two rows. But using joins means that a simple
implementation using ROWS FROM returns four rows.
Hmm. I don't mind changing behavior in that sort of corner case.
If we're prepared to discard the LCM behavior, this seems at least
an order of magnitude less likely to be worth worrying about.
Having said that, I do seem to recall a bug report about misbehavior when
a SRF was present in just one arm of a CASE statement. That would have
the same type of behavior as you describe here, and evidently there's at
least one person out there depending on it.
Would it be worth detecting SRFs below CASE/COALESCE/etc and throwing
an error? It would be easier to sell throwing an error than silently
changing behavior, I think.
regards, tom lane
On 2016-08-02 19:02:38 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I've an implementation that
1) turns all targetlist SRFs (tSRF from now on) into ROWS FROM
expressions. If there are tSRFs in the argument of a tSRF, those become
a separate, lateral, ROWS FROM expression.
2) If grouping/window functions are present, the entire query is wrapped
in a subquery RTE, except for the set-returning function. All
referenced Var|Aggref|GroupingFunc|WindowFunc|Param nodes in the
original targetlist are made to reference that subquery, which gets a
TargetEntry for them.
FWIW, I'd be inclined to do the subquery RTE all the time,
Yea, that's what I ended up doing.
adding some
optimization fence to ensure it doesn't get folded back. That fixes
your problem here:
So far I have one problem without an easy solution: Historically queries
like
=# SELECT id, generate_series(1,2) FROM (VALUES(1),(2)) few(id);
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 2 │ 2 │
└────┴─────────────────┘
have preserved the SRF output ordering. But by turning the SRF into a
ROWS FROM, there's no guarantee that the cross join between "few" and
generate_series(1,2) above is implemented in that order.
But I don't see how that fixes the above problem? The join, on the
top-level because of aggregates, can still be implemented as
subquery join srf or as srf join subquery, with the different output order
that implies. I've duct-taped together a solution for that, by forcing
the lateral machinery to always see a dependency from the SRF to the
subquery; but that probably needs a nicer fix than a RangeTblEntry->deps
field which is processed in extract_lateral_references() ;)
Besides that I'm structurally wondering whether turning the original
query into a subquery is the right thing to do. It requires some kind of
ugly munching of Query->*, and has the above problem.
It does not seem like it should be that hard, certainly no worse than
subquery pullup. Want to show code?
It's not super hard, there's some stuff like pushing/not-pushing
various sortgrouprefs to the subquery. But I think we can live with it.
Let me clean up the code some, hope to have something today or tomorrow.
An alternative approach would be to do this during parse-analysis, but I
think that might end up being confusing, because stored rules would
suddenly have a noticeably different structure, and it'd tie us a lot
more to the details of that transformation than I'd like.
-1 on that; we do not want this transformation visible in stored rules.
Agreed.
Besides the removal of the least-common-multiple behaviour of tSRF queries,
there are some other consequences that using function scans has:
Previously if a tSRF was never evaluated, it didn't cause the number of
rows to increase. E.g.
SELECT id, COALESCE(1, generate_series(1,2)) FROM (VALUES(1),(2)) few(id);
only produced two rows. But using joins means that a simple
implementation using ROWS FROM returns four rows.
Hmm. I don't mind changing behavior in that sort of corner case.
If we're prepared to discard the LCM behavior, this seems at least
an order of magnitude less likely to be worth worrying about.
I think it's fine, and potentially less confusing.
Would it be worth detecting SRFs below CASE/COALESCE/etc and throwing
an error? It would be easier to sell throwing an error than silently
changing behavior, I think.
Hm. We could, but I think the new behaviour would actually make sense in
the long run. Interpreting the coalesce to run on the output of the SRF
doesn't seem bad to me.
I found another edge case, which we need to make a decision about.
'record' returning SRFs can't be transformed easily into a ROWS
FROM. Consider e.g. the following from the regression tests:
create function array_to_set(anyarray) returns setof record as $$
select i AS "index", $1[i] AS "value" from generate_subscripts($1, 1) i
$$ language sql strict immutable;
select array_to_set(array['one', 'two']);
┌──────────────┐
│ array_to_set │
├──────────────┤
│ (1,one) │
│ (2,two) │
└──────────────┘
(2 rows)
which currently works. That currently can't be modeled as ROWS FROM()
directly, because that desperately wants to return the columns as
columns, which we can't do for 'record' returning things, because they
don't have defined columns. For composite returning SRFs I've currently
implemented that by generating a ROWS() expression, but that doesn't
work for record.
So it seems like we need some, not necessarily user exposed, way of
making nodeFunctionscan.c return the return value as one datum. One
way, as suggested by Andrew G. on IRC, would be to interpret an empty
column definition list in ROWS FROM that way.
Greetings,
Andres Freund
On 2016-08-02 16:30:55 -0700, Andres Freund wrote:
Besides that I'm structurally wondering whether turning the original
query into a subquery is the right thing to do. It requires some kind of
ugly munching of Query->*, and has the above problem.
It does not seem like it should be that hard, certainly no worse than
subquery pullup. Want to show code?
It's not super hard, there's some stuff like pushing/not-pushing
various sortgrouprefs to the subquery. But I think we can live with it.
Let me clean up the code some, hope to have something today or
tomorrow.
Here we go. This *clearly* is a POC, not more. But it mostly works.
0001 - adds some test, some of those change after the later patches
0002 - main SRF via ROWS FROM () implementation
0003 - Large patch removing now unused code. Most satisfying.
The interesting bit is obviously 0002. What it basically does is, at the beginning
of subquery_planner():
1) unsrfify:
move the jointree into a subquery
2) unsrfify_reference_subquery_mutator:
process the old targetlist to reference the new subquery. If a
TargetEntry doesn't contain a set, it's entirely moved into the
subquery. Otherwise all Vars/Aggrefs/... it references are moved to
the subquery, and referenced in the outer query's target list.
3) unsrfify_implement_srfs_mutator:
Replace set returning functions in the targetlist with references to
a new FUNCTION RTE. All non-nested tSRFs are part of the same RTE
(i.e. the least common multiple behaviour is gone). All tSRFs in
arguments are implemented as another FUNCTION RTE.
I discovered that we allow SRFs in UPDATE target lists. It's not clear
to me what that's supposed to mean. Nor how exactly to implement that,
given expand_targetlist(). Right now that fails with the patch, because
it re-inserts Var's for the relation replaced by the subquery.
Note that I've not bothered to fix up the regression test output - I'm
certain that explain output and such will still change.
Biggest questions / tasks:
* General approach
* DML handling
* Operator implementation
* SETOF record handling
* correct handling of lateral dependency from RTE to subquery to force
evaluation order, instead of my RangeTblEntry->deps hack.
* lot of cleanup
Comments?
Greetings,
Andres Freund
Attachments:
0001-Add-some-more-targetlist-srf-tests.patch (text/x-patch; charset=us-ascii)
From 8173640aec148d3c20e7089c4d04a040fa8867e2 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Wed, 3 Aug 2016 18:29:42 -0700
Subject: [PATCH 1/3] Add some more targetlist srf tests.
---
src/test/regress/expected/tsrf.out | 156 +++++++++++++++++++++++++++++++++++++
src/test/regress/parallel_schedule | 2 +-
src/test/regress/serial_schedule | 1 +
src/test/regress/sql/tsrf.sql | 56 +++++++++++++
4 files changed, 214 insertions(+), 1 deletion(-)
create mode 100644 src/test/regress/expected/tsrf.out
create mode 100644 src/test/regress/sql/tsrf.sql
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
new file mode 100644
index 0000000..119c046
--- /dev/null
+++ b/src/test/regress/expected/tsrf.out
@@ -0,0 +1,156 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+-- simple srf
+SELECT generate_series(1, 3);
+ generate_series
+-----------------
+ 1
+ 2
+ 3
+(3 rows)
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 3
+ 2 | 4
+ 3 | 5
+(3 rows)
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 1
+ 2 | 2
+ 1 | 3
+ 2 | 4
+(4 rows)
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+ generate_series
+-----------------
+ 1
+ 1
+ 2
+ 1
+ 2
+ 3
+(6 rows)
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+ERROR: functions and operators can take at most one set argument
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+-- SRF output order of sorting is maintained, if SRF is not referenced
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+ id | g
+----+---
+ 3 | 1
+ 3 | 2
+ 3 | 3
+ 2 | 1
+ 2 | 2
+ 2 | 3
+ 1 | 1
+ 1 | 2
+ 1 | 3
+(9 rows)
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+ id | g
+----+---
+ 1 | 3
+ 1 | 2
+ 1 | 1
+ 2 | 3
+ 2 | 2
+ 2 | 1
+ 3 | 3
+ 3 | 2
+ 3 | 1
+(9 rows)
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa;
+ dataa | count | min | max | generate_series
+-------+-------+-----+-----+-----------------
+ b | 1 | 3 | 3 | 1
+ b | 1 | 3 | 3 | 2
+ b | 1 | 3 | 3 | 3
+ a | 2 | 1 | 2 | 1
+ a | 2 | 1 | 2 | 2
+ a | 2 | 1 | 2 | 3
+(6 rows)
+
+-- SRFs are computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+ id | lag | count | generate_series
+----+-----+-------+-----------------
+ 1 | | 3 | 1
+ 1 | | 3 | 2
+ 1 | | 3 | 3
+ 2 | 1 | 3 | 1
+ 2 | 1 | 3 | 2
+ 2 | 1 | 3 | 3
+ 3 | 2 | 3 | 1
+ 3 | 2 | 3 | 2
+ 3 | 2 | 3 | 3
+(9 rows)
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+ dataa | count | min | max | generate_series
+-------+-------+-----+-----+-----------------
+ b | 1 | 3 | 3 | 1
+ a | 2 | 1 | 2 | 1
+ b | 1 | 3 | 3 | 2
+ a | 2 | 1 | 2 | 2
+ b | 1 | 3 | 3 | 3
+ a | 2 | 1 | 2 | 3
+(6 rows)
+
+-- grouping sets are a bit special: they produce NULLs in columns that are not actually NULL
+SELECT dataa, datab b, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY 1,2,3;
+ dataa | b | count
+-------+-----+-------
+ a | bar | 1
+ a | foo | 1
+ a | | 2
+ b | bar | 1
+ b | | 1
+ | bar | 2
+ | foo | 1
+ | | 3
+(8 rows)
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+ data
+------
+ 1
+ 2
+ 3
+ 4
+ 5
+(5 rows)
+
+-- nonsensically, this seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+-- SRFs are not allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+ERROR: set-valued function called in context that cannot accept a set
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+ERROR: set-valued function called in context that cannot accept a set
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+ERROR: set-valued function called in context that cannot accept a set
+-- test DISTINCT ON, LIMIT/OFFSET, correlated subqueries
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 4ebad04..46a119d 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -92,7 +92,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
test: alter_generic alter_operator misc psql async dbsize misc_functions
# rules cannot run concurrently with any test that creates a view
-test: rules psql_crosstab select_parallel
+test: rules psql_crosstab select_parallel tsrf
# ----------
# Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 5c7038d..1f2caa4 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -126,6 +126,7 @@ test: misc_functions
test: rules
test: psql_crosstab
test: select_parallel
+test: tsrf
test: select_views
test: portals_p2
test: foreign_key
diff --git a/src/test/regress/sql/tsrf.sql b/src/test/regress/sql/tsrf.sql
new file mode 100644
index 0000000..eb341bc
--- /dev/null
+++ b/src/test/regress/sql/tsrf.sql
@@ -0,0 +1,56 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+
+-- simple srf
+SELECT generate_series(1, 3);
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+
+-- sort order is maintained if the SRF output is not referenced in the sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa;
+
+-- SRFs are computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+
+-- grouping sets are a bit special: they produce NULLs in columns that are not actually NULL
+SELECT dataa, datab b, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY 1,2,3;
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+
+-- nonsensically, this seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+
+-- SRFs are not allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+
+-- test DISTINCT ON, LIMIT/OFFSET, correlated subqueries
--
2.8.1
Attachment: 0002-Basic-implementation-of-targetlist-SRFs-via-ROWS-FRO.patch (text/x-patch; charset=us-ascii)
From acc5949aaf61352ff3dd0cca2d7e921a8b8746d0 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Fri, 29 Jul 2016 18:51:02 -0700
Subject: [PATCH 2/3] Basic implementation of targetlist SRFs via ROWS FROM.
---
src/backend/executor/execQual.c | 7 +
src/backend/nodes/copyfuncs.c | 2 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 2 +
src/backend/optimizer/plan/initsplan.c | 4 +
src/backend/optimizer/plan/planner.c | 8 +
src/backend/optimizer/prep/prepjointree.c | 4 +
src/backend/optimizer/util/clauses.c | 581 ++++++++++++++++++++++++++++++
src/backend/parser/analyze.c | 10 +
src/backend/parser/parse_func.c | 5 +
src/backend/parser/parse_oper.c | 5 +
src/include/nodes/parsenodes.h | 6 +-
src/include/optimizer/clauses.h | 2 +
src/include/parser/parse_node.h | 1 +
15 files changed, 639 insertions(+), 1 deletion(-)
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 69bf65d..8896455 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -2420,6 +2420,13 @@ ExecEvalFunc(FuncExprState *fcache,
init_fcache(func->funcid, func->inputcollid, fcache,
econtext->ecxt_per_query_memory, true);
+ if (fcache->func.fn_retset)
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+ }
+
/*
* We need to invoke ExecMakeFunctionResult if either the function itself
* or any of its input expressions can return a set. Otherwise, invoke
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 3244c76..8418b80 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2150,6 +2150,7 @@ _copyRangeTblEntry(const RangeTblEntry *from)
COPY_BITMAPSET_FIELD(insertedCols);
COPY_BITMAPSET_FIELD(updatedCols);
COPY_NODE_FIELD(securityQuals);
+ COPY_NODE_FIELD(deps);
return newnode;
}
@@ -2719,6 +2720,7 @@ _copyQuery(const Query *from)
COPY_SCALAR_FIELD(hasModifyingCTE);
COPY_SCALAR_FIELD(hasForUpdate);
COPY_SCALAR_FIELD(hasRowSecurity);
+ COPY_SCALAR_FIELD(hasTargetSRF);
COPY_NODE_FIELD(cteList);
COPY_NODE_FIELD(rtable);
COPY_NODE_FIELD(jointree);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 1eb6799..b30bcc5 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2459,6 +2459,7 @@ _equalRangeTblEntry(const RangeTblEntry *a, const RangeTblEntry *b)
COMPARE_BITMAPSET_FIELD(insertedCols);
COMPARE_BITMAPSET_FIELD(updatedCols);
COMPARE_NODE_FIELD(securityQuals);
+ COMPARE_NODE_FIELD(deps);
return true;
}
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index acaf4ea..35eab0b 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2675,6 +2675,7 @@ _outQuery(StringInfo str, const Query *node)
WRITE_BOOL_FIELD(hasModifyingCTE);
WRITE_BOOL_FIELD(hasForUpdate);
WRITE_BOOL_FIELD(hasRowSecurity);
+ WRITE_BOOL_FIELD(hasTargetSRF);
WRITE_NODE_FIELD(cteList);
WRITE_NODE_FIELD(rtable);
WRITE_NODE_FIELD(jointree);
@@ -2852,6 +2853,7 @@ _outRangeTblEntry(StringInfo str, const RangeTblEntry *node)
WRITE_BITMAPSET_FIELD(insertedCols);
WRITE_BITMAPSET_FIELD(updatedCols);
WRITE_NODE_FIELD(securityQuals);
+ WRITE_NODE_FIELD(deps);
}
static void
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 94954dc..ca7d20c 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -244,6 +244,7 @@ _readQuery(void)
READ_BOOL_FIELD(hasModifyingCTE);
READ_BOOL_FIELD(hasForUpdate);
READ_BOOL_FIELD(hasRowSecurity);
+ READ_BOOL_FIELD(hasTargetSRF);
READ_NODE_FIELD(cteList);
READ_NODE_FIELD(rtable);
READ_NODE_FIELD(jointree);
@@ -1322,6 +1323,7 @@ _readRangeTblEntry(void)
READ_BITMAPSET_FIELD(insertedCols);
READ_BITMAPSET_FIELD(updatedCols);
READ_NODE_FIELD(securityQuals);
+ READ_NODE_FIELD(deps);
READ_DONE();
}
diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c
index 84ce6b3..ada34cc 100644
--- a/src/backend/optimizer/plan/initsplan.c
+++ b/src/backend/optimizer/plan/initsplan.c
@@ -339,6 +339,10 @@ extract_lateral_references(PlannerInfo *root, RelOptInfo *brel, Index rtindex)
return; /* keep compiler quiet */
}
+ /* DIRTY hack time, add dependency for targetlist SRFs */
+ vars = list_concat(vars,
+ pull_vars_of_level((Node *) rte->deps, 0));
+
if (vars == NIL)
return; /* nothing to do */
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index b265628..ffc1c85 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -490,6 +490,14 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
root->non_recursive_path = NULL;
/*
+ * Convert SRFs in targetlist into FUNCTION rtes. As this, if applicable,
+ * will move the main portion of the query into a subselect, this
+ * has to be done early on in subquery_planner().
+ */
+ if (parse->hasTargetSRF)
+ unsrfify(root);
+
+ /*
* If there is a WITH list, process each WITH query and build an initplan
* SubPlan structure for it.
*/
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index a334f15..0e06a98 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1982,6 +1982,10 @@ replace_vars_in_jointree(Node *jtnode,
Assert(false);
break;
}
+
+ rte->deps = (List *)
+ pullup_replace_vars((Node *) rte->deps,
+ context);
}
}
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index a69af7c..7e60694 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -36,6 +36,7 @@
#include "optimizer/cost.h"
#include "optimizer/planmain.h"
#include "optimizer/prep.h"
+#include "optimizer/tlist.h"
#include "optimizer/var.h"
#include "parser/analyze.h"
#include "parser/parse_agg.h"
@@ -94,6 +95,30 @@ typedef struct
bool allow_restricted;
} has_parallel_hazard_arg;
+typedef struct unsrfify_context
+{
+ PlannerInfo *root;
+ /* query being converted */
+ Query *outer_query;
+ /* created subquery */
+ Query *inner_query;
+ /* RT index of the above */
+ Index subquery_rti;
+
+ /* targetlist of the new subquery */
+ List *subquery_tlist;
+ List *subquery_colnames;
+
+ /* RTE for the currently generated function RTE */
+ RangeTblEntry *currte;
+ Index currti; /* and its RT index */
+ /* current column number in function RTE */
+ int coloff;
+
+ /* current target's resname during expression iteration */
+ char *current_resname;
+} unsrfify_context;
+
static bool contain_agg_clause_walker(Node *node, void *context);
static bool get_agg_clause_costs_walker(Node *node,
get_agg_clause_costs_context *context);
@@ -2251,6 +2276,562 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum,
return true;
}
+/*
+ * Push down expression into the subquery, return resno of targetlist entry.
+ */
+static int
+unsrfify_push_expr_to_subquery(Expr *expr, Index sortgroupref,
+ unsrfify_context *context)
+{
+ ListCell *tc;
+ int resno = 1;
+ char *resname = context->current_resname;
+ TargetEntry *new_te;
+
+ /*
+ * Check whether we already moved this expression to the subquery; if so,
+ * reuse it.
+ */
+ foreach(tc, context->subquery_tlist)
+ {
+ TargetEntry *te = (TargetEntry *) lfirst(tc);
+ Expr *oldexpr = te->expr;
+
+ if (equal(oldexpr, expr))
+ {
+ if (sortgroupref > 0)
+ {
+ if (te->ressortgroupref != sortgroupref &&
+ te->ressortgroupref > 0)
+ {
+ /* FIXME: might happen with duplicate expressions? */
+ elog(ERROR, "non-unique ressortgroupref?");
+ }
+ else
+ {
+ te->ressortgroupref = sortgroupref;
+ return resno;
+ }
+ }
+ return resno;
+ }
+ resno++;
+ }
+
+ /* XXX */
+ if (!resname)
+ resname = "...";
+
+ Assert(resno == list_length(context->subquery_tlist) + 1);
+
+ new_te = makeTargetEntry((Expr *) copyObject(expr),
+ resno, resname , false);
+ new_te->ressortgroupref = sortgroupref;
+ context->subquery_tlist = lappend(context->subquery_tlist, new_te);
+ context->subquery_colnames = lappend(context->subquery_colnames,
+ makeString(context->current_resname));
+
+ return resno;
+}
+
+/*
+ * Change target list to reference subquery.
+ *
+ * TargetEntries that don't contain a set-returning function are pushed down
+ * entirely, others are modified to have relevant expressions refer to (new)
+ * entries in the subquery targetlist.
+ */
+static Node *
+unsrfify_reference_subquery_mutator(Node *node, unsrfify_context *context)
+{
+ check_stack_depth();
+
+ if (node == NULL)
+ return NULL;
+
+ switch (nodeTag(node))
+ {
+ case T_TargetEntry:
+ {
+ TargetEntry *te = (TargetEntry *) node;
+
+ /*
+ * Note that we're intentionally pushing down sortgrouprefs,
+ * that way grouping et al will work. It's more than a bit
+ * debatable though to do this unconditionally: We'll
+ * currently end up with sortgrouprefs in both top-level and
+ * subquery.
+ */
+
+ /* XXX: naming here isn't great */
+ if (!te->resname)
+ context->current_resname = "...";
+ else
+ context->current_resname = pstrdup(te->resname);
+
+ /* if expression doesn't return set, push down entirely */
+ if (!expression_returns_set((Node *) te->expr))
+ {
+ AttrNumber resno =
+ unsrfify_push_expr_to_subquery(te->expr,
+ te->ressortgroupref,
+ context);
+ te = flatCopyTargetEntry(te);
+
+ te->expr = (Expr *) makeVar(context->subquery_rti,
+ resno,
+ exprType((Node *) te->expr),
+ exprTypmod((Node *) te->expr),
+ exprCollation((Node *) te->expr),
+ 0);
+ }
+ else
+ {
+ te = (TargetEntry *)
+ expression_tree_mutator((Node *) te,
+ unsrfify_reference_subquery_mutator,
+ (void *) context);
+ }
+
+ context->current_resname = NULL;
+ return (Node *) te;
+ }
+ break;
+ /* Anything additional? */
+ case T_Var:
+ case T_Aggref:
+ case T_GroupingFunc:
+ case T_WindowFunc:
+ case T_Param /* ? */:
+ /*
+ * Vars, aggrefs, groupingfuncs, ... come from the subquery into
+ * which the main query is being moved. For each reference in the
+ * main targetlist - containing the reference to the SRF and such
+ * - move the underlying clause as a separate TargetEntry into the
+ * subquery, and reference that.
+ *
+ * Note that varlevelsup for expressions in the subquery is later
+ * adjusted with IncrementVarSublevelsUp, together with the other
+ * expressions in the subquery.
+ */
+ {
+ AttrNumber resno =
+ unsrfify_push_expr_to_subquery((Expr *) node, 0, context);
+
+ return (Node *) makeVar(context->subquery_rti,
+ resno,
+ exprType(node),
+ exprTypmod(node),
+ exprCollation(node),
+ 0);
+ }
+ return node;
+ default:
+ break;
+ }
+
+ return expression_tree_mutator(node, unsrfify_reference_subquery_mutator,
+ (void *) context);
+}
+
+static Node *
+unsrfify_implement_srfs_mutator(Node *node, unsrfify_context *context)
+{
+ check_stack_depth();
+
+ if (node == NULL)
+ return NULL;
+ switch (nodeTag(node))
+ {
+ case T_OpExpr:
+ {
+ OpExpr *expr = (OpExpr *) node;
+
+ if (expr->opretset)
+ {
+ /*
+ * TODO: Hrmpf, implement. And why is there not a single
+ * test for this :(
+ */
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("XXX: SETOF record returning operators are not supported")));
+ }
+ }
+ break;
+
+ case T_FuncExpr:
+ {
+ FuncExpr *expr = (FuncExpr *) node;
+
+ /*
+ * For set returning functions, move them to the current
+ * level's ROWS FROM expression, and add a Var referencing
+ * that expression's result.
+ */
+ if (expr->funcretset)
+ {
+ RangeTblEntry *old_currte;
+ Index old_currti;
+ int old_coloff;
+
+ /*
+ * Process set-returning arguments to set-returning
+ * functions as a separate ROWS FROM expression, again
+ * laterally joined to this.
+ */
+ old_currte = context->currte;
+ old_currti = context->currti;
+ old_coloff = context->coloff;
+
+ context->currte = NULL;
+ context->currti = 0;
+ context->coloff = 0;
+
+ expr->args = (List *)
+ expression_tree_mutator((Node *) expr->args,
+ unsrfify_implement_srfs_mutator,
+ (void *) context);
+ context->currte = old_currte;
+ context->currti = old_currti;
+ context->coloff = old_coloff;
+
+ }
+ else
+ {
+ expr->args = (List *)
+ expression_tree_mutator((Node *) expr->args,
+ unsrfify_implement_srfs_mutator,
+ (void *) context);
+ }
+
+ if (expr->funcretset)
+ {
+ RangeTblEntry *rte;
+ RangeTblFunction *rtfunc;
+ RangeTblRef *rtf;
+ Index rti;
+ TypeFuncClass functypclass;
+ TupleDesc tupdesc;
+ Oid funcrettype;
+ /* FIXME: used in places it shouldn't */
+ char *funcname = get_func_name(expr->funcid);
+ int i;
+
+ functypclass = get_expr_result_type(node,
+ &funcrettype,
+ &tupdesc);
+
+ if (functypclass == TYPEFUNC_COMPOSITE)
+ {
+ /* Composite data type, e.g. a table's row type */
+ Assert(tupdesc);
+ }
+ else if (functypclass == TYPEFUNC_SCALAR)
+ {
+ /* Base data type, i.e. scalar */
+ tupdesc = CreateTemplateTupleDesc(1, false);
+ TupleDescInitEntry(tupdesc,
+ (AttrNumber) 1,
+ funcname,
+ funcrettype,
+ -1,
+ 0);
+ }
+ else if (functypclass == TYPEFUNC_RECORD)
+ {
+ /* Add ROWS FROM() feature to support this? */
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("XXX: SETOF record returning functions are not allowed in target list")));
+ }
+ else
+ {
+ Assert(false);
+ }
+
+ if (context->currte == NULL)
+ {
+ Alias *eref;
+
+ rte = makeNode(RangeTblEntry);
+ rte->rtekind = RTE_FUNCTION;
+ rte->lateral = true;
+ rte->inh = false;
+ rte->inFromCl = true;
+
+ eref = makeAlias(funcname, NIL);
+
+ rte->eref = eref;
+
+ rte->funcordinality = false;
+
+ /*
+ * DIRTY hack time: add LATERAL dependency to the
+ * subquery containing the original query. That forces
+ * the planner to evaluate the subquery first
+ * (i.e. nestloop subquery to SRF, not the other way
+ * round), persisting the output ordering of the SRF.
+ */
+ rte->deps = list_make1(makeVar(context->subquery_rti, 0, RECORDOID, -1, InvalidOid, 0));
+
+ context->outer_query->rtable =
+ lappend(context->outer_query->rtable, rte);
+
+ rti = list_length(context->outer_query->rtable);
+
+ rtf = makeNode(RangeTblRef);
+ rtf->rtindex = rti;
+
+ context->outer_query->jointree->fromlist =
+ lappend(context->outer_query->jointree->fromlist, rtf);
+
+ context->currte = rte;
+ context->currti = rti;
+ }
+ else
+ {
+ rte = context->currte;
+ rti = context->currti;
+ }
+
+ /* add SRF RTE */
+ rtfunc = makeNode(RangeTblFunction);
+ rtfunc->funcexpr = (Node *) expr;
+ rtfunc->funccolcount = tupdesc ? tupdesc->natts : 1;
+
+ rte->functions = lappend(rte->functions, rtfunc);
+
+ if (functypclass == TYPEFUNC_SCALAR)
+ {
+ rte->eref->colnames = lappend(rte->eref->colnames,
+ makeString(funcname));
+
+ /* replace reference to RTE */
+ return (Node *) makeVar(rti,
+ ++context->coloff,
+ funcrettype,
+ exprTypmod(node),
+ expr->funccollid,
+ 0);
+ }
+ else
+ {
+ /*
+ * targetlist SRFs returning a composite type have all
+ * columns in one field. ROWS FROM returns all columns
+ * separately. Construct a ROW(a,b,c, ...) expression,
+ * referring to the ROWS FROM expression output.
+ */
+ RowExpr *row = makeNode(RowExpr);
+
+ row->row_typeid = funcrettype;
+
+ for (i = 0; i < tupdesc->natts; i++)
+ {
+ Form_pg_attribute attr = tupdesc->attrs[i];
+ Var *var = makeVar(rti,
+ ++context->coloff,
+ attr->atttypid,
+ attr->atttypmod,
+ attr->attcollation,
+ 0);
+ row->args = lappend(row->args, var);
+ row->colnames = lappend(row->colnames,
+ makeString(pstrdup(NameStr(attr->attname))));
+ rte->eref->colnames = lappend(rte->eref->colnames,
+ makeString(pstrdup(NameStr(attr->attname))));
+ }
+
+ return (Node *) row;
+ }
+ }
+ }
+ break;
+ default:
+ break;
+ }
+
+ return expression_tree_mutator(node, unsrfify_implement_srfs_mutator,
+ (void *) context);
+}
+
+/*
+ * Implement set-returning-functions in the targetlist using ROWS FROM() in
+ * the from list.
+ */
+void
+unsrfify(PlannerInfo *root)
+{
+ unsrfify_context context;
+ Query *outer_query = root->parse;
+ List *outerOldTlist = root->parse->targetList;
+ bool sortContainsSRF = false;
+ Query *inner_query;
+ RangeTblEntry *rte;
+ RangeTblRef *rtf;
+ ListCell *lc;
+
+ /* skip work if targetlist doesn't contain an SRF */
+ if (!expression_returns_set((Node *) root->parse->targetList))
+ {
+ return;
+ }
+
+ inner_query = makeNode(Query);
+ rte = makeNode(RangeTblEntry);
+ rtf = makeNode(RangeTblRef);
+
+ memset(&context, 0, sizeof(context));
+ context.root = root;
+ context.outer_query = outer_query;
+ context.inner_query = inner_query;
+
+ /* check whether sorting has to be performed before/after SRF processing */
+ foreach(lc, root->parse->sortClause)
+ {
+ SortGroupClause *sgc = lfirst(lc);
+ Node *sortExpr = get_sortgroupclause_expr(sgc, root->parse->targetList);
+
+ if (expression_returns_set(sortExpr))
+ {
+ sortContainsSRF = true;
+ break;
+ }
+ }
+
+ /*
+ * Move main query processing into a subquery. Otherwise aggregates will
+ * possibly process more rows, due to the SRF expanding the result set. We
+ * could perform this work conditionally, but that seems like an
+ * unnecessary complication.
+ *
+ * If the query has an order-by, but that order-by does not reference SRF
+ * output, then SRF expansion should happen after the sort, for two
+ * reasons: firstly, to process fewer rows; secondly, to produce less
+ * confusing results if the output of the SRF is sorted.
+ */
+ rte->rtekind = RTE_SUBQUERY;
+ rte->subquery = inner_query;
+ rte->security_barrier = false;
+ context.subquery_rti = list_length(outer_query->rtable) + 1;
+ rtf->rtindex = context.subquery_rti;
+
+ inner_query->commandType = CMD_SELECT;
+ inner_query->querySource = QSRC_TARGETLIST_SRF;
+ inner_query->canSetTag = true;
+
+ /*
+ * Copy the range-table, without resetting it on the outside. If the outer
+ * query is a data-modifying one, resultRelation needs to point to the
+ * actually modified table. XXX: But that doesn't work at all for
+ * UPDATEs, because there expand_targetlist() will add Vars pointing to
+ * the result relation.
+ */
+ inner_query->rtable = copyObject(outer_query->rtable);
+
+ if (outer_query->commandType == CMD_UPDATE)
+ elog(ERROR, "what does this SRF mean anyway?");
+
+ inner_query->jointree = outer_query->jointree;
+
+ inner_query->hasAggs = outer_query->hasAggs;
+ outer_query->hasAggs = false; /* moved to subquery */
+
+ inner_query->hasWindowFuncs = outer_query->hasWindowFuncs; /* FIXME */
+ outer_query->hasWindowFuncs = false;
+
+ /* can still be present in outer query */
+ inner_query->hasSubLinks = outer_query->hasSubLinks;
+
+ /*
+ * CTEs stay on outer level, IncrementVarSublevelsUp adjusts ctelevelsup.
+ */
+ inner_query->hasRecursive = false;
+ inner_query->hasModifyingCTE = false;
+
+ inner_query->hasForUpdate = false;
+
+ inner_query->hasRowSecurity = outer_query->hasRowSecurity;
+
+ /* we've expanded everything */
+ outer_query->hasTargetSRF = false;
+
+ outer_query->rtable = lappend(outer_query->rtable, rte);
+
+ outer_query->jointree = makeFromExpr(list_make1(rtf), NULL);
+
+ /* targetlist is set later */
+
+ /* not modifying */
+ inner_query->onConflict = NULL;
+ inner_query->returningList = NIL;
+
+ /* transfer group / window related clauses to child */
+ inner_query->groupClause = outer_query->groupClause;
+ outer_query->groupClause = NIL;
+
+ inner_query->groupingSets = outer_query->groupingSets;
+ outer_query->groupingSets = NIL;
+
+ inner_query->havingQual = outer_query->havingQual;
+ outer_query->havingQual = NULL;
+
+ inner_query->windowClause = outer_query->windowClause;
+ outer_query->windowClause = NIL;
+
+ /* DISTINCT [ON] is computed outside */
+
+ /* sort is computed in the subquery, unless referencing SRF output */
+ /* XXX: what about combinations with DISTINCT? */
+ if (!sortContainsSRF && list_length(outer_query->sortClause) > 0)
+ {
+ inner_query->sortClause = outer_query->sortClause;
+ outer_query->sortClause = NIL;
+ }
+
+
+ /* limit is processed after SRF expansion */
+
+ /* XXX: where should row marks be processed? */
+
+ /* XXX: where should set operations be processed? */
+ inner_query->setOperations = outer_query->setOperations;
+ outer_query->setOperations = NULL;
+
+ /* constraints should stay on top level */
+
+ /* XXX: where should WITH CHECK options be processed? */
+
+ /*
+ * Update the outer query's targetlist to reference subquery for all
+ * Vars, Aggs and such.
+ */
+ outer_query->targetList = (List *)
+ unsrfify_reference_subquery_mutator((Node *) outerOldTlist,
+ &context);
+ /*
+ * Now convert all targetlist SRFs into FUNCTION RTEs.
+ */
+ outer_query->targetList = (List *)
+ unsrfify_implement_srfs_mutator((Node *) outer_query->targetList,
+ &context);
+
+
+ rte->eref = makeAlias("srf", context.subquery_colnames);
+
+ inner_query->targetList = context.subquery_tlist;
+
+ /*
+ * varlevelsup for expressions not local to the query (i.e. varlevelsup >
+ * 0) has to be increased by one, to adjust for the additional layer of
+ * subquery added. Do so after the above processing has populated the
+ * subselect's targetlist, to avoid having to deal with varlevelsup in
+ * multiple places.
+ */
+ IncrementVarSublevelsUp((Node *) inner_query, 1, 1);
+}
+
/*--------------------
* eval_const_expressions
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index eac86cc..4e0d095 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -418,6 +418,7 @@ transformDeleteStmt(ParseState *pstate, DeleteStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs)
parseCheckAggregates(pstate, qry);
@@ -820,6 +821,7 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasSubLinks = pstate->p_hasSubLinks;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
assign_query_collations(pstate, qry);
@@ -1232,6 +1234,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
@@ -1463,6 +1466,11 @@ transformValuesClause(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
+ if (pstate->p_hasTargetSRF)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
assign_query_collations(pstate, qry);
return qry;
@@ -1692,6 +1700,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
@@ -2171,6 +2180,7 @@ transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt)
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
qry->hasSubLinks = pstate->p_hasSubLinks;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
assign_query_collations(pstate, qry);
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index 61af484..770903d 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -625,6 +625,11 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
exprLocation((Node *) llast(fargs)))));
}
+ if (retset)
+ {
+ pstate->p_hasTargetSRF = true;
+ }
+
/* build the appropriate output structure */
if (fdresult == FUNCDETAIL_NORMAL)
{
diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c
index e913d05..0a1a0f1 100644
--- a/src/backend/parser/parse_oper.c
+++ b/src/backend/parser/parse_oper.c
@@ -841,6 +841,11 @@ make_op(ParseState *pstate, List *opname, Node *ltree, Node *rtree,
ReleaseSysCache(tup);
+ if (result->opretset)
+ {
+ pstate->p_hasTargetSRF = true;
+ }
+
return (Expr *) result;
}
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1481fff..2d3081e 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -32,7 +32,8 @@ typedef enum QuerySource
QSRC_PARSER, /* added by parse analysis (now unused) */
QSRC_INSTEAD_RULE, /* added by unconditional INSTEAD rule */
QSRC_QUAL_INSTEAD_RULE, /* added by conditional INSTEAD rule */
- QSRC_NON_INSTEAD_RULE /* added by non-INSTEAD rule */
+ QSRC_NON_INSTEAD_RULE, /* added by non-INSTEAD rule */
+ QSRC_TARGETLIST_SRF /* added by targetlist SRF processing */
} QuerySource;
/* Sort ordering options for ORDER BY and CREATE INDEX */
@@ -122,6 +123,7 @@ typedef struct Query
bool hasModifyingCTE; /* has INSERT/UPDATE/DELETE in WITH */
bool hasForUpdate; /* FOR [KEY] UPDATE/SHARE was specified */
bool hasRowSecurity; /* rewriter has applied some RLS policy */
+ bool hasTargetSRF; /* has SRF in target list */
List *cteList; /* WITH list (of CommonTableExpr's) */
@@ -871,6 +873,8 @@ typedef struct RangeTblEntry
Bitmapset *insertedCols; /* columns needing INSERT permission */
Bitmapset *updatedCols; /* columns needing UPDATE permission */
List *securityQuals; /* any security barrier quals to apply */
+
+ List *deps;
} RangeTblEntry;
/*
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index be7c639..71d4e12 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -78,6 +78,8 @@ extern int NumRelids(Node *clause);
extern void CommuteOpExpr(OpExpr *clause);
extern void CommuteRowCompareExpr(RowCompareExpr *clause);
+extern void unsrfify(PlannerInfo *root);
+
extern Node *eval_const_expressions(PlannerInfo *root, Node *node);
extern Node *estimate_expression_value(PlannerInfo *root, Node *node);
diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h
index e3e359c..c0eec33 100644
--- a/src/include/parser/parse_node.h
+++ b/src/include/parser/parse_node.h
@@ -152,6 +152,7 @@ struct ParseState
bool p_hasWindowFuncs;
bool p_hasSubLinks;
bool p_hasModifyingCTE;
+ bool p_hasTargetSRF;
bool p_is_insert;
bool p_locked_from_parent;
Relation p_target_relation;
--
2.8.1
Attachment: 0003-Remove-now-unused-tSRF-code.patch (text/x-patch; charset=us-ascii)
From 1da2ce38db1c10241d357db4b6dbed0002b456d3 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Wed, 3 Aug 2016 19:58:05 -0700
Subject: [PATCH 3/3] Remove now unused tSRF code.
Todo:
- check fmgr readme and comments for references
---
src/backend/catalog/index.c | 3 +-
src/backend/commands/copy.c | 2 +-
src/backend/commands/prepare.c | 3 +-
src/backend/commands/tablecmds.c | 3 +-
src/backend/commands/typecmds.c | 2 +-
src/backend/executor/execQual.c | 1125 +++++------------------------
src/backend/executor/execScan.c | 30 +-
src/backend/executor/execUtils.c | 6 -
src/backend/executor/nodeAgg.c | 45 +-
src/backend/executor/nodeBitmapHeapscan.c | 2 -
src/backend/executor/nodeCtescan.c | 2 -
src/backend/executor/nodeCustom.c | 2 -
src/backend/executor/nodeForeignscan.c | 2 -
src/backend/executor/nodeFunctionscan.c | 2 -
src/backend/executor/nodeGather.c | 25 +-
src/backend/executor/nodeGroup.c | 42 +-
src/backend/executor/nodeHash.c | 2 +-
src/backend/executor/nodeHashjoin.c | 52 +-
src/backend/executor/nodeIndexonlyscan.c | 2 -
src/backend/executor/nodeIndexscan.c | 11 +-
src/backend/executor/nodeLimit.c | 6 +-
src/backend/executor/nodeMergejoin.c | 59 +-
src/backend/executor/nodeModifyTable.c | 4 +-
src/backend/executor/nodeNestloop.c | 41 +-
src/backend/executor/nodeResult.c | 33 +-
src/backend/executor/nodeSamplescan.c | 8 +-
src/backend/executor/nodeSeqscan.c | 2 -
src/backend/executor/nodeSubplan.c | 31 +-
src/backend/executor/nodeSubqueryscan.c | 2 -
src/backend/executor/nodeTidscan.c | 8 +-
src/backend/executor/nodeValuesscan.c | 5 +-
src/backend/executor/nodeWindowAgg.c | 55 +-
src/backend/executor/nodeWorktablescan.c | 2 -
src/backend/optimizer/util/clauses.c | 2 +-
src/backend/optimizer/util/predtest.c | 2 +-
src/backend/utils/adt/domains.c | 2 +-
src/backend/utils/adt/xml.c | 4 +-
src/include/executor/executor.h | 9 +-
src/include/nodes/execnodes.h | 16 +-
src/pl/plpgsql/src/pl_exec.c | 5 +-
40 files changed, 242 insertions(+), 1417 deletions(-)
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 7b30e46..0be278f 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1788,8 +1788,7 @@ FormIndexDatum(IndexInfo *indexInfo,
elog(ERROR, "wrong number of index expressions");
iDatum = ExecEvalExprSwitchContext((ExprState *) lfirst(indexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
indexpr_item = lnext(indexpr_item);
}
values[i] = iDatum;
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index f45b330..28466ac 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -3172,7 +3172,7 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
Assert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);
values[defmap[i]] = ExecEvalExpr(defexprs[i], econtext,
- &nulls[defmap[i]], NULL);
+ &nulls[defmap[i]]);
}
return true;
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index cec37ce..451c8d5 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -404,8 +404,7 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
prm->pflags = PARAM_FLAG_CONST;
prm->value = ExecEvalExprSwitchContext(n,
GetPerTupleExprContext(estate),
- &prm->isnull,
- NULL);
+ &prm->isnull);
i++;
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 86e9814..92e468d 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4151,8 +4151,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
values[ex->attnum - 1] = ExecEvalExpr(ex->exprstate,
econtext,
- &isnull[ex->attnum - 1],
- NULL);
+ &isnull[ex->attnum - 1]);
}
/*
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index ce04211..755af68 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2741,7 +2741,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin)
conResult = ExecEvalExprSwitchContext(exprstate,
econtext,
- &isNull, NULL);
+ &isNull);
if (!isNull && !DatumGetBool(conResult))
{
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 8896455..41f7307 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -62,128 +62,119 @@
/* static function decls */
static Datum ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static bool isAssignmentIndirectionExpr(ExprState *exprstate);
static Datum ExecEvalAggref(AggrefExprState *aggref,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWindowFunc(WindowFuncExprState *wfunc,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static void init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets);
-static void ShutdownFuncExpr(Datum arg);
+ MemoryContext fcacheCxt);
static TupleDesc get_cached_rowtype(Oid type_id, int32 typmod,
TupleDesc *cache_field, ExprContext *econtext);
static void ShutdownTupleDescRef(Datum arg);
-static ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
+static void ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList, ExprContext *econtext);
-static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
- ExprContext *econtext,
- Tuplestorestate *resultStore,
- TupleDesc resultDesc);
static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
-static Datum ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFunc(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOper(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalDistinct(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArray(ArrayExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoalesce(CoalesceExprState *coalesceExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalMinMax(MinMaxExprState *minmaxExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomain(CoerceToDomainState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
/* ----------------------------------------------------------------
@@ -194,8 +185,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* Each of the following routines having the signature
* Datum ExecEvalFoo(ExprState *expression,
* ExprContext *econtext,
- * bool *isNull,
- * ExprDoneCond *isDone);
+ * bool *isNull);
* is responsible for evaluating one type or subtype of ExprState node.
* They are normally called via the ExecEvalExpr macro, which makes use of
* the function pointer set up when the ExprState node was built by
@@ -219,22 +209,6 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* return value: Datum value of result
* *isNull: set to TRUE if result is NULL (actual return value is
* meaningless if so); set to FALSE if non-null result
- * *isDone: set to indicator of set-result status
- *
- * A caller that can only accept a singleton (non-set) result should pass
- * NULL for isDone; if the expression computes a set result then an error
- * will be reported via ereport. If the caller does pass an isDone pointer
- * then *isDone is set to one of these three states:
- * ExprSingleResult singleton result (not a set)
- * ExprMultipleResult return value is one element of a set
- * ExprEndResult there are no more elements in the set
- * When ExprMultipleResult is returned, the caller should invoke
- * ExecEvalExpr() repeatedly until ExprEndResult is returned. ExprEndResult
- * is returned after the last real set element. For convenience isNull will
- * always be set TRUE when ExprEndResult is returned, but this should not be
- * taken as indicating a NULL element of the set. Note that these return
- * conventions allow us to distinguish among a singleton NULL, a NULL element
- * of a set, and an empty set.
*
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
@@ -259,8 +233,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
static Datum
ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayRef *arrayRef = (ArrayRef *) astate->xprstate.expr;
Datum array_source;
@@ -277,8 +250,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
array_source = ExecEvalExpr(astate->refexpr,
econtext,
- isNull,
- isDone);
+ isNull);
/*
* If refexpr yields NULL, and it's a fetch, then result is NULL. In the
@@ -286,8 +258,6 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
if (*isNull)
{
- if (isDone && *isDone == ExprEndResult)
- return (Datum) NULL; /* end of set result */
if (!isAssignment)
return (Datum) NULL;
}
@@ -313,8 +283,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -349,8 +318,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -437,8 +405,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
sourceData = ExecEvalExpr(astate->refassgnexpr,
econtext,
- &eisnull,
- NULL);
+ &eisnull);
econtext->caseValue_datum = save_datum;
econtext->caseValue_isNull = save_isNull;
@@ -541,11 +508,8 @@ isAssignmentIndirectionExpr(ExprState *exprstate)
*/
static Datum
ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no aggregates in this expression context");
@@ -562,11 +526,8 @@ ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
*/
static Datum
ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no window functions in this expression context");
@@ -587,15 +548,12 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -676,15 +634,12 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -724,7 +679,7 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -732,9 +687,6 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
MemoryContext oldcontext;
bool needslow = false;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* This was checked by ExecInitExpr */
Assert(variable->varattno == InvalidAttrNumber);
@@ -940,7 +892,7 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
/* Fetch the value */
return (*wrvstate->xprstate.evalfunc) ((ExprState *) wrvstate, econtext,
- isNull, isDone);
+ isNull);
}
/* ----------------------------------------------------------------
@@ -951,14 +903,12 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
HeapTupleHeader dtuple;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1007,7 +957,7 @@ ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -1017,8 +967,6 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
HeapTupleHeader dtuple;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1096,13 +1044,10 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Const *con = (Const *) exprstate->expr;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = con->constisnull;
return con->constvalue;
}
@@ -1115,15 +1060,12 @@ ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamExecData *prm;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXEC params (internal executor parameters) are stored in the
* ecxt_param_exec_vals array, and can be accessed by array index.
@@ -1148,15 +1090,12 @@ ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamListInfo paramInfo = econtext->ecxt_param_list_info;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXTERN parameters must be sought in ecxt_param_list_info.
*/
@@ -1322,7 +1261,7 @@ GetAttributeByName(HeapTupleHeader tuple, const char *attname, bool *isNull)
*/
static void
init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets)
+ MemoryContext fcacheCxt)
{
AclResult aclresult;
@@ -1355,88 +1294,9 @@ init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
list_length(fcache->args),
input_collation, NULL, NULL);
- /* If function returns set, prepare expected tuple descriptor */
- if (fcache->func.fn_retset && needDescForSets)
- {
- TypeFuncClass functypclass;
- Oid funcrettype;
- TupleDesc tupdesc;
- MemoryContext oldcontext;
-
- functypclass = get_expr_result_type(fcache->func.fn_expr,
- &funcrettype,
- &tupdesc);
-
- /* Must save tupdesc in fcache's context */
- oldcontext = MemoryContextSwitchTo(fcacheCxt);
-
- if (functypclass == TYPEFUNC_COMPOSITE)
- {
- /* Composite data type, e.g. a table's row type */
- Assert(tupdesc);
- /* Must copy it out of typcache for safety */
- fcache->funcResultDesc = CreateTupleDescCopy(tupdesc);
- fcache->funcReturnsTuple = true;
- }
- else if (functypclass == TYPEFUNC_SCALAR)
- {
- /* Base data type, i.e. scalar */
- tupdesc = CreateTemplateTupleDesc(1, false);
- TupleDescInitEntry(tupdesc,
- (AttrNumber) 1,
- NULL,
- funcrettype,
- -1,
- 0);
- fcache->funcResultDesc = tupdesc;
- fcache->funcReturnsTuple = false;
- }
- else if (functypclass == TYPEFUNC_RECORD)
- {
- /* This will work if function doesn't need an expectedDesc */
- fcache->funcResultDesc = NULL;
- fcache->funcReturnsTuple = true;
- }
- else
- {
- /* Else, we will fail if function needs an expectedDesc */
- fcache->funcResultDesc = NULL;
- }
-
- MemoryContextSwitchTo(oldcontext);
- }
- else
- fcache->funcResultDesc = NULL;
-
/* Initialize additional state */
fcache->funcResultStore = NULL;
fcache->funcResultSlot = NULL;
- fcache->setArgsValid = false;
- fcache->shutdown_reg = false;
-}
-
-/*
- * callback function in case a FuncExpr returning a set needs to be shut down
- * before it has been run to completion
- */
-static void
-ShutdownFuncExpr(Datum arg)
-{
- FuncExprState *fcache = (FuncExprState *) DatumGetPointer(arg);
-
- /* If we have a slot, make sure it's let go of any tuplestore pointer */
- if (fcache->funcResultSlot)
- ExecClearTuple(fcache->funcResultSlot);
-
- /* Release any open tuplestore */
- if (fcache->funcResultStore)
- tuplestore_end(fcache->funcResultStore);
- fcache->funcResultStore = NULL;
-
- /* Clear any active set-argument state */
- fcache->setArgsValid = false;
-
- /* execUtils will deregister the callback... */
fcache->shutdown_reg = false;
}
@@ -1498,123 +1358,26 @@ ShutdownTupleDescRef(Datum arg)
/*
* Evaluate arguments for a function.
*/
-static ExprDoneCond
+static void
ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList,
ExprContext *econtext)
{
- ExprDoneCond argIsDone;
int i;
ListCell *arg;
- argIsDone = ExprSingleResult; /* default assumption */
-
i = 0;
foreach(arg, argList)
{
ExprState *argstate = (ExprState *) lfirst(arg);
- ExprDoneCond thisArgIsDone;
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- &thisArgIsDone);
-
- if (thisArgIsDone != ExprSingleResult)
- {
- /*
- * We allow only one argument to have a set value; we'd need much
- * more complexity to keep track of multiple set arguments (cf.
- * ExecTargetList) and it doesn't seem worth it.
- */
- if (argIsDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("functions and operators can take at most one set argument")));
- argIsDone = thisArgIsDone;
- }
+ &fcinfo->argnull[i]);
i++;
}
Assert(i == fcinfo->nargs);
-
- return argIsDone;
-}
-
-/*
- * ExecPrepareTuplestoreResult
- *
- * Subroutine for ExecMakeFunctionResult: prepare to extract rows from a
- * tuplestore function result. We must set up a funcResultSlot (unless
- * already done in a previous call cycle) and verify that the function
- * returned the expected tuple descriptor.
- */
-static void
-ExecPrepareTuplestoreResult(FuncExprState *fcache,
- ExprContext *econtext,
- Tuplestorestate *resultStore,
- TupleDesc resultDesc)
-{
- fcache->funcResultStore = resultStore;
-
- if (fcache->funcResultSlot == NULL)
- {
- /* Create a slot so we can read data out of the tuplestore */
- TupleDesc slotDesc;
- MemoryContext oldcontext;
-
- oldcontext = MemoryContextSwitchTo(fcache->func.fn_mcxt);
-
- /*
- * If we were not able to determine the result rowtype from context,
- * and the function didn't return a tupdesc, we have to fail.
- */
- if (fcache->funcResultDesc)
- slotDesc = fcache->funcResultDesc;
- else if (resultDesc)
- {
- /* don't assume resultDesc is long-lived */
- slotDesc = CreateTupleDescCopy(resultDesc);
- }
- else
- {
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("function returning setof record called in "
- "context that cannot accept type record")));
- slotDesc = NULL; /* keep compiler quiet */
- }
-
- fcache->funcResultSlot = MakeSingleTupleTableSlot(slotDesc);
- MemoryContextSwitchTo(oldcontext);
- }
-
- /*
- * If function provided a tupdesc, cross-check it. We only really need to
- * do this for functions returning RECORD, but might as well do it always.
- */
- if (resultDesc)
- {
- if (fcache->funcResultDesc)
- tupledesc_match(fcache->funcResultDesc, resultDesc);
-
- /*
- * If it is a dynamically-allocated TupleDesc, free it: it is
- * typically allocated in a per-query context, so we must avoid
- * leaking it across multiple usages.
- */
- if (resultDesc->tdrefcount == -1)
- FreeTupleDesc(resultDesc);
- }
-
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
}
/*
@@ -1668,330 +1431,15 @@ tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc)
}
/*
- * ExecMakeFunctionResult
- *
- * Evaluate the arguments to a function and then the function itself.
- * init_fcache is presumed already run on the FuncExprState.
- *
- * This function handles the most general case, wherein the function or
- * one of its arguments can return a set.
- */
-static Datum
-ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
-{
- List *arguments;
- Datum result;
- FunctionCallInfo fcinfo;
- PgStat_FunctionCallUsage fcusage;
- ReturnSetInfo rsinfo; /* for functions returning sets */
- ExprDoneCond argDone;
- bool hasSetArg;
- int i;
-
-restart:
-
- /* Guard against stack overflow due to overly complex expressions */
- check_stack_depth();
-
- /*
- * If a previous call of the function returned a set result in the form of
- * a tuplestore, continue reading rows from the tuplestore until it's
- * empty.
- */
- if (fcache->funcResultStore)
- {
- Assert(isDone); /* it was provided before ... */
- if (tuplestore_gettupleslot(fcache->funcResultStore, true, false,
- fcache->funcResultSlot))
- {
- *isDone = ExprMultipleResult;
- if (fcache->funcReturnsTuple)
- {
- /* We must return the whole tuple as a Datum. */
- *isNull = false;
- return ExecFetchSlotTupleDatum(fcache->funcResultSlot);
- }
- else
- {
- /* Extract the first column and return it as a scalar. */
- return slot_getattr(fcache->funcResultSlot, 1, isNull);
- }
- }
- /* Exhausted the tuplestore, so clean up */
- tuplestore_end(fcache->funcResultStore);
- fcache->funcResultStore = NULL;
- /* We are done unless there was a set-valued argument */
- if (!fcache->setHasSetArg)
- {
- *isDone = ExprEndResult;
- *isNull = true;
- return (Datum) 0;
- }
- /* If there was, continue evaluating the argument values */
- Assert(!fcache->setArgsValid);
- }
-
- /*
- * arguments is a list of expressions to evaluate before passing to the
- * function manager. We skip the evaluation if it was already done in the
- * previous call (ie, we are continuing the evaluation of a set-valued
- * function). Otherwise, collect the current argument values into fcinfo.
- */
- fcinfo = &fcache->fcinfo_data;
- arguments = fcache->args;
- if (!fcache->setArgsValid)
- {
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
- if (argDone == ExprEndResult)
- {
- /* input is an empty set, so return an empty set. */
- *isNull = true;
- if (isDone)
- *isDone = ExprEndResult;
- else
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- return (Datum) 0;
- }
- hasSetArg = (argDone != ExprSingleResult);
- }
- else
- {
- /* Re-use callinfo from previous evaluation */
- hasSetArg = fcache->setHasSetArg;
- /* Reset flag (we may set it again below) */
- fcache->setArgsValid = false;
- }
-
- /*
- * Now call the function, passing the evaluated parameter values.
- */
- if (fcache->func.fn_retset || hasSetArg)
- {
- /*
- * We need to return a set result. Complain if caller not ready to
- * accept one.
- */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
- /*
- * Prepare a resultinfo node for communication. If the function
- * doesn't itself return set, we don't pass the resultinfo to the
- * function, but we need to fill it in anyway for internal use.
- */
- if (fcache->func.fn_retset)
- fcinfo->resultinfo = (Node *) &rsinfo;
- rsinfo.type = T_ReturnSetInfo;
- rsinfo.econtext = econtext;
- rsinfo.expectedDesc = fcache->funcResultDesc;
- rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
- /* note we do not set SFRM_Materialize_Random or _Preferred */
- rsinfo.returnMode = SFRM_ValuePerCall;
- /* isDone is filled below */
- rsinfo.setResult = NULL;
- rsinfo.setDesc = NULL;
-
- /*
- * This loop handles the situation where we have both a set argument
- * and a set-valued function. Once we have exhausted the function's
- * value(s) for a particular argument value, we have to get the next
- * argument value and start the function over again. We might have to
- * do it more than once, if the function produces an empty result set
- * for a particular input value.
- */
- for (;;)
- {
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function (at least for this set of args).
- */
- bool callit = true;
-
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- callit = false;
- break;
- }
- }
- }
-
- if (callit)
- {
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- rsinfo.isDone = ExprSingleResult;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
- *isDone = rsinfo.isDone;
-
- pgstat_end_function_usage(&fcusage,
- rsinfo.isDone != ExprMultipleResult);
- }
- else if (fcache->func.fn_retset)
- {
- /* for a strict SRF, result for NULL is an empty set */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprEndResult;
- }
- else
- {
- /* for a strict non-SRF, result for NULL is a NULL */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprSingleResult;
- }
-
- /* Which protocol does function want to use? */
- if (rsinfo.returnMode == SFRM_ValuePerCall)
- {
- if (*isDone != ExprEndResult)
- {
- /*
- * Got a result from current argument. If function itself
- * returns set, save the current argument values to re-use
- * on the next call.
- */
- if (fcache->func.fn_retset &&
- *isDone == ExprMultipleResult)
- {
- fcache->setHasSetArg = hasSetArg;
- fcache->setArgsValid = true;
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
- }
-
- /*
- * Make sure we say we are returning a set, even if the
- * function itself doesn't return sets.
- */
- if (hasSetArg)
- *isDone = ExprMultipleResult;
- break;
- }
- }
- else if (rsinfo.returnMode == SFRM_Materialize)
- {
- /* check we're on the same page as the function author */
- if (rsinfo.isDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("table-function protocol for materialize mode was not followed")));
- if (rsinfo.setResult != NULL)
- {
- /* prepare to return values from the tuplestore */
- ExecPrepareTuplestoreResult(fcache, econtext,
- rsinfo.setResult,
- rsinfo.setDesc);
- /* remember whether we had set arguments */
- fcache->setHasSetArg = hasSetArg;
- /* loop back to top to start returning from tuplestore */
- goto restart;
- }
- /* if setResult was left null, treat it as empty set */
- *isDone = ExprEndResult;
- *isNull = true;
- result = (Datum) 0;
- }
- else
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("unrecognized table-function returnMode: %d",
- (int) rsinfo.returnMode)));
-
- /* Else, done with this argument */
- if (!hasSetArg)
- break; /* input not a set, so done */
-
- /* Re-eval args to get the next element of the input set */
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
-
- if (argDone != ExprMultipleResult)
- {
- /* End of argument set, so we're done. */
- *isNull = true;
- *isDone = ExprEndResult;
- result = (Datum) 0;
- break;
- }
-
- /*
- * If we reach here, loop around to run the function on the new
- * argument.
- */
- }
- }
- else
- {
- /*
- * Non-set case: much easier.
- *
- * In common cases, this code path is unreachable because we'd have
- * selected ExecMakeFunctionResultNoSets instead. However, it's
- * possible to get here if an argument sometimes produces set results
- * and sometimes scalar results. For example, a CASE expression might
- * call a set-returning function in only some of its arms.
- */
- if (isDone)
- *isDone = ExprSingleResult;
-
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function and return NULL.
- */
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- *isNull = true;
- return (Datum) 0;
- }
- }
- }
-
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
-
- pgstat_end_function_usage(&fcusage, true);
- }
-
- return result;
-}
-
-/*
* ExecMakeFunctionResultNoSets
*
- * Simplified version of ExecMakeFunctionResult that can only handle
- * non-set cases. Hand-tuned for speed.
+ * Portion of ExecMakeFunctionResult that needs no first-call initialization.
+ * Hand-tuned for speed.
*/
static Datum
ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
Datum result;
@@ -2002,9 +1450,6 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
/* Guard against stack overflow due to overly complex expressions */
check_stack_depth();
- if (isDone)
- *isDone = ExprSingleResult;
-
/* inlined, simplified version of ExecEvalFuncArgs */
fcinfo = &fcache->fcinfo_data;
i = 0;
@@ -2014,8 +1459,7 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -2112,7 +1556,6 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
IsA(funcexpr->expr, FuncExpr))
{
FuncExprState *fcache = (FuncExprState *) funcexpr;
- ExprDoneCond argDone;
/*
* This path is similar to ExecMakeFunctionResult.
@@ -2127,7 +1570,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, false);
+ econtext->ecxt_per_query_memory);
}
returnsSet = fcache->func.fn_retset;
InitFunctionCallInfoData(fcinfo, &(fcache->func),
@@ -2147,15 +1590,9 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
*/
MemoryContextReset(argContext);
oldcontext = MemoryContextSwitchTo(argContext);
- argDone = ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
+ ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
MemoryContextSwitchTo(oldcontext);
- /* We don't allow sets in the arguments of the table function */
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
/*
* If function is strict, and there are any NULL arguments, skip
* calling the function and act like it returned NULL (or an empty
@@ -2215,8 +1652,9 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
}
else
{
+ rsinfo.isDone = ExprSingleResult;
result = ExecEvalExpr(funcexpr, econtext,
- &fcinfo.isnull, &rsinfo.isDone);
+ &fcinfo.isnull);
}
/* Which protocol does function want to use? */
@@ -2410,15 +1848,14 @@ no_function_result:
static Datum
ExecEvalFunc(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
if (fcache->func.fn_retset)
{
@@ -2427,22 +1864,9 @@ ExecEvalFunc(FuncExprState *fcache,
errmsg("set-valued function called in context that cannot accept a set")));
}
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) func->args))
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
- }
+ /* Change the evalfunc pointer to skip the above initialization. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2452,32 +1876,25 @@ ExecEvalFunc(FuncExprState *fcache,
static Datum
ExecEvalOper(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
OpExpr *op = (OpExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) op->args))
+ if (fcache->func.fn_retset)
{
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
}
+
+ /* Change the evalfunc pointer to skip the above initialization. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2494,17 +1911,13 @@ ExecEvalOper(FuncExprState *fcache,
static Datum
ExecEvalDistinct(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2514,7 +1927,7 @@ ExecEvalDistinct(FuncExprState *fcache,
DistinctExpr *op = (DistinctExpr *) fcache->xprstate.expr;
init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!fcache->func.fn_retset);
}
@@ -2522,11 +1935,7 @@ ExecEvalDistinct(FuncExprState *fcache,
* Evaluate arguments
*/
fcinfo = &fcache->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("IS DISTINCT FROM does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
Assert(fcinfo->nargs == 2);
if (fcinfo->argnull[0] && fcinfo->argnull[1])
@@ -2562,7 +1971,7 @@ ExecEvalDistinct(FuncExprState *fcache,
static Datum
ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ScalarArrayOpExpr *opexpr = (ScalarArrayOpExpr *) sstate->fxprstate.xprstate.expr;
bool useOr = opexpr->useOr;
@@ -2571,7 +1980,6 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
Datum result;
bool resultnull;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
int i;
int16 typlen;
bool typbyval;
@@ -2582,8 +1990,6 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2591,7 +1997,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
if (sstate->fxprstate.func.fn_oid == InvalidOid)
{
init_fcache(opexpr->opfuncid, opexpr->inputcollid, &sstate->fxprstate,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!sstate->fxprstate.func.fn_retset);
}
@@ -2599,11 +2005,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
* Evaluate arguments
*/
fcinfo = &sstate->fxprstate.fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("op ANY/ALL (array) does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
Assert(fcinfo->nargs == 2);
/*
@@ -2749,15 +2151,12 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
*/
static Datum
ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ExprState *clause = linitial(notclause->args);
Datum expr_value;
- if (isDone)
- *isDone = ExprSingleResult;
-
- expr_value = ExecEvalExpr(clause, econtext, isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, isNull);
/*
* if the expression evaluates to null, then we just cascade the null back
@@ -2779,15 +2178,12 @@ ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
*/
static Datum
ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = orExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2808,7 +2204,7 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null true result, then return it.
@@ -2830,15 +2226,12 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
*/
static Datum
ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = andExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2855,7 +2248,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null false result, then return it.
@@ -2881,7 +2274,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
static Datum
ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ConvertRowtypeExpr *convert = (ConvertRowtypeExpr *) cstate->xprstate.expr;
HeapTuple result;
@@ -2889,7 +2282,7 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
HeapTupleHeader tuple;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull);
/* this test covers the isDone exception too: */
if (*isNull)
@@ -2965,16 +2358,13 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
*/
static Datum
ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = caseExpr->args;
ListCell *clause;
Datum save_datum;
bool save_isNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* If there's a test expression, we have to evaluate it and save the value
* where the CaseTestExpr placeholders can find it. We must save and
@@ -2988,8 +2378,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
{
econtext->caseValue_datum = ExecEvalExpr(caseExpr->arg,
econtext,
- &econtext->caseValue_isNull,
- NULL);
+ &econtext->caseValue_isNull);
}
/*
@@ -3004,8 +2393,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
clause_value = ExecEvalExpr(wclause->expr,
econtext,
- isNull,
- NULL);
+ isNull);
/*
* if we have a true test, then we return the result, since the case
@@ -3018,8 +2406,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
econtext->caseValue_isNull = save_isNull;
return ExecEvalExpr(wclause->result,
econtext,
- isNull,
- isDone);
+ isNull);
}
}
@@ -3030,8 +2417,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
{
return ExecEvalExpr(caseExpr->defresult,
econtext,
- isNull,
- isDone);
+ isNull);
}
*isNull = true;
@@ -3046,10 +2432,8 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
static Datum
ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->caseValue_isNull;
return econtext->caseValue_datum;
}
@@ -3066,17 +2450,13 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
static Datum
ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
int result = 0;
int attnum = 0;
Bitmapset *grouped_cols = gstate->aggstate->grouped_cols;
ListCell *lc;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = false;
foreach(lc, (gstate->clauses))
@@ -3098,7 +2478,7 @@ ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
*/
static Datum
ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayExpr *arrayExpr = (ArrayExpr *) astate->xprstate.expr;
ArrayType *result;
@@ -3110,8 +2490,6 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
if (!arrayExpr->multidims)
{
@@ -3136,7 +2514,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(element);
- dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i], NULL);
+ dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i]);
i++;
}
@@ -3186,7 +2564,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
ArrayType *array;
int this_ndims;
- arraydatum = ExecEvalExpr(e, econtext, &eisnull, NULL);
+ arraydatum = ExecEvalExpr(e, econtext, &eisnull);
/* temporarily ignore null subarrays */
if (eisnull)
{
@@ -3325,7 +2703,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
static Datum
ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
HeapTuple tuple;
Datum *values;
@@ -3336,8 +2714,6 @@ ExecEvalRow(RowExprState *rstate,
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Allocate workspace */
natts = rstate->tupdesc->natts;
@@ -3353,7 +2729,7 @@ ExecEvalRow(RowExprState *rstate,
{
ExprState *e = (ExprState *) lfirst(arg);
- values[i] = ExecEvalExpr(e, econtext, &isnull[i], NULL);
+ values[i] = ExecEvalExpr(e, econtext, &isnull[i]);
i++;
}
@@ -3372,7 +2748,7 @@ ExecEvalRow(RowExprState *rstate,
static Datum
ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
bool result;
RowCompareType rctype = ((RowCompareExpr *) rstate->xprstate.expr)->rctype;
@@ -3381,8 +2757,6 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
ListCell *r;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
i = 0;
@@ -3396,9 +2770,9 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
rstate->collations[i],
NULL, NULL);
locfcinfo.arg[0] = ExecEvalExpr(le, econtext,
- &locfcinfo.argnull[0], NULL);
+ &locfcinfo.argnull[0]);
locfcinfo.arg[1] = ExecEvalExpr(re, econtext,
- &locfcinfo.argnull[1], NULL);
+ &locfcinfo.argnull[1]);
if (rstate->funcs[i].fn_strict &&
(locfcinfo.argnull[0] || locfcinfo.argnull[1]))
return (Datum) 0; /* force NULL result */
@@ -3442,20 +2816,17 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
*/
static Datum
ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Simply loop through until something NOT NULL is found */
foreach(arg, coalesceExpr->args)
{
ExprState *e = (ExprState *) lfirst(arg);
Datum value;
- value = ExecEvalExpr(e, econtext, isNull, NULL);
+ value = ExecEvalExpr(e, econtext, isNull);
if (!*isNull)
return value;
}
@@ -3471,7 +2842,7 @@ ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
*/
static Datum
ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
MinMaxExpr *minmax = (MinMaxExpr *) minmaxExpr->xprstate.expr;
@@ -3480,8 +2851,6 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
FunctionCallInfoData locfcinfo;
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
InitFunctionCallInfoData(locfcinfo, &minmaxExpr->cfunc, 2,
@@ -3496,7 +2865,7 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
bool valueIsNull;
int32 cmpresult;
- value = ExecEvalExpr(e, econtext, &valueIsNull, NULL);
+ value = ExecEvalExpr(e, econtext, &valueIsNull);
if (valueIsNull)
continue; /* ignore NULL inputs */
@@ -3531,7 +2900,7 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
*/
static Datum
ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
XmlExpr *xexpr = (XmlExpr *) xmlExpr->xprstate.expr;
Datum value;
@@ -3539,8 +2908,6 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ListCell *arg;
ListCell *narg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
switch (xexpr->op)
@@ -3553,7 +2920,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(arg);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
values = lappend(values, DatumGetPointer(value));
}
@@ -3578,7 +2945,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ExprState *e = (ExprState *) lfirst(arg);
char *argname = strVal(lfirst(narg));
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
{
appendStringInfo(&buf, "<%s>%s</%s>",
@@ -3621,13 +2988,13 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 2);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetTextP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull) /* probably can't happen */
return (Datum) 0;
preserve_whitespace = DatumGetBool(value);
@@ -3651,7 +3018,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
if (xmlExpr->args)
{
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
arg = NULL;
else
@@ -3678,20 +3045,20 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 3);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetXmlP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
version = NULL;
else
version = DatumGetTextP(value);
e = (ExprState *) lthird(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
standalone = DatumGetInt32(value);
*isNull = false;
@@ -3710,7 +3077,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
@@ -3728,7 +3095,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
else
@@ -3755,14 +3122,10 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
static Datum
ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
-
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -3772,7 +3135,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
NullIfExpr *op = (NullIfExpr *) nullIfExpr->xprstate.expr;
init_fcache(op->opfuncid, op->inputcollid, nullIfExpr,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!nullIfExpr->func.fn_retset);
}
@@ -3780,11 +3143,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
* Evaluate arguments
*/
fcinfo = &nullIfExpr->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("NULLIF does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
Assert(fcinfo->nargs == 2);
/* if either argument is NULL they can't be equal */
@@ -3814,16 +3173,12 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
static Datum
ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
NullTest *ntest = (NullTest *) nstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(nstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(nstate->arg, econtext, isNull);
if (ntest->argisrow && !(*isNull))
{
@@ -3923,16 +3278,12 @@ ExecEvalNullTest(NullTestState *nstate,
static Datum
ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
BooleanTest *btest = (BooleanTest *) bstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(bstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(bstate->arg, econtext, isNull);
switch (btest->booltesttype)
{
@@ -4008,16 +3359,13 @@ ExecEvalBooleanTest(GenericExprState *bstate,
*/
static Datum
ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
CoerceToDomain *ctest = (CoerceToDomain *) cstate->xprstate.expr;
Datum result;
ListCell *l;
- result = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(cstate->arg, econtext, isNull);
/* Make sure we have up-to-date constraints */
UpdateDomainConstraintRef(cstate->constraint_ref);
@@ -4055,8 +3403,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
econtext->domainValue_datum = result;
econtext->domainValue_isNull = *isNull;
- conResult = ExecEvalExpr(con->check_expr,
- econtext, &conIsNull, NULL);
+ conResult = ExecEvalExpr(con->check_expr, econtext,
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
@@ -4091,10 +3439,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
static Datum
ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->domainValue_isNull;
return econtext->domainValue_datum;
}
@@ -4108,8 +3454,7 @@ ExecEvalCoerceToDomainValue(ExprState *exprstate,
static Datum
ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldSelect *fselect = (FieldSelect *) fstate->xprstate.expr;
AttrNumber fieldnum = fselect->fieldnum;
@@ -4122,7 +3467,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
Form_pg_attribute attr;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
/* this test covers the isDone exception too: */
if (*isNull)
@@ -4187,8 +3532,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
static Datum
ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldStore *fstore = (FieldStore *) fstate->xprstate.expr;
HeapTuple tuple;
@@ -4201,10 +3545,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
ListCell *l1,
*l2;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return tupDatum;
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
/* Lookup tupdesc if first time through or after rescan */
tupDesc = get_cached_rowtype(fstore->resulttype, -1,
@@ -4264,8 +3605,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
values[fieldnum - 1] = ExecEvalExpr(newval,
econtext,
- &isnull[fieldnum - 1],
- NULL);
+ &isnull[fieldnum - 1]);
}
econtext->caseValue_datum = save_datum;
@@ -4288,9 +3628,9 @@ ExecEvalFieldStore(FieldStoreState *fstate,
static Datum
ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- return ExecEvalExpr(exprstate->arg, econtext, isNull, isDone);
+ return ExecEvalExpr(exprstate->arg, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -4302,16 +3642,13 @@ ExecEvalRelabelType(GenericExprState *exprstate,
static Datum
ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
Datum inputval;
char *string;
- inputval = ExecEvalExpr(iostate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return inputval; /* nothing to do */
+ inputval = ExecEvalExpr(iostate->arg, econtext, isNull);
if (*isNull)
string = NULL; /* output functions are not called on nulls */
@@ -4336,16 +3673,14 @@ ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
static Datum
ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) astate->xprstate.expr;
Datum result;
FunctionCallInfoData locfcinfo;
- result = ExecEvalExpr(astate->arg, econtext, isNull, isDone);
+ result = ExecEvalExpr(astate->arg, econtext, isNull);
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to do */
if (*isNull)
return result; /* nothing to do */
@@ -4413,7 +3748,7 @@ ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
*/
static Datum
ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -4430,14 +3765,13 @@ ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
Datum
ExecEvalExprSwitchContext(ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum retDatum;
MemoryContext oldContext;
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
- retDatum = ExecEvalExpr(expression, econtext, isNull, isDone);
+ retDatum = ExecEvalExpr(expression, econtext, isNull);
MemoryContextSwitchTo(oldContext);
return retDatum;
}
@@ -5293,7 +4627,7 @@ ExecQual(List *qual, ExprContext *econtext, bool resultForNull)
Datum expr_value;
bool isNull;
- expr_value = ExecEvalExpr(clause, econtext, &isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, &isNull);
if (isNull)
{
@@ -5351,17 +4685,9 @@ ExecCleanTargetListLength(List *targetlist)
/*
* ExecTargetList
* Evaluates a targetlist with respect to the given
- * expression context. Returns TRUE if we were able to create
- * a result, FALSE if we have exhausted a set-valued expression.
+ * expression context.
*
* Results are stored into the passed values and isnull arrays.
- * The caller must provide an itemIsDone array that persists across calls.
- *
- * As with ExecEvalExpr, the caller should pass isDone = NULL if not
- * prepared to deal with sets of result tuples. Otherwise, a return
- * of *isDone = ExprMultipleResult signifies a set element, and a return
- * of *isDone = ExprEndResult signifies end of the set of tuple.
- * We assume that *isDone has been initialized to ExprSingleResult by caller.
*
* Since fields of the result tuple might be multiply referenced in higher
* plan nodes, we have to force any read/write expanded values to read-only
@@ -5370,19 +4696,16 @@ ExecCleanTargetListLength(List *targetlist)
* actually-multiply-referenced Vars and insert an expression node that
* would do that only where really required.
*/
-static bool
+static void
ExecTargetList(List *targetlist,
TupleDesc tupdesc,
ExprContext *econtext,
Datum *values,
- bool *isnull,
- ExprDoneCond *itemIsDone,
- ExprDoneCond *isDone)
+ bool *isnull)
{
Form_pg_attribute *att = tupdesc->attrs;
MemoryContext oldContext;
ListCell *tl;
- bool haveDoneSets;
/*
* Run in short-lived per-tuple context while computing expressions.
@@ -5392,8 +4715,6 @@ ExecTargetList(List *targetlist,
/*
* evaluate all the expressions in the target list
*/
- haveDoneSets = false; /* any exhausted set exprs in tlist? */
-
foreach(tl, targetlist)
{
GenericExprState *gstate = (GenericExprState *) lfirst(tl);
@@ -5402,117 +4723,15 @@ ExecTargetList(List *targetlist,
values[resind] = ExecEvalExpr(gstate->arg,
econtext,
- &isnull[resind],
- &itemIsDone[resind]);
+ &isnull[resind]);
values[resind] = MakeExpandedObjectReadOnly(values[resind],
isnull[resind],
att[resind]->attlen);
-
- if (itemIsDone[resind] != ExprSingleResult)
- {
- /* We have a set-valued expression in the tlist */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- if (itemIsDone[resind] == ExprMultipleResult)
- {
- /* we have undone sets in the tlist, set flag */
- *isDone = ExprMultipleResult;
- }
- else
- {
- /* we have done sets in the tlist, set flag for that */
- haveDoneSets = true;
- }
- }
- }
-
- if (haveDoneSets)
- {
- /*
- * note: can't get here unless we verified isDone != NULL
- */
- if (*isDone == ExprSingleResult)
- {
- /*
- * all sets are done, so report that tlist expansion is complete.
- */
- *isDone = ExprEndResult;
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- else
- {
- /*
- * We have some done and some undone sets. Restart the done ones
- * so that we can deliver a tuple (if possible).
- */
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
-
- values[resind] = MakeExpandedObjectReadOnly(values[resind],
- isnull[resind],
- att[resind]->attlen);
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- /*
- * Oh dear, this item is returning an empty set. Guess
- * we can't make a tuple after all.
- */
- *isDone = ExprEndResult;
- break;
- }
- }
- }
-
- /*
- * If we cannot make a tuple because some sets are empty, we still
- * have to cycle the nonempty sets to completion, else resources
- * will not be released from subplans etc.
- *
- * XXX is that still necessary?
- */
- if (*isDone == ExprEndResult)
- {
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- while (itemIsDone[resind] == ExprMultipleResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
- /* no need for MakeExpandedObjectReadOnly */
- }
- }
-
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- }
}
/* Report success */
MemoryContextSwitchTo(oldContext);
-
- return true;
}
/*
@@ -5529,7 +4748,7 @@ ExecTargetList(List *targetlist,
* result slot.
*/
TupleTableSlot *
-ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
+ExecProject(ProjectionInfo *projInfo)
{
TupleTableSlot *slot;
ExprContext *econtext;
@@ -5546,10 +4765,6 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
slot = projInfo->pi_slot;
econtext = projInfo->pi_exprContext;
- /* Assume single result row until proven otherwise */
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* Clear any former contents of the result slot. This makes it safe for
* us to use the slot's Datum/isnull arrays as workspace. (Also, we can
@@ -5617,21 +4832,15 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
}
/*
- * If there are any generic expressions, evaluate them. It's possible
- * that there are set-returning functions in such expressions; if so and
- * we have reached the end of the set, we return the result slot, which we
- * already marked empty.
+ * If there are any generic expressions, evaluate them.
*/
if (projInfo->pi_targetlist)
{
- if (!ExecTargetList(projInfo->pi_targetlist,
- slot->tts_tupleDescriptor,
- econtext,
- slot->tts_values,
- slot->tts_isnull,
- projInfo->pi_itemIsDone,
- isDone))
- return slot; /* no more result rows, return empty slot */
+ ExecTargetList(projInfo->pi_targetlist,
+ slot->tts_tupleDescriptor,
+ econtext,
+ slot->tts_values,
+ slot->tts_isnull);
}
/*
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index fb0013d..eb224b4 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -125,8 +125,6 @@ ExecScan(ScanState *node,
ExprContext *econtext;
List *qual;
ProjectionInfo *projInfo;
- ExprDoneCond isDone;
- TupleTableSlot *resultSlot;
/*
* Fetch data from node
@@ -146,21 +144,6 @@ ExecScan(ScanState *node,
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- Assert(projInfo); /* can't get here if not projecting */
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -214,15 +197,9 @@ ExecScan(ScanState *node,
{
/*
* Form a projection tuple, store it in the result tuple slot
- * and return it --- unless we find we can project no tuples
- * from this scan tuple, in which case continue scan.
+ * and return it.
*/
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(projInfo);
}
else
{
@@ -352,9 +329,6 @@ ExecScanReScan(ScanState *node)
{
EState *estate = node->ps.state;
- /* Stop projecting any tuples from SRFs in the targetlist */
- node->ps.ps_TupFromTlist = false;
-
/* Rescan EvalPlanQual tuple if we're inside an EvalPlanQual recheck */
if (estate->es_epqScanDone != NULL)
{
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index e937cf8..ded073a 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -592,12 +592,6 @@ ExecBuildProjectionInfo(List *targetList,
projInfo->pi_numSimpleVars = numSimpleVars;
projInfo->pi_directMap = directMap;
- if (exprlist == NIL)
- projInfo->pi_itemIsDone = NULL; /* not needed */
- else
- projInfo->pi_itemIsDone = (ExprDoneCond *)
- palloc(len * sizeof(ExprDoneCond));
-
return projInfo;
}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 1ec2515..046e1b2 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -859,13 +859,13 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
bool isnull;
res = ExecEvalExprSwitchContext(filter, aggstate->tmpcontext,
- &isnull, NULL);
+ &isnull);
if (isnull || !DatumGetBool(res))
continue;
}
/* Evaluate the current input expressions for this aggregate */
- slot = ExecProject(pertrans->evalproj, NULL);
+ slot = ExecProject(pertrans->evalproj);
if (pertrans->numSortCols > 0)
{
@@ -951,7 +951,7 @@ combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo;
/* Evaluate the current input expressions for this aggregate */
- slot = ExecProject(pertrans->evalproj, NULL);
+ slot = ExecProject(pertrans->evalproj);
Assert(slot->tts_nvalid >= 1);
/*
@@ -1325,8 +1325,7 @@ finalize_aggregate(AggState *aggstate,
fcinfo.arg[i] = ExecEvalExpr(expr,
aggstate->ss.ps.ps_ExprContext,
- &fcinfo.argnull[i],
- NULL);
+ &fcinfo.argnull[i]);
anynull |= fcinfo.argnull[i];
i++;
}
@@ -1579,20 +1578,10 @@ project_aggregates(AggState *aggstate)
if (ExecQual(aggstate->ss.ps.qual, econtext, false))
{
/*
- * Form and return or store a projection tuple using the aggregate
- * results and the representative input tuple.
+ * Form and return a projection tuple using the aggregate results and
+ * the representative input tuple.
*/
- ExprDoneCond isDone;
- TupleTableSlot *result;
-
- result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- aggstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(aggstate->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(aggstate, 1);
@@ -1803,22 +1792,6 @@ ExecAgg(AggState *node)
TupleTableSlot *result;
/*
- * Check to see if we're still projecting out tuples from a previous agg
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* (We must do the ps_TupFromTlist check first, because in some cases
* agg_done gets set before we emit the final aggregate tuple, and we have
* to finish running SRFs for it.)
@@ -2443,8 +2416,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&aggstate->ss.ps);
ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
- aggstate->ss.ps.ps_TupFromTlist = false;
-
/*
* get the count of aggregates in targetlist and quals
*/
@@ -3411,8 +3382,6 @@ ExecReScanAgg(AggState *node)
node->agg_done = false;
- node->ss.ps.ps_TupFromTlist = false;
-
if (aggnode->aggstrategy == AGG_HASHED)
{
/*
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..16381f6 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -575,8 +575,6 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c
index 3c2f684..1acb166 100644
--- a/src/backend/executor/nodeCtescan.c
+++ b/src/backend/executor/nodeCtescan.c
@@ -265,8 +265,6 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
ExecAssignScanProjectionInfo(&scanstate->ss);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c
index 322abca..b465252 100644
--- a/src/backend/executor/nodeCustom.c
+++ b/src/backend/executor/nodeCustom.c
@@ -48,8 +48,6 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags)
/* create expression context for node */
ExecAssignExprContext(estate, &css->ss.ps);
- css->ss.ps.ps_TupFromTlist = false;
-
/* initialize child expressions */
css->ss.ps.targetlist = (List *)
ExecInitExpr((Expr *) cscan->scan.plan.targetlist,
diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c
index d886aaf..3762843 100644
--- a/src/backend/executor/nodeForeignscan.c
+++ b/src/backend/executor/nodeForeignscan.c
@@ -152,8 +152,6 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index a03f6e7..7d0fe14 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -331,8 +331,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 438d1b2..51754c8 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -99,8 +99,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
outerNode = outerPlan(node);
outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);
- gatherstate->ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -131,8 +129,6 @@ ExecGather(GatherState *node)
TupleTableSlot *fslot = node->funnel_slot;
int i;
TupleTableSlot *slot;
- TupleTableSlot *resultSlot;
- ExprDoneCond isDone;
ExprContext *econtext;
/*
@@ -198,20 +194,6 @@ ExecGather(GatherState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note we can't do this
* until we're done projecting. This will also clear any previous tuple
@@ -239,13 +221,8 @@ ExecGather(GatherState *node)
* back around for another tuple
*/
econtext->ecxt_outertuple = slot;
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(node->ps.ps_ProjInfo);
}
return slot;
diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c
index dcf5175..2f55c70 100644
--- a/src/backend/executor/nodeGroup.c
+++ b/src/backend/executor/nodeGroup.c
@@ -50,23 +50,6 @@ ExecGroup(GroupState *node)
grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx;
/*
- * Check to see if we're still projecting out tuples from a previous group
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* The ScanTupleSlot holds the (copied) first tuple of each group.
*/
firsttupleslot = node->ss.ss_ScanTupleSlot;
@@ -107,16 +90,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -170,16 +144,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -246,8 +211,6 @@ ExecInitGroup(Group *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&grpstate->ss.ps);
ExecAssignProjectionInfo(&grpstate->ss.ps, NULL);
- grpstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Precompute fmgr lookup data for inner loop
*/
@@ -283,7 +246,6 @@ ExecReScanGroup(GroupState *node)
PlanState *outerPlan = outerPlanState(node);
node->grp_done = FALSE;
- node->ss.ps.ps_TupFromTlist = false;
/* must clear first tuple */
ExecClearTuple(node->ss.ss_ScanTupleSlot);
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 9ed09a7..e008a51 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -963,7 +963,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
/*
* Get the join attribute value of the tuple
*/
- keyval = ExecEvalExpr(keyexpr, econtext, &isNull, NULL);
+ keyval = ExecEvalExpr(keyexpr, econtext, &isNull);
/*
* If the attribute is NULL, and the join operator is strict, then
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index 369e666..45c7be2 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -66,7 +66,6 @@ ExecHashJoin(HashJoinState *node)
List *joinqual;
List *otherqual;
ExprContext *econtext;
- ExprDoneCond isDone;
HashJoinTable hashtable;
TupleTableSlot *outerTupleSlot;
uint32 hashvalue;
@@ -83,22 +82,6 @@ ExecHashJoin(HashJoinState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -315,16 +298,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -354,16 +328,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -393,16 +358,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -586,7 +542,6 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
/* child Hash node needs to evaluate inner hash keys, too */
((HashState *) innerPlanState(hjstate))->hashkeys = rclauses;
- hjstate->js.ps.ps_TupFromTlist = false;
hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
hjstate->hj_MatchedOuter = false;
hjstate->hj_OuterNotEmpty = false;
@@ -1000,7 +955,6 @@ ExecReScanHashJoin(HashJoinState *node)
node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
node->hj_CurTuple = NULL;
- node->js.ps.ps_TupFromTlist = false;
node->hj_MatchedOuter = false;
node->hj_FirstOuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..edd45661 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -412,8 +412,6 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..d1b1c23 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -336,8 +336,7 @@ EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext)
node->iss_OrderByValues[i] = ExecEvalExpr(orderby,
econtext,
- &node->iss_OrderByNulls[i],
- NULL);
+ &node->iss_OrderByNulls[i]);
i++;
}
@@ -590,8 +589,7 @@ ExecIndexEvalRuntimeKeys(ExprContext *econtext,
*/
scanvalue = ExecEvalExpr(key_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
scan_key->sk_argument = scanvalue;
@@ -648,8 +646,7 @@ ExecIndexEvalArrayKeys(ExprContext *econtext,
*/
arraydatum = ExecEvalExpr(array_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
result = false;
@@ -837,8 +834,6 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c
index faf32e1..7fed1c0 100644
--- a/src/backend/executor/nodeLimit.c
+++ b/src/backend/executor/nodeLimit.c
@@ -239,8 +239,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitOffset,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL offset as no offset */
if (isNull)
node->offset = 0;
@@ -263,8 +262,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitCount,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL count as no count (LIMIT ALL) */
if (isNull)
{
diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c
index 6db09b8..340a2a9 100644
--- a/src/backend/executor/nodeMergejoin.c
+++ b/src/backend/executor/nodeMergejoin.c
@@ -313,7 +313,7 @@ MJEvalOuterValues(MergeJoinState *mergestate)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->ldatum = ExecEvalExpr(clause->lexpr, econtext,
- &clause->lisnull, NULL);
+ &clause->lisnull);
if (clause->lisnull)
{
/* match is impossible; can we end the join early? */
@@ -360,7 +360,7 @@ MJEvalInnerValues(MergeJoinState *mergestate, TupleTableSlot *innerslot)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->rdatum = ExecEvalExpr(clause->rexpr, econtext,
- &clause->risnull, NULL);
+ &clause->risnull);
if (clause->risnull)
{
/* match is impossible; can we end the join early? */
@@ -465,19 +465,10 @@ MJFillOuter(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
MJ_printf("ExecMergeJoin: returning outer fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -506,19 +497,9 @@ MJFillInner(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning inner fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -642,23 +623,6 @@ ExecMergeJoin(MergeJoinState *node)
doFillInner = node->mj_FillInner;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -856,20 +820,9 @@ ExecMergeJoin(MergeJoinState *node)
* qualification succeeded. now form the desired
* projection tuple and return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo,
- &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -1629,7 +1582,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
* initialize join state
*/
mergestate->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- mergestate->js.ps.ps_TupFromTlist = false;
mergestate->mj_MatchedOuter = false;
mergestate->mj_MatchedInner = false;
mergestate->mj_OuterTupleSlot = NULL;
@@ -1684,7 +1636,6 @@ ExecReScanMergeJoin(MergeJoinState *node)
ExecClearTuple(node->mj_MarkedTupleSlot);
node->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- node->js.ps.ps_TupFromTlist = false;
node->mj_MatchedOuter = false;
node->mj_MatchedInner = false;
node->mj_OuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index af7b26c..0e6187b 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -175,7 +175,7 @@ ExecProcessReturning(ResultRelInfo *resultRelInfo,
econtext->ecxt_outertuple = planSlot;
/* Compute the RETURNING expressions */
- return ExecProject(projectReturning, NULL);
+ return ExecProject(projectReturning);
}
/*
@@ -1216,7 +1216,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
}
/* Project the new tuple version */
- ExecProject(resultRelInfo->ri_onConflictSetProj, NULL);
+ ExecProject(resultRelInfo->ri_onConflictSetProj);
/*
* Note that it is possible that the target tuple has been modified in
diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c
index 555fa09..5d30e75 100644
--- a/src/backend/executor/nodeNestloop.c
+++ b/src/backend/executor/nodeNestloop.c
@@ -82,23 +82,6 @@ ExecNestLoop(NestLoopState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -201,19 +184,10 @@ ExecNestLoop(NestLoopState *node)
* the slot containing the result tuple using
* ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -259,19 +233,10 @@ ExecNestLoop(NestLoopState *node)
* qualification was satisfied so we project and return the
* slot containing the result tuple using ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -377,7 +342,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags)
/*
* finally, wipe the current outer tuple clean.
*/
- nlstate->js.ps.ps_TupFromTlist = false;
nlstate->nl_NeedNewOuter = true;
nlstate->nl_MatchedOuter = false;
@@ -441,7 +405,6 @@ ExecReScanNestLoop(NestLoopState *node)
* outer Vars are used as run-time keys...
*/
- node->js.ps.ps_TupFromTlist = false;
node->nl_NeedNewOuter = true;
node->nl_MatchedOuter = false;
}
diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c
index 4007b76..3901351 100644
--- a/src/backend/executor/nodeResult.c
+++ b/src/backend/executor/nodeResult.c
@@ -67,10 +67,8 @@ TupleTableSlot *
ExecResult(ResultState *node)
{
TupleTableSlot *outerTupleSlot;
- TupleTableSlot *resultSlot;
PlanState *outerPlan;
ExprContext *econtext;
- ExprDoneCond isDone;
econtext = node->ps.ps_ExprContext;
@@ -92,20 +90,6 @@ ExecResult(ResultState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -147,18 +131,8 @@ ExecResult(ResultState *node)
node->rs_done = true;
}
- /*
- * form the result tuple using ExecProject(), and return it --- unless
- * the projection produces an empty set, in which case we must loop
- * back to see if there are more outerPlan tuples.
- */
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ /* form the result tuple using ExecProject(), and return it */
+ return ExecProject(node->ps.ps_ProjInfo);
}
return NULL;
@@ -228,8 +202,6 @@ ExecInitResult(Result *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &resstate->ps);
- resstate->ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
@@ -295,7 +267,6 @@ void
ExecReScanResult(ResultState *node)
{
node->rs_done = false;
- node->ps.ps_TupFromTlist = false;
node->rs_checkqual = (node->resconstantqual == NULL) ? false : true;
/*
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 9ce7c02..64396e1 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -188,8 +188,6 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -299,8 +297,7 @@ tablesample_init(SampleScanState *scanstate)
params[i] = ExecEvalExprSwitchContext(argstate,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_ARGUMENT),
@@ -312,8 +309,7 @@ tablesample_init(SampleScanState *scanstate)
{
datum = ExecEvalExprSwitchContext(scanstate->repeatable,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_REPEAT),
diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c
index 00bf3a5..477dc42 100644
--- a/src/backend/executor/nodeSeqscan.c
+++ b/src/backend/executor/nodeSeqscan.c
@@ -206,8 +206,6 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index e503494..5800ca8 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -41,12 +41,10 @@
static Datum ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecHashSubPlan(SubPlanState *node,
ExprContext *econtext,
bool *isNull);
@@ -69,15 +67,12 @@ static bool slotNoNulls(TupleTableSlot *slot);
static Datum
ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
SubPlan *subplan = (SubPlan *) node->xprstate.expr;
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Sanity checks */
if (subplan->subLinkType == CTE_SUBLINK)
@@ -128,7 +123,7 @@ ExecHashSubPlan(SubPlanState *node,
* have to set the econtext to use (hack alert!).
*/
node->projLeft->pi_exprContext = econtext;
- slot = ExecProject(node->projLeft, NULL);
+ slot = ExecProject(node->projLeft);
/*
* Note: because we are typically called in a per-tuple context, we have
@@ -285,8 +280,7 @@ ExecScanSubPlan(SubPlanState *node,
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -403,7 +397,7 @@ ExecScanSubPlan(SubPlanState *node,
}
rowresult = ExecEvalExprSwitchContext(node->testexpr, econtext,
- &rownull, NULL);
+ &rownull);
if (subLinkType == ANY_SUBLINK)
{
@@ -570,7 +564,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)
&(prmdata->isnull));
col++;
}
- slot = ExecProject(node->projRight, NULL);
+ slot = ExecProject(node->projRight);
/*
* If result contains any nulls, store separately or not at all.
@@ -987,8 +981,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -1224,8 +1217,7 @@ ExecInitAlternativeSubPlan(AlternativeSubPlan *asplan, PlanState *parent)
static Datum
ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* Just pass control to the active subplan */
SubPlanState *activesp = (SubPlanState *) list_nth(node->subplans,
@@ -1233,8 +1225,5 @@ ExecAlternativeSubPlan(AlternativeSubPlanState *node,
Assert(IsA(activesp, SubPlanState));
- return ExecSubPlan(activesp,
- econtext,
- isNull,
- isDone);
+ return ExecSubPlan(activesp, econtext, isNull);
}
diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c
index 9bafc62..4de7024 100644
--- a/src/backend/executor/nodeSubqueryscan.c
+++ b/src/backend/executor/nodeSubqueryscan.c
@@ -138,8 +138,6 @@ ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags)
*/
subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags);
- subquerystate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize scan tuple type (needed by ExecAssignScanProjectionInfo)
*/
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index 2604103..e1c736c 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -104,8 +104,7 @@ TidListCreate(TidScanState *tidstate)
itemptr = (ItemPointer)
DatumGetPointer(ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL));
+ &isNull));
if (!isNull &&
ItemPointerIsValid(itemptr) &&
ItemPointerGetBlockNumber(itemptr) < nblocks)
@@ -133,8 +132,7 @@ TidListCreate(TidScanState *tidstate)
exstate = (ExprState *) lsecond(saexstate->fxprstate.args);
arraydatum = ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
continue;
itemarray = DatumGetArrayTypeP(arraydatum);
@@ -469,8 +467,6 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &tidstate->ss.ps);
- tidstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c
index 9c03f8a..18c8ae9 100644
--- a/src/backend/executor/nodeValuesscan.c
+++ b/src/backend/executor/nodeValuesscan.c
@@ -140,8 +140,7 @@ ValuesNext(ValuesScanState *node)
values[resind] = ExecEvalExpr(estate,
econtext,
- &isnull[resind],
- NULL);
+ &isnull[resind]);
/*
* We must force any R/W expanded datums to read-only state, in
@@ -272,8 +271,6 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags)
scanstate->exprlists[i++] = (List *) lfirst(vtl);
}
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index d4c88a1..fc111fa 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -256,7 +256,7 @@ advance_windowaggregate(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -272,7 +272,7 @@ advance_windowaggregate(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -418,7 +418,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -434,7 +434,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -1558,8 +1558,6 @@ update_frametailpos(WindowObject winobj, TupleTableSlot *slot)
TupleTableSlot *
ExecWindowAgg(WindowAggState *winstate)
{
- TupleTableSlot *result;
- ExprDoneCond isDone;
ExprContext *econtext;
int i;
int numfuncs;
@@ -1568,23 +1566,6 @@ ExecWindowAgg(WindowAggState *winstate)
return NULL;
/*
- * Check to see if we're still projecting out tuples from a previous
- * output tuple (because there is a function-returning-set in the
- * projection expressions). If so, try to project another one.
- */
- if (winstate->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- winstate->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* Compute frame offset values, if any, during first call.
*/
if (winstate->all_first)
@@ -1601,8 +1582,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->startOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->startOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1627,8 +1607,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->endOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->endOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1651,7 +1630,6 @@ ExecWindowAgg(WindowAggState *winstate)
winstate->all_first = false;
}
-restart:
if (winstate->buffer == NULL)
{
/* Initialize for first partition and set current row = 0 */
@@ -1743,17 +1721,8 @@ restart:
* evaluated with respect to that row.
*/
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprEndResult)
- {
- /* SRF in tlist returned no rows, so advance to next input tuple */
- goto restart;
- }
-
- winstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
+ return ExecProject(winstate->ss.ps.ps_ProjInfo);
}
/* -----------------
@@ -1867,8 +1836,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&winstate->ss.ps);
ExecAssignProjectionInfo(&winstate->ss.ps, NULL);
- winstate->ss.ps.ps_TupFromTlist = false;
-
/* Set up data for comparing tuples */
if (node->partNumCols > 0)
winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols,
@@ -2061,8 +2028,6 @@ ExecReScanWindowAgg(WindowAggState *node)
ExprContext *econtext = node->ss.ps.ps_ExprContext;
node->all_done = false;
-
- node->ss.ps.ps_TupFromTlist = false;
node->all_first = true;
/* release tuplestore et al */
@@ -2685,7 +2650,7 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2784,7 +2749,7 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2814,5 +2779,5 @@ WinGetFuncArgCurrent(WindowObject winobj, int argno, bool *isnull)
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c
index cfed6e6..dbb8ea3 100644
--- a/src/backend/executor/nodeWorktablescan.c
+++ b/src/backend/executor/nodeWorktablescan.c
@@ -174,8 +174,6 @@ ExecInitWorkTableScan(WorkTableScan *node, EState *estate, int eflags)
*/
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 7e60694..ab30f7e 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -5171,7 +5171,7 @@ evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
*/
const_val = ExecEvalExprSwitchContext(exprstate,
GetPerTupleExprContext(estate),
- &const_is_null, NULL);
+ &const_is_null);
/* Get info needed about result datatype */
get_typlenbyval(result_type, &resultTypLen, &resultTypByVal);
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index 2c2efb1..0c59fe8 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -1596,7 +1596,7 @@ operator_predicate_proof(Expr *predicate, Node *clause, bool refute_it)
/* And execute it. */
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
/* Get back to outer memory context */
MemoryContextSwitchTo(oldcontext);
diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c
index 19ee4ce..c568c6c 100644
--- a/src/backend/utils/adt/domains.c
+++ b/src/backend/utils/adt/domains.c
@@ -164,7 +164,7 @@ domain_check_input(Datum value, bool isnull, DomainIOData *my_extra)
conResult = ExecEvalExprSwitchContext(con->check_expr,
econtext,
- &conIsNull, NULL);
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 7ed5bcb..65bf6ad 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -603,7 +603,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
str = NULL;
else
@@ -620,7 +620,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
/* here we can just forget NULL elements immediately */
if (!isnull)
{
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 39521ed..7f6a2bc 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -69,8 +69,8 @@
* now it's just a macro invoking the function pointed to by an ExprState
* node. Beware of double evaluation of the ExprState argument!
*/
-#define ExecEvalExpr(expr, econtext, isNull, isDone) \
- ((*(expr)->evalfunc) (expr, econtext, isNull, isDone))
+#define ExecEvalExpr(expr, econtext, isNull) \
+ ((*(expr)->evalfunc) (expr, econtext, isNull))
/* Hook for plugins to get control in ExecutorStart() */
@@ -240,14 +240,13 @@ extern Tuplestorestate *ExecMakeTableFunctionResult(ExprState *funcexpr,
TupleDesc expectedDesc,
bool randomAccess);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
-extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo,
- ExprDoneCond *isDone);
+extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo);
/*
* prototypes from functions in execScan.c
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index e7fd7bd..043f969 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -243,7 +243,6 @@ typedef struct ProjectionInfo
List *pi_targetlist;
ExprContext *pi_exprContext;
TupleTableSlot *pi_slot;
- ExprDoneCond *pi_itemIsDone;
bool pi_directMap;
int pi_numSimpleVars;
int *pi_varSlotOffsets;
@@ -569,8 +568,7 @@ typedef struct ExprState ExprState;
typedef Datum (*ExprStateEvalFunc) (ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
struct ExprState
{
@@ -692,21 +690,13 @@ typedef struct FuncExprState
TupleTableSlot *funcResultSlot;
/*
- * In some cases we need to compute a tuple descriptor for the function's
- * output. If so, it's stored here.
- */
- TupleDesc funcResultDesc;
- bool funcReturnsTuple; /* valid when funcResultDesc isn't
- * NULL */
-
- /*
* setArgsValid is true when we are evaluating a set-returning function
* that uses value-per-call mode and we are in the middle of a call
* series; we want to pass the same argument values to the function again
* (and again, until it returns ExprEndResult). This indicates that
* fcinfo_data already contains valid argument data.
*/
bool setArgsValid;
/*
* Flag to remember whether we found a set-valued argument to the
@@ -1057,8 +1047,6 @@ typedef struct PlanState
TupleTableSlot *ps_ResultTupleSlot; /* slot for my result tuples */
ExprContext *ps_ExprContext; /* node's expression-evaluation context */
ProjectionInfo *ps_ProjInfo; /* info for doing tuple projection */
- bool ps_TupFromTlist;/* state flag for processing set-valued
- * functions in targetlist */
} PlanState;
/* ----------------
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index 586ff1f..3ae4489 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -5443,8 +5443,7 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate,
*/
*result = ExecEvalExpr(expr->expr_simple_state,
econtext,
- isNull,
- NULL);
+ isNull);
/* Assorted cleanup */
expr->expr_simple_in_use = false;
@@ -6112,7 +6111,7 @@ exec_cast_value(PLpgSQL_execstate *estate,
cast_entry->cast_in_use = true;
value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
- isnull, NULL);
+ isnull);
cast_entry->cast_in_use = false;
--
2.8.1
On 2016-08-03 20:22:03 -0700, Andres Freund wrote:
On 2016-08-02 16:30:55 -0700, Andres Freund wrote:
Besides that I'm structurally wondering whether turning the original
query into a subquery is the right thing to do. It requires some kind of
ugly munching of Query->*, and has the above problem.

It does not seem like it should be that hard, certainly no worse than
subquery pullup. Want to show code?

It's not super hard, there's some stuff like pushing/not-pushing
various sortgrouprefs to the subquery. But I think we can live with it.

Let me clean up the code some, hope to have something today or
tomorrow.

Here we go. This *clearly* is a POC, not more. But it mostly works.
0001 - adds some test, some of those change after the later patches
0002 - main SRF via ROWS FROM () implementation
0003 - Large patch removing now unused code. Most satisfying.

The interesting bit is obviously 0002. What it basically does is, at the beginning
of subquery_planner():
1) unsrfify:
move the jointree into a subquery
2) unsrfify_reference_subquery_mutator:
process the old targetlist to reference the new subquery. If a
TargetEntry doesn't contain a set, it's entirely moved into the
subquery. Otherwise all Vars/Aggrefs/... it references are moved to
the subquery, and referenced in the outer query's target list.
3) unsrfify_implement_srfs_mutator:
Replace set returning functions in the targetlist with references to
a new FUNCTION RTE. All non-nested tSRFs are part of the same RTE
(i.e. the least common multiple behaviour is gone). All tSRFs in
arguments are implemented as another FUNCTION RTE.

I discovered that we allow SRFs in UPDATE target lists. It's not clear
to me what that's supposed to mean. Nor how exactly to implement that,
given expand_targetlist(). Right now that fails with the patch, because
it re-inserts Var's for the relation replaced by the subquery.

Note that I've not bothered to fix up the regression test output - I'm
certain that explain output and such will still change.

Biggest questions / tasks:
* General approach
* DML handling
* Operator implementation
* SETOF record handling
* correct handling of lateral dependency from RTE to subquery to force
evaluation order, instead of my RangeTblEntry->deps hack.
* lot of cleanup

Comments?
Tom, do you think this is roughly going in the right direction? My plan
here is to develop two patches, to come before this:
a) Allow avoiding a tuplestore for SRF_PERCALL SRFs in ROWS FROM -
otherwise our performance would regress noticeably in some cases.
b) Allow ROWS FROM() to return SETOF RECORD type SRFs as one column,
instead of expanded. That's important to be able move SETOF RECORD
returning functions in the targetlist into ROWS FROM, which otherwise
requires an explicit column list.
Greetings,
Andres Freund
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 2016-08-17 17:41:28 -0700, Andres Freund wrote:
Tom, do you think this is roughly going in the right direction? My plan
here is to develop two patches, to come before this:

a) Allow avoiding a tuplestore for SRF_PERCALL SRFs in ROWS FROM -
otherwise our performance would regress noticeably in some cases.
b) Allow ROWS FROM() to return SETOF RECORD type SRFs as one column,
instead of expanded. That's important to be able move SETOF RECORD
returning functions in the targetlist into ROWS FROM, which otherwise
requires an explicit column list.
I'm working on these. Atm ExecMakeTableFunctionResult() resides in
execQual.c - I'm inlining it into nodeFunctionscan.c now, because
there's no other callers, and having it separate seems to bring no
benefit.
Please speak up soon if you disagree.
Andres
Hi,
On 2016-05-23 09:26:03 +0800, Craig Ringer wrote:
SRFs-in-tlist are a lot faster for lockstep iteration etc. They're also
much simpler to write, though if the result rowcount differs
unexpectedly between the functions you get exciting and unexpected
behaviour.

WITH ORDINALITY provides what I think is the last of the functionality
needed to replace SRFs-in-tlist, but at a syntactic complexity and
performance cost. The following example demonstrates that, though it
doesn't do anything that needs LATERAL etc. I'm aware the following aren't
semantically identical if the rowcounts differ.
I think here you're just missing ROWS FROM (generate_series(..), generate_series(...))
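For the record, that lockstep form looks like this (a sketch; the column aliases are mine):

```sql
SELECT * FROM ROWS FROM (generate_series(1, 3), generate_series(5, 7)) AS t(sa, sb);
-- sa | sb
--  1 |  5
--  2 |  6
--  3 |  7
```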
Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-08-17 17:41:28 -0700, Andres Freund wrote:
Tom, do you think this is roughly going in the right direction?
I've not had time to look at this patch, I'm afraid. If you still
want me to, I can make time in a day or so.
I'm working on these. Atm ExecMakeTableFunctionResult() resides in
execQual.c - I'm inlining it into nodeFunctionscan.c now, because
there's no other callers, and having it separate seems to bring no
benefit.
Please speak up soon if you disagree.
I think ExecMakeTableFunctionResult was placed in execQual.c because
it seemed to belong there alongside the support for SRFs in tlists.
If that's going away then there's no good reason not to move the logic
to where it's used.
regards, tom lane
Hi,
as noted in [1] I started hacking on removing the current implementation
of SRFs in the targetlist (tSRFs henceforth). IM discussion brought the
need for a description of the problem, need and approach to light.
There are several reasons for wanting to get rid of tSRFs. The primary
ones in my opinion are that the current behaviour of several SRFs in one
targetlist is confusing, and that the implementation burden currently is
all over the executor. Especially the latter is what is motivating me
working on this, because it blocks my work on making the executor faster
for queries involving significant amounts of tuples. Batching is hard
if random places in the querytree can increase the number of tuples.
The basic idea, hinted at in several threads, is, at plan time, to convert a query like
SELECT generate_series(1, 10);
into
SELECT generate_series FROM ROWS FROM(generate_series(1, 10));
thereby avoiding the complications in the executor (c.f. execQual.c
handling of isDone/ExprMultipleResult and supporting code in many
executor nodes / node->*.ps.ps_TupFromTlist).
There are several design questions along the way:
1) How to deal with the least-common-multiple behaviour of tSRFs. E.g.
=# SELECT generate_series(1, 3), generate_series(1,2);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 3 │ 2 │
└─────────────────┴─────────────────┘
(6 rows)
but
=# SELECT generate_series(1, 3), generate_series(5,7);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 5 │
│ 2 │ 6 │
│ 3 │ 7 │
└─────────────────┴─────────────────┘
discussion in this thread came, according to my reading, to the
conclusion that that behaviour is just confusing and that the ROWS FROM
behaviour of
=# SELECT * FROM ROWS FROM(generate_series(1, 3), generate_series(1,2));
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ (null) │
└─────────────────┴─────────────────┘
(3 rows)
makes more sense. We also discussed erroring out if two SRFs return
differing amounts of rows, but that seems not to be preferred so far. And
we can easily add it if we want.
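The two behaviours can be modelled outside the server; here's a rough Python sketch (not executor code) of the old least-common-multiple expansion versus the ROWS FROM NULL-padding:

```python
from math import lcm
from itertools import zip_longest

def tsrf_lcm(*cols):
    """Historical targetlist-SRF behaviour: cycle each column up to the
    least common multiple of the individual row counts."""
    n = lcm(*(len(c) for c in cols))
    return [tuple(col[i % len(col)] for col in cols) for i in range(n)]

def rows_from(*cols):
    """ROWS FROM() behaviour: advance all functions in lockstep and
    pad exhausted ones with NULL (None)."""
    return list(zip_longest(*cols))

# Equal row counts: both behaviours agree.
tsrf_lcm([1, 2, 3], [5, 6, 7])   # [(1, 5), (2, 6), (3, 7)]
rows_from([1, 2, 3], [5, 6, 7])  # [(1, 5), (2, 6), (3, 7)]

# Differing row counts: 6 rows (LCM) vs 3 rows (lockstep + NULL).
tsrf_lcm([1, 2, 3], [1, 2])      # [(1, 1), (2, 2), (3, 1), (1, 2), (2, 1), (3, 2)]
rows_from([1, 2, 3], [1, 2])     # [(1, 1), (2, 2), (3, None)]
```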
2) A naive conversion to ROWS FROM, like in the example in the
introductory paragraph, can change the output, when implemented as a
join from ROWS FROM to the rest of the query, rather than the other
way round. E.g.
=# EXPLAIN SELECT * FROM few, ROWS FROM(generate_series(1,10));
┌──────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN │
├──────────────────────────────────────────────────────────────────────────────┤
│ Nested Loop (cost=0.00..36.03 rows=2000 width=8) │
│ -> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=4) │
│ -> Materialize (cost=0.00..1.03 rows=2 width=4) │
│ -> Seq Scan on few (cost=0.00..1.02 rows=2 width=4) │
└──────────────────────────────────────────────────────────────────────────────┘
(4 rows)
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.
3) tSRFs are evaluated after GROUP BY, and window functions:
=# SELECT generate_series(1, count(*)) FROM (VALUES(1),(2),(10)) f;
┌─────────────────┐
│ generate_series │
├─────────────────┤
│ 1 │
│ 2 │
│ 3 │
└─────────────────┘
which means we have to push the "original" query into a subquery, with
the ROWS FROM laterally referencing the subquery:
SELECT generate_series FROM (SELECT count(*) FROM (VALUES(1),(2),(10)) f) s, ROWS FROM (generate_series(1,s.count));
4) The evaluation order of tSRFs in combination with ORDER BY is a bit
confusing. Namely tSRFs are implemented after ORDER BY has been
evaluated, unless the ORDER BY references the SRF.
E.g.
=# SELECT few.id, generate_series FROM ROWS FROM(generate_series(1,3)),few ORDER BY few.id DESC;
might return
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 24 │ 3 │
│ 24 │ 2 │
│ 24 │ 1 │
..
instead of
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 24 │ 1 │
│ 24 │ 2 │
│ 24 │ 3 │
as before.
which means we'll sometimes have to push down the ORDER BY into the
subquery (when not referencing tSRFs, so they're evaluated first),
sometimes evaluate them on the outside (if tSRFs are referenced)
5) tSRFs can have tSRFs as argument, e.g.:
=# SELECT generate_series(1, generate_series(1,3));
┌─────────────────┐
│ generate_series │
├─────────────────┤
│ 1 │
│ 1 │
│ 2 │
│ 1 │
│ 2 │
│ 3 │
└─────────────────┘
that can quite easily be implemented by having the "nested" tSRF
evaluate as a separate ROWS FROM expression.
Which even allows us to implement the previously forbidden
=# SELECT generate_series(generate_series(1,3), generate_series(2,4));
ERROR: 0A000: functions and operators can take at most one set argument
- not that I think that's of great value ;)
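A hand-written sketch of how the nested case could come out after the rewrite (illustration only, not planner output; aliases and column names are made up):

```sql
-- SELECT generate_series(1, generate_series(1, 3));
-- one FUNCTION RTE per nesting level, the outer one lateral:
SELECT g_outer.x
FROM ROWS FROM (generate_series(1, 3)) AS g_inner(n),
     LATERAL ROWS FROM (generate_series(1, g_inner.n)) AS g_outer(x);
```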
6) SETOF record type functions cannot directly be used in ROWS FROM() -
as ROWS FROM "expands" records returned by functions. When converting
something like
CREATE OR REPLACE FUNCTION setof_record_sql() RETURNS SETOF record LANGUAGE sql AS $$SELECT 1 AS a, 2 AS b UNION ALL SELECT 1, 2;$$;
SELECT setof_record_sql();
we don't have that available though.
The best way to handle that seems to be to introduce the ability for
ROWS FROM not to expand the record returned by a column. I'm currently
thinking that something like ROWS FROM(setof_record_sql() AS ()) would
do the trick. That'd also considerably simplify the handling of
functions returning known composite types - my current POC patch
generates a ROW(a,b,..) type expression for those.
I'm open to better syntax suggestions.
7) ROWS FROM () / functions in the FROM list are currently significantly
slower than the equivalent in the target list (for SFRM_ValuePerCall
SRFs at least):
=# COPY (SELECT generate_series(1,10000000)) TO '/dev/null';
COPY 10000000
Time: 1311.469 ms
=# COPY (SELECT * FROM generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.0", size 140000000
LOCATION: FileClose, fd.c:1484
COPY 10000000
Time: 2173.282 ms
for SFRM_Materialize SRFs there's no meaningful difference:
CREATE FUNCTION plpgsql_generate_series(bigint, bigint) RETURNS SETOF bigint LANGUAGE plpgsql AS $$BEGIN RETURN QUERY SELECT generate_series($1, $2);END;$$;
=# COPY (SELECT plpgsql_generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.2", size 180000000
COPY 10000000
Time: 3058.437 ms
=# COPY (SELECT * FROM plpgsql_generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.1", size 180000000
COPY 10000000
Time: 2964.661 ms
that makes sense, because nodeFunctionscan.c, via
ExecMakeTableFunctionResult, forces materialization of ValuePerCall
SRFs.
ISTM that we should fix that by allowing ValuePerCall without
materialization, as long as EXEC_FLAG_BACKWARD isn't required.
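As a rough Python analogy (nothing to do with the actual executor code) of materialized versus value-per-call row delivery:

```python
def generate_series(start, stop):
    # A ValuePerCall-style SRF: produces one row per request.
    for i in range(start, stop + 1):
        yield i

def scan_materialized(srf):
    # What nodeFunctionscan effectively does today: drain the whole
    # function into a tuplestore first, then hand rows out of the store.
    tuplestore = list(srf)  # full materialization up front
    yield from tuplestore

def scan_value_per_call(srf):
    # The proposed mode when EXEC_FLAG_BACKWARD isn't required:
    # forward each row as soon as the function returns it.
    yield from srf
```

Both deliver the same rows; only the buffering (and hence memory use and time to first row) differs.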
I've implemented ([2]) a prototype of this. My basic approach is:
I) During parse-analysis, remember whether a query has any tSRFs
(Query->hasTargetSRF). That avoids doing a useless pass over the
query, if no tSRFs are present.
II) At the beginning of subquery_planner(), before doing any work
operating on subqueries and such, implement SRFs if ->hasTargetSRF().
(unsrfify() in the POC)
III) Unconditionally move the "current" query into a subquery. For that
do a mutator pass over the query, replacing Vars/Aggrefs/... in the
original targetlist with Var references to the new subquery.
(unsrfify_reference_subquery_mutator() in the POC)
IV) Do a pass over the outer query's targetlist, and implement any tSRFs
using a ROWS FROM() RTE (or multiple ones in case of nested tSRFs).
(unsrfify_implement_srfs_mutator() in the POC)
that seems to mostly work well.
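As a hand-written illustration of steps II-IV (table and column names are made up; the actual transformation happens on the parsed Query tree, not on SQL text):

```sql
-- Original query containing a tSRF:
SELECT a, generate_series(1, b) FROM tab;

-- After II/III the original jointree and plain expressions move into a
-- subquery; after IV the tSRF becomes a lateral FUNCTION RTE:
SELECT s.a, srf.g
FROM (SELECT a, b FROM tab) s,
     LATERAL ROWS FROM (generate_series(1, s.b)) AS srf(g);
```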
The behaviour changes this implies are:
a) Least-common-multiple behaviour, as in (1) above, is gone. I think
that's good.
b) We currently allow tSRFs in UPDATE ... SET expressions. I don't
actually know what that's supposed to mean. E.g.:
=# CREATE TABLE blarg AS SELECT 1::int a;
SELECT 1
=# UPDATE blarg SET a = generate_series(2,3);
UPDATE 1
=# SELECT * FROM blarg ;
┌───┐
│ a │
├───┤
│ 2 │
└───┘
I'm inclined to think that that's a bad idea, and should rather be
forbidden.
c) COALESCE/CASE have, so far, shortcut tSRF expansion. E.g.
SELECT id, COALESCE(1, generate_series(1,2)) FROM (VALUES(1),(2)) few(id);
returns only two rows, despite the generate_series(). But by
implementing the generate_series as a ROWS FROM, it'd return four.
I think that's ok.
d) Not a problem with the patch per se, but I'm doubtful that that's ok:
=# SELECT 1 ORDER BY generate_series(1, 10);
returns 10 rows ;) - maybe we should forbid that?
As the patch currently stands, the diffstat is
56 files changed, 953 insertions(+), 1599 deletions(-)
which isn't bad. I'd guess that a few more lines are needed, but I'd
still bet it's a net negative code-wise.
Regards,
Andres Freund
[1]: http://archives.postgresql.org/message-id/20160801082346.nfp2g7mg74alifdc%40alap3.anarazel.de
[2]: http://archives.postgresql.org/message-id/20160804032203.jprhdkx273sqhksd%40alap3.anarazel.de
Hi,
On 2016-08-22 16:20:58 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-08-17 17:41:28 -0700, Andres Freund wrote:
Tom, do you think this is roughly going in the right direction?
I've not had time to look at this patch, I'm afraid. If you still
want me to, I can make time in a day or so.
That'd be greatly appreciated. I think polishing the POC up to a
committable patch will be a considerable amount of work, and I'd like
design feedback before that.
I'm working on these. Atm ExecMakeTableFunctionResult() resides in
execQual.c - I'm inlining it into nodeFunctionscan.c now, because
there's no other callers, and having it separate seems to bring no
benefit.

Please speak up soon if you disagree.
I think ExecMakeTableFunctionResult was placed in execQual.c because
it seemed to belong there alongside the support for SRFs in tlists.
If that's going away then there's no good reason not to move the logic
to where it's used.
Cool, then we agree.
Greetings,
Andres Freund
On 23/08/16 09:40, Andres Freund wrote:
Hi,
as noted in [1] I started hacking on removing the current implementation
of SRFs in the targetlist (tSRFs henceforth). IM discussion brought the
need for a description of the problem, need and approach to light.

There are several reasons for wanting to get rid of tSRFs. The primary
ones in my opinion are that the current behaviour of several SRFs in one
targetlist is confusing, and that the implementation burden currently is
all over the executor. Especially the latter is what is motivating me
working on this, because it blocks my work on making the executor faster
for queries involving significant amounts of tuples. Batching is hard
if random places in the querytree can increase the number of tuples.

The basic idea, hinted at in several threads, is, at plan time, to convert a query like
SELECT generate_series(1, 10);
into
SELECT generate_series FROM ROWS FROM(generate_series(1, 10));

thereby avoiding the complications in the executor (c.f. execQual.c
handling of isDone/ExprMultipleResult and supporting code in many
executor nodes / node->*.ps.ps_TupFromTlist).

There are several design questions along the way:
1) How to deal with the least-common-multiple behaviour of tSRFs. E.g.
=# SELECT generate_series(1, 3), generate_series(1,2);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 3 │ 2 │
└─────────────────┴─────────────────┘
(6 rows)
but
=# SELECT generate_series(1, 3), generate_series(5,7);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 5 │
│ 2 │ 6 │
│ 3 │ 7 │
└─────────────────┴─────────────────┘

Discussion in this thread came, according to my reading, to the
conclusion that that behaviour is just confusing and that the ROWS FROM
behaviour of
=# SELECT * FROM ROWS FROM(generate_series(1, 3), generate_series(1,2));
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ (null) │
└─────────────────┴─────────────────┘
(3 rows)

makes more sense.
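The two behaviours can be sketched outside SQL. Here is a toy Python model (the function names `tsrf_rows` and `rows_from` are mine, not PostgreSQL's): the historical targetlist behaviour cycles each SRF until the least common multiple of the result lengths is reached, while ROWS FROM zips the results positionally, padding the shorter one with NULL.

```python
from math import gcd
from itertools import cycle, zip_longest

def tsrf_rows(*cols):
    # historical targetlist-SRF behaviour: run for LCM of the lengths,
    # each column cycling through its values until all end together
    n = 1
    for c in cols:
        n = n * len(c) // gcd(n, len(c))
    iters = [cycle(c) for c in cols]
    return [tuple(next(it) for it in iters) for _ in range(n)]

def rows_from(*cols):
    # ROWS FROM behaviour: zip positionally, pad with NULL (None)
    return [tuple(row) for row in zip_longest(*cols)]

# matches the first table above: six rows for lengths 3 and 2
assert tsrf_rows([1, 2, 3], [1, 2]) == \
    [(1, 1), (2, 2), (3, 1), (1, 2), (2, 1), (3, 2)]
# matches the second: equal lengths run in lockstep
assert tsrf_rows([1, 2, 3], [5, 6, 7]) == [(1, 5), (2, 6), (3, 7)]
# ROWS FROM instead pads: three rows, NULL in the short column
assert rows_from([1, 2, 3], [1, 2]) == [(1, 1), (2, 2), (3, None)]
```

This makes concrete why the LCM rule is surprising: the row count depends on whether the input lengths happen to share factors.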
I had always implicitly assumed that having 2 generated sequences would
act as equivalent to:
SELECT
sa,
sb
FROM
ROWS FROM(generate_series(1, 3)) AS sa,
ROWS FROM(generate_series(5, 7)) AS sb
ORDER BY
sa,
sb;
sa | sb
----+----
1 | 5
1 | 6
1 | 7
2 | 5
2 | 6
2 | 7
3 | 5
3 | 6
3 | 7
Obviously I was wrong - but to me, my implicit assumption makes more sense!
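For comparison, the implicit assumption described above corresponds to a plain cross join of the two series, which in Python terms is just `itertools.product` (a sketch of the expected semantics, not of how the executor works):

```python
from itertools import product

# cross-join intuition: every value of the first series paired with
# every value of the second, 3 x 3 = 9 rows
pairs = list(product([1, 2, 3], [5, 6, 7]))

assert len(pairs) == 9
assert pairs[0] == (1, 5)    # first row of the expected output
assert pairs[-1] == (3, 7)   # last row of the expected output
```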
[...]
Cheers,
Gavin
On 2016-08-17 17:41:28 -0700, Andres Freund wrote:
a) Allow avoiding the use of a tuplestore for SFRM_ValuePerCall SRFs in
ROWS FROM - otherwise our performance would regress noticeably in some cases.
To demonstrate the problem:
master:
=# COPY (SELECT generate_series(1, 50000000)) TO '/dev/null';
COPY 50000000
Time: 6859.830 ms
=# COPY (SELECT * FROM generate_series(1, 50000000)) TO '/dev/null';
COPY 50000000
Time: 11314.507 ms
getting rid of the materialization indeed fixes the problem:
dev:
=# COPY (SELECT generate_series(1, 50000000)) TO
'/dev/null';
COPY 50000000
Time: 5757.547 ms
=# COPY (SELECT * FROM generate_series(1, 50000000)) TO
'/dev/null';
COPY 50000000
Time: 5842.524 ms
I've currently implemented this by having nodeFunctionscan.c store
enough state in FunctionScanPerFuncState to continue the ValuePerCall
protocol. That all seems to work well, without big problems.
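The continuation idea can be caricatured with a generator held in per-function state. This is purely illustrative; `PerFuncState` and `fetch_next` are made-up names loosely echoing FunctionScanPerFuncState, not the actual C structures:

```python
# hypothetical sketch: keep the live iterator in per-function state so
# each fetch resumes the ValuePerCall protocol instead of materializing
# the whole result set up front
class PerFuncState:
    def __init__(self, make_iter):
        self.make_iter = make_iter   # how to (re)start the function
        self.it = None               # live ValuePerCall iterator
        self.done = False

    def fetch_next(self):
        if self.it is None:
            self.it = self.make_iter()
        try:
            return next(self.it)
        except StopIteration:
            self.done = True
            return None

state = PerFuncState(lambda: iter(range(1, 4)))
out = []
while True:
    row = state.fetch_next()
    if row is None:
        break
    out.append(row)
assert out == [1, 2, 3]
```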
The open issue here is whether / how we want to deal with
EXEC_FLAG_REWIND and EXEC_FLAG_BACKWARD. Currently that, with some added
complications, is implemented in nodeFunctionscan.c itself. But for
ValuePerCall SRFs that doesn't directly work anymore.
ISTM that the easiest way here is actually to rip out support for
EXEC_FLAG_REWIND/EXEC_FLAG_BACKWARD from nodeFunctionscan.c. If the plan
requires that, the planner will slap a Material node on top, which will
even be more efficient when ROWS FROM with multiple SRFs, or WITH
ORDINALITY, is used. Alternatively we could continue to create a
tuplestore for ValuePerCall when eflags indicates that's required, but
personally I don't see that as an advantageous course.
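The trade-off behind dropping EXEC_FLAG_REWIND/EXEC_FLAG_BACKWARD maps onto a familiar distinction: a ValuePerCall SRF behaves like a generator, which cannot be rewound or read backward once advanced, whereas a materialized tuplestore behaves like a buffered list, which can. A loose Python analogy:

```python
def value_per_call(n):
    # ValuePerCall SRF: produces one row per call, keeps only its own
    # position; nothing is buffered by the caller
    i = 0
    while i < n:
        i += 1
        yield i

gen = value_per_call(3)
first = next(gen)   # streaming works fine going forward...
# ...but there is no way to rewind `gen` or step it backward

# to support rewind / backward scan, buffer the output first -- the role
# a Material node (or tuplestore) plays above the scan
materialized = list(value_per_call(3))
backward = materialized[::-1]   # backward scan is now trivial
```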
Comments?
Andres
Hi,
Attached is a significantly updated patch series (see the mail one up
for details about what this is, I don't want to quote it in its
entirety).
There are still some corner cases (DISTINCT + SRF, UNION/INTERSECT with
SRF) to test / implement and a good bit of code cleanup to do. But
feature-wise it's pretty much complete.
It currently consists of the following patches:
0001-Add-some-more-targetlist-srf-tests.patch
Add some tests.
0002-Shore-up-some-weird-corner-cases-for-targetlist-SRFs.patch
Forbid UPDATE ... SET foo = SRF() and ORDER BY / GROUP BY containing
SRFs that would change the number of returned rows. Without the
latter e.g. SELECT 1 ORDER BY generate_series(1,10); returns 10 rows.
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.
In my present patch I've *ripped out* the support for materialization
in nodeFunctionscan.c entirely. That means that rescans referencing
volatile functions can change their behaviour (if a function is
rescanned without having its parameters changed), and that native
backward scan support is gone. I don't think that's actually an issue.
This temporarily duplicates a bit of code from execQual.c, but that's
removed again in 0006.
0004-Allow-ROWS-FROM-to-return-functions-as-single-record.patch
To allow transforming SELECT record_srf(); nodeFunctionscan.c needs to
learn to return the result as a record. I chose
ROWS FROM (record_srf() AS ()) as the syntax for that. It doesn't
necessarily have to be SQL exposed, but it does make testing easier.
0005-Basic-implementation-of-targetlist-SRFs-via-ROWS-FRO.patch
Convert all targetlist SRFs to a ROWS FROM() expression, referencing the
original query (without the SRFs) via a subquery.
Note that this changes the behaviour of queries in a few cases. Namely
the "least-common-multiple" behaviour of targetlist SRFs is gone, a
function can now accept multiple set returning functions as input, SRF
references in COALESCE / CASE are evaluated a bit more "eagerly". The
last one I have to think a bit more about.
0006-Remove-unused-code-related-to-targetlist-SRFs.patch
Now that there's no targetlist SRFs at execution time anymore, rip out
executor and planner code related to that. There's possibly more, but
that's what I could find in a couple passes of searching around.
This actually speeds up tpch-h queries by roughly 4% for me.
My next steps are to clean up the code a bit more and to increase
the regression test coverage.
Input is greatly welcome.
Greetings,
Andres Freund
Attachments:
0001-Add-some-more-targetlist-srf-tests.patch (text/x-patch; charset=us-ascii)
From ec5658a5723281bc48d649313ca37507a45f1ca3 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Wed, 3 Aug 2016 18:29:42 -0700
Subject: [PATCH 1/6] Add some more targetlist srf tests.
---
src/test/regress/expected/tsrf.out | 214 +++++++++++++++++++++++++++++++++++++
src/test/regress/parallel_schedule | 2 +-
src/test/regress/serial_schedule | 1 +
src/test/regress/sql/tsrf.sql | 67 ++++++++++++
4 files changed, 283 insertions(+), 1 deletion(-)
create mode 100644 src/test/regress/expected/tsrf.out
create mode 100644 src/test/regress/sql/tsrf.sql
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
new file mode 100644
index 0000000..e4e6059
--- /dev/null
+++ b/src/test/regress/expected/tsrf.out
@@ -0,0 +1,214 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+-- simple srf
+SELECT generate_series(1, 3);
+ generate_series
+-----------------
+ 1
+ 2
+ 3
+(3 rows)
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 3
+ 2 | 4
+ 3 | 5
+(3 rows)
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 1
+ 2 | 2
+ 1 | 3
+ 2 | 4
+(4 rows)
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+ generate_series
+-----------------
+ 1
+ 1
+ 2
+ 1
+ 2
+ 3
+(6 rows)
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+ERROR: functions and operators can take at most one set argument
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+-- SRF output order of sorting is maintained, if SRF is not referenced
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+ id | g
+----+---
+ 3 | 1
+ 3 | 2
+ 3 | 3
+ 2 | 1
+ 2 | 2
+ 2 | 3
+ 1 | 1
+ 1 | 2
+ 1 | 3
+(9 rows)
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+ id | g
+----+---
+ 1 | 3
+ 1 | 2
+ 1 | 1
+ 2 | 3
+ 2 | 2
+ 2 | 1
+ 3 | 3
+ 3 | 2
+ 3 | 1
+(9 rows)
+
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3) DESC;
+ id | g
+----+---
+ 1 | 3
+ 1 | 2
+ 1 | 1
+ 2 | 3
+ 2 | 2
+ 2 | 1
+ 3 | 3
+ 3 | 2
+ 3 | 1
+(9 rows)
+
+-- it's weird to have ORDER BYs that increase the number of results
+SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
+ id
+----
+ 1
+ 1
+ 1
+ 2
+ 2
+ 2
+ 3
+ 3
+ 3
+(9 rows)
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa;
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 1 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(3 rows)
+
+-- unless referenced in GROUP BY clause
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 2 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(2 rows)
+
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 2 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(2 rows)
+
+-- it's weird to GROUP BYs that increase the number of results
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
+ dataa | count | min | max
+-------+-------+-----+-----
+ b | 1 | 3 | 3
+ a | 2 | 1 | 2
+(2 rows)
+
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+ dataa | count | min | max
+-------+-------+-----+-----
+ b | 2 | 3 | 3
+ a | 4 | 1 | 2
+ b | 1 | 3 | 3
+ a | 2 | 1 | 2
+(4 rows)
+
+-- SRFs are computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+ id | lag | count | generate_series
+----+-----+-------+-----------------
+ 1 | | 3 | 1
+ 1 | | 3 | 2
+ 1 | | 3 | 3
+ 2 | 1 | 3 | 1
+ 2 | 1 | 3 | 2
+ 2 | 1 | 3 | 3
+ 3 | 2 | 3 | 1
+ 3 | 2 | 3 | 2
+ 3 | 2 | 3 | 3
+(9 rows)
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+ dataa | count | min | max | generate_series
+-------+-------+-----+-----+-----------------
+ b | 1 | 3 | 3 | 1
+ a | 2 | 1 | 2 | 1
+ b | 1 | 3 | 3 | 2
+ a | 2 | 1 | 2 | 2
+ b | 1 | 3 | 3 | 3
+ a | 2 | 1 | 2 | 3
+(6 rows)
+
+-- grouping sets are a bit special, they produce NULLs in columns not actually NULL
+SELECT dataa, datab b, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY 1,2,3;
+ dataa | b | count
+-------+-----+-------
+ a | bar | 1
+ a | foo | 1
+ a | | 2
+ b | bar | 1
+ b | | 1
+ | bar | 2
+ | foo | 1
+ | | 3
+(8 rows)
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+ data
+------
+ 1
+ 2
+ 3
+ 4
+ 5
+(5 rows)
+
+-- nonsensically that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+-- SRFs are now allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+ERROR: set-valued function called in context that cannot accept a set
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+ERROR: set-valued function called in context that cannot accept a set
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+ERROR: set-valued function called in context that cannot accept a set
+-- test DISTINCT ON, LIMIT/OFFSET, correlated subqueries
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 1cb5dfc..4135aae 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -92,7 +92,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
test: alter_generic alter_operator misc psql async dbsize misc_functions
# rules cannot run concurrently with any test that creates a view
-test: rules psql_crosstab amutils
+test: rules psql_crosstab amutils tsrf
# run by itself so it can run parallel workers
test: select_parallel
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 8958d8c..93b4f00 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -127,6 +127,7 @@ test: rules
test: psql_crosstab
test: select_parallel
test: amutils
+test: tsrf
test: select_views
test: portals_p2
test: foreign_key
diff --git a/src/test/regress/sql/tsrf.sql b/src/test/regress/sql/tsrf.sql
new file mode 100644
index 0000000..0ef41ab
--- /dev/null
+++ b/src/test/regress/sql/tsrf.sql
@@ -0,0 +1,67 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+
+-- simple srf
+SELECT generate_series(1, 3);
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+
+-- SRF output order of sorting is maintained, if SRF is not referenced
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3) DESC;
+
+-- it's weird to have ORDER BYs that increase the number of results
+SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa;
+-- unless referenced in GROUP BY clause
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
+
+-- it's weird to GROUP BYs that increase the number of results
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+
+-- SRFs are computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+
+-- grouping sets are a bit special, they produce NULLs in columns not actually NULL
+SELECT dataa, datab b, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY 1,2,3;
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+
+-- nonsensically that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+
+-- SRFs are now allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+
+-- test DISTINCT ON, LIMIT/OFFSET, correlated subqueries
--
2.9.3
0002-Shore-up-some-weird-corner-cases-for-targetlist-SRFs.patch (text/x-patch; charset=us-ascii)
From 4dc1c9e5014c9fb421cd9ecd24200d5668ff651f Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sat, 27 Aug 2016 12:36:13 -0700
Subject: [PATCH 2/6] Shore up some weird corner cases for targetlist SRFs.
---
src/backend/executor/README | 13 +++++--------
src/backend/parser/analyze.c | 8 ++++++++
src/backend/parser/parse_clause.c | 7 +++++++
src/test/regress/expected/tsrf.out | 36 ++++++++++++------------------------
src/test/regress/sql/tsrf.sql | 6 +++---
5 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/src/backend/executor/README b/src/backend/executor/README
index 8afa1e3..141ddc2 100644
--- a/src/backend/executor/README
+++ b/src/backend/executor/README
@@ -192,11 +192,8 @@ relations, such as a ValuesScan or FunctionScan. For these, since there
is no equivalent of TID, the only practical solution seems to be to include
the entire row value in the join output row.
-We disallow set-returning functions in the targetlist of SELECT FOR UPDATE,
-so as to ensure that at most one tuple can be returned for any particular
-set of scan tuples. Otherwise we'd get duplicates due to the original
-query returning the same set of scan tuples multiple times. (Note: there
-is no explicit prohibition on SRFs in UPDATE, but the net effect will be
-that only the first result row of an SRF counts, because all subsequent
-rows will result in attempts to re-update an already updated target row.
-This is historical behavior and seems not worth changing.)
+We disallow set-returning functions in the targetlist of UPDATE and
+SELECT FOR UPDATE, so as to ensure that at most one tuple can be
+returned for any particular set of scan tuples. Otherwise we'd get
+duplicates due to the original query returning the same set of scan
+tuples multiple times.
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index eac86cc..cf5bc86 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -2233,6 +2233,14 @@ transformUpdateTargetList(ParseState *pstate, List *origTlist)
RelationGetRelationName(pstate->p_target_relation)),
parser_errposition(pstate, origTarget->location)));
+ /* nonsensical, we wouldn't know which of the returned rows to use */
+ if (expression_returns_set((Node *) tle->expr))
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set"),
+ parser_errposition(pstate,
+ exprLocation((Node *) tle->expr))));
+
updateTargetListEntry(pstate, tle, origTarget->name,
attrno,
origTarget->indirection,
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 751de4b..9b7fcc3 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -1794,6 +1794,13 @@ findTargetlistEntrySQL99(ParseState *pstate, Node *node, List **tlist,
return tle;
}
+ if (expression_returns_set(expr))
+ /* FIXME: decent error message */
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("GROUP BY / ORDER BY cannot reference set returning function not present in target list"),
+ parser_errposition(pstate, exprLocation(expr))));
+
/*
* If no matches, construct a new target entry which is appended to the
* end of the target list. This target is given resjunk = TRUE so that it
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index e4e6059..f520a91 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -90,21 +90,11 @@ SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3)
3 | 1
(9 rows)
--- it's weird to have ORDER BYs that increase the number of results
+-- it's weird to have ORDER BYs that increase the number of results (error)
SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
- id
-----
- 1
- 1
- 1
- 2
- 2
- 2
- 3
- 3
- 3
-(9 rows)
-
+ERROR: GROUP BY / ORDER BY cannot reference set returning function not present in target list
+LINE 1: SELECT few.id FROM few ORDER BY id, generate_series(1,3) DES...
+ ^
-- SRFs are computed after aggregation
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa;
dataa | count | min | max | unnest
@@ -129,7 +119,7 @@ SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few
a | 1 | 1 | 1 | 3
(2 rows)
--- it's weird to GROUP BYs that increase the number of results
+-- it's weird to GROUP BYs that increase the number of results (2nd is an error)
SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
dataa | count | min | max
-------+-------+-----+-----
@@ -138,14 +128,9 @@ SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
(2 rows)
SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
- dataa | count | min | max
--------+-------+-----+-----
- b | 2 | 3 | 3
- a | 4 | 1 | 2
- b | 1 | 3 | 3
- a | 2 | 1 | 2
-(4 rows)
-
+ERROR: GROUP BY / ORDER BY cannot reference set returning function not present in target list
+LINE 1: ...*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1...
+ ^
-- SRFs are computed after window functions
SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
id | lag | count | generate_series
@@ -200,8 +185,11 @@ SELECT * FROM fewmore;
5
(5 rows)
--- nonsensically that seems to be allowed
+-- it'd not be clear which value to use for the update (error)
UPDATE fewmore SET data = generate_series(4,9);
+ERROR: set-valued function called in context that cannot accept a set
+LINE 1: UPDATE fewmore SET data = generate_series(4,9);
+ ^
-- SRFs are now allowed in RETURNING
INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
ERROR: set-valued function called in context that cannot accept a set
diff --git a/src/test/regress/sql/tsrf.sql b/src/test/regress/sql/tsrf.sql
index 0ef41ab..b52ec07 100644
--- a/src/test/regress/sql/tsrf.sql
+++ b/src/test/regress/sql/tsrf.sql
@@ -27,7 +27,7 @@ SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3) DESC;
--- it's weird to have ORDER BYs that increase the number of results
+-- it's weird to have ORDER BYs that increase the number of results (error)
SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
-- SRFs are computed after aggregation
@@ -36,7 +36,7 @@ SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
--- it's weird to GROUP BYs that increase the number of results
+-- it's weird to GROUP BYs that increase the number of results (2nd is an error)
SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
@@ -54,7 +54,7 @@ CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
INSERT INTO fewmore VALUES(generate_series(4,5));
SELECT * FROM fewmore;
--- nonsensically that seems to be allowed
+-- it'd not be clear which value to use for the update (error)
UPDATE fewmore SET data = generate_series(4,9);
-- SRFs are now allowed in RETURNING
--
2.9.3
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch (text/x-patch; charset=us-ascii)
From 7f46602167b591943fe35a85739a3a1d901aa3f6 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 25 Aug 2016 11:10:05 -0700
Subject: [PATCH 3/6] Avoid materializing SRFs in the FROM list.
So far SFRM_ValuePerCall set-returning functions (and ones optionally
using ValuePerCall) have been eagerly materialized when in the FROM
list (i.e. when nodeFunctionscan.c is used). In contrast, SRFs
in the target list support ValuePerCall. The materialization has a
significant overhead; in a number of simple example cases more than 2x
has been measured.
While that's an annoying performance difference on its own, it becomes
particularly problematic with the upcoming work to implement targetlist
SRFs via ROWS FROM.
Thus implement support for querying SFRM_ValuePerCall SRFs without
materialization.
As supporting backward scans isn't possible without materialization for
ValuePerCall, and the complications of conditionally materializing don't
seem worthwhile, instead drop backward scanning support for FunctionScan
nodes (which will, when necessary, be implemented by the planner by
using a Material node).
This moves the required support code from execQual.c to
nodeFunctionscan.c. While not a clear win now (it requires some
duplication), it becomes more advantageous later on, when SRF support is
removed from execQual.c entirely.
It's worthwhile to call out that the removed materialization support
implies a behavioural change in cases where some functions in ROWS FROM
are dependent on a changed PARAM_EXEC parameter and others are not.
Previously only the dependent functions were recomputed; now all are.
TODO:
- add some more tests
- remove ugly double gotos
- add some function header comments
---
src/backend/executor/execAmi.c | 2 -
src/backend/executor/execQual.c | 381 +----------------
src/backend/executor/nodeFunctionscan.c | 685 +++++++++++++++++++++++++------
src/include/executor/executor.h | 9 +-
src/test/regress/expected/pg_lsn.out | 13 +-
src/test/regress/expected/plpgsql.out | 2 +-
src/test/regress/expected/rangefuncs.out | 8 +-
src/test/regress/sql/plpgsql.sql | 2 +-
8 files changed, 584 insertions(+), 518 deletions(-)
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index 2587ef7..ea2f09e 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -475,7 +475,6 @@ ExecSupportsBackwardScan(Plan *node)
case T_SeqScan:
case T_TidScan:
- case T_FunctionScan:
case T_ValuesScan:
case T_CteScan:
return TargetListSupportsBackwardScan(node->targetlist);
@@ -579,7 +578,6 @@ ExecMaterializesOutput(NodeTag plantype)
switch (plantype)
{
case T_Material:
- case T_FunctionScan:
case T_CteScan:
case T_WorkTableScan:
case T_Sort:
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 743e7d6..79589d0 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -91,14 +91,10 @@ static Datum ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
static Datum ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
-static void init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets);
static void ShutdownFuncExpr(Datum arg);
static TupleDesc get_cached_rowtype(Oid type_id, int32 typmod,
TupleDesc *cache_field, ExprContext *econtext);
static void ShutdownTupleDescRef(Datum arg);
-static ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
- List *argList, ExprContext *econtext);
static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
ExprContext *econtext,
Tuplestorestate *resultStore,
@@ -1323,11 +1319,11 @@ GetAttributeByName(HeapTupleHeader tuple, const char *attname, bool *isNull)
}
/*
- * init_fcache - initialize a FuncExprState node during first use
+ * ExecInitFcache - initialize a FuncExprState node during first use
*/
-static void
-init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets)
+void
+ExecInitFcache(Oid foid, Oid input_collation, FuncExprState *fcache,
+ MemoryContext fcacheCxt, bool needDescForSets)
{
AclResult aclresult;
@@ -1503,7 +1499,7 @@ ShutdownTupleDescRef(Datum arg)
/*
* Evaluate arguments for a function.
*/
-static ExprDoneCond
+ExprDoneCond
ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList,
ExprContext *econtext)
@@ -2052,353 +2048,6 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
}
-/*
- * ExecMakeTableFunctionResult
- *
- * Evaluate a table function, producing a materialized result in a Tuplestore
- * object.
- */
-Tuplestorestate *
-ExecMakeTableFunctionResult(ExprState *funcexpr,
- ExprContext *econtext,
- MemoryContext argContext,
- TupleDesc expectedDesc,
- bool randomAccess)
-{
- Tuplestorestate *tupstore = NULL;
- TupleDesc tupdesc = NULL;
- Oid funcrettype;
- bool returnsTuple;
- bool returnsSet = false;
- FunctionCallInfoData fcinfo;
- PgStat_FunctionCallUsage fcusage;
- ReturnSetInfo rsinfo;
- HeapTupleData tmptup;
- MemoryContext callerContext;
- MemoryContext oldcontext;
- bool direct_function_call;
- bool first_time = true;
-
- callerContext = CurrentMemoryContext;
-
- funcrettype = exprType((Node *) funcexpr->expr);
-
- returnsTuple = type_is_rowtype(funcrettype);
-
- /*
- * Prepare a resultinfo node for communication. We always do this even if
- * not expecting a set result, so that we can pass expectedDesc. In the
- * generic-expression case, the expression doesn't actually get to see the
- * resultinfo, but set it up anyway because we use some of the fields as
- * our own state variables.
- */
- rsinfo.type = T_ReturnSetInfo;
- rsinfo.econtext = econtext;
- rsinfo.expectedDesc = expectedDesc;
- rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize | SFRM_Materialize_Preferred);
- if (randomAccess)
- rsinfo.allowedModes |= (int) SFRM_Materialize_Random;
- rsinfo.returnMode = SFRM_ValuePerCall;
- /* isDone is filled below */
- rsinfo.setResult = NULL;
- rsinfo.setDesc = NULL;
-
- /*
- * Normally the passed expression tree will be a FuncExprState, since the
- * grammar only allows a function call at the top level of a table
- * function reference. However, if the function doesn't return set then
- * the planner might have replaced the function call via constant-folding
- * or inlining. So if we see any other kind of expression node, execute
- * it via the general ExecEvalExpr() code; the only difference is that we
- * don't get a chance to pass a special ReturnSetInfo to any functions
- * buried in the expression.
- */
- if (funcexpr && IsA(funcexpr, FuncExprState) &&
- IsA(funcexpr->expr, FuncExpr))
- {
- FuncExprState *fcache = (FuncExprState *) funcexpr;
- ExprDoneCond argDone;
-
- /*
- * This path is similar to ExecMakeFunctionResult.
- */
- direct_function_call = true;
-
- /*
- * Initialize function cache if first time through
- */
- if (fcache->func.fn_oid == InvalidOid)
- {
- FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
-
- init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, false);
- }
- returnsSet = fcache->func.fn_retset;
- InitFunctionCallInfoData(fcinfo, &(fcache->func),
- list_length(fcache->args),
- fcache->fcinfo_data.fncollation,
- NULL, (Node *) &rsinfo);
-
- /*
- * Evaluate the function's argument list.
- *
- * We can't do this in the per-tuple context: the argument values
- * would disappear when we reset that context in the inner loop. And
- * the caller's CurrentMemoryContext is typically a query-lifespan
- * context, so we don't want to leak memory there. We require the
- * caller to pass a separate memory context that can be used for this,
- * and can be reset each time through to avoid bloat.
- */
- MemoryContextReset(argContext);
- oldcontext = MemoryContextSwitchTo(argContext);
- argDone = ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
- MemoryContextSwitchTo(oldcontext);
-
- /* We don't allow sets in the arguments of the table function */
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function and act like it returned NULL (or an empty
- * set, in the returns-set case).
- */
- if (fcache->func.fn_strict)
- {
- int i;
-
- for (i = 0; i < fcinfo.nargs; i++)
- {
- if (fcinfo.argnull[i])
- goto no_function_result;
- }
- }
- }
- else
- {
- /* Treat funcexpr as a generic expression */
- direct_function_call = false;
- InitFunctionCallInfoData(fcinfo, NULL, 0, InvalidOid, NULL, NULL);
- }
-
- /*
- * Switch to short-lived context for calling the function or expression.
- */
- MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
-
- /*
- * Loop to handle the ValuePerCall protocol (which is also the same
- * behavior needed in the generic ExecEvalExpr path).
- */
- for (;;)
- {
- Datum result;
-
- CHECK_FOR_INTERRUPTS();
-
- /*
- * reset per-tuple memory context before each call of the function or
- * expression. This cleans up any local memory the function may leak
- * when called.
- */
- ResetExprContext(econtext);
-
- /* Call the function or expression one time */
- if (direct_function_call)
- {
- pgstat_init_function_usage(&fcinfo, &fcusage);
-
- fcinfo.isnull = false;
- rsinfo.isDone = ExprSingleResult;
- result = FunctionCallInvoke(&fcinfo);
-
- pgstat_end_function_usage(&fcusage,
- rsinfo.isDone != ExprMultipleResult);
- }
- else
- {
- result = ExecEvalExpr(funcexpr, econtext,
- &fcinfo.isnull, &rsinfo.isDone);
- }
-
- /* Which protocol does function want to use? */
- if (rsinfo.returnMode == SFRM_ValuePerCall)
- {
- /*
- * Check for end of result set.
- */
- if (rsinfo.isDone == ExprEndResult)
- break;
-
- /*
- * If first time through, build tuplestore for result. For a
- * scalar function result type, also make a suitable tupdesc.
- */
- if (first_time)
- {
- oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
- tupstore = tuplestore_begin_heap(randomAccess, false, work_mem);
- rsinfo.setResult = tupstore;
- if (!returnsTuple)
- {
- tupdesc = CreateTemplateTupleDesc(1, false);
- TupleDescInitEntry(tupdesc,
- (AttrNumber) 1,
- "column",
- funcrettype,
- -1,
- 0);
- rsinfo.setDesc = tupdesc;
- }
- MemoryContextSwitchTo(oldcontext);
- }
-
- /*
- * Store current resultset item.
- */
- if (returnsTuple)
- {
- if (!fcinfo.isnull)
- {
- HeapTupleHeader td = DatumGetHeapTupleHeader(result);
-
- if (tupdesc == NULL)
- {
- /*
- * This is the first non-NULL result from the
- * function. Use the type info embedded in the
- * rowtype Datum to look up the needed tupdesc. Make
- * a copy for the query.
- */
- oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
- tupdesc = lookup_rowtype_tupdesc_copy(HeapTupleHeaderGetTypeId(td),
- HeapTupleHeaderGetTypMod(td));
- rsinfo.setDesc = tupdesc;
- MemoryContextSwitchTo(oldcontext);
- }
- else
- {
- /*
- * Verify all later returned rows have same subtype;
- * necessary in case the type is RECORD.
- */
- if (HeapTupleHeaderGetTypeId(td) != tupdesc->tdtypeid ||
- HeapTupleHeaderGetTypMod(td) != tupdesc->tdtypmod)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("rows returned by function are not all of the same row type")));
- }
-
- /*
- * tuplestore_puttuple needs a HeapTuple not a bare
- * HeapTupleHeader, but it doesn't need all the fields.
- */
- tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
- tmptup.t_data = td;
-
- tuplestore_puttuple(tupstore, &tmptup);
- }
- else
- {
- /*
- * NULL result from a tuple-returning function; expand it
- * to a row of all nulls. We rely on the expectedDesc to
- * form such rows. (Note: this would be problematic if
- * tuplestore_putvalues saved the tdtypeid/tdtypmod from
- * the provided descriptor, since that might not match
- * what we get from the function itself. But it doesn't.)
- */
- int natts = expectedDesc->natts;
- bool *nullflags;
-
- nullflags = (bool *) palloc(natts * sizeof(bool));
- memset(nullflags, true, natts * sizeof(bool));
- tuplestore_putvalues(tupstore, expectedDesc, NULL, nullflags);
- }
- }
- else
- {
- /* Scalar-type case: just store the function result */
- tuplestore_putvalues(tupstore, tupdesc, &result, &fcinfo.isnull);
- }
-
- /*
- * Are we done?
- */
- if (rsinfo.isDone != ExprMultipleResult)
- break;
- }
- else if (rsinfo.returnMode == SFRM_Materialize)
- {
- /* check we're on the same page as the function author */
- if (!first_time || rsinfo.isDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("table-function protocol for materialize mode was not followed")));
- /* Done evaluating the set result */
- break;
- }
- else
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("unrecognized table-function returnMode: %d",
- (int) rsinfo.returnMode)));
-
- first_time = false;
- }
-
-no_function_result:
-
- /*
- * If we got nothing from the function (ie, an empty-set or NULL result),
- * we have to create the tuplestore to return, and if it's a
- * non-set-returning function then insert a single all-nulls row. As
- * above, we depend on the expectedDesc to manufacture the dummy row.
- */
- if (rsinfo.setResult == NULL)
- {
- MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
- tupstore = tuplestore_begin_heap(randomAccess, false, work_mem);
- rsinfo.setResult = tupstore;
- if (!returnsSet)
- {
- int natts = expectedDesc->natts;
- bool *nullflags;
-
- MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
- nullflags = (bool *) palloc(natts * sizeof(bool));
- memset(nullflags, true, natts * sizeof(bool));
- tuplestore_putvalues(tupstore, expectedDesc, NULL, nullflags);
- }
- }
-
- /*
- * If function provided a tupdesc, cross-check it. We only really need to
- * do this for functions returning RECORD, but might as well do it always.
- */
- if (rsinfo.setDesc)
- {
- tupledesc_match(expectedDesc, rsinfo.setDesc);
-
- /*
- * If it is a dynamically-allocated TupleDesc, free it: it is
- * typically allocated in a per-query context, so we must avoid
- * leaking it across multiple usages.
- */
- if (rsinfo.setDesc->tdrefcount == -1)
- FreeTupleDesc(rsinfo.setDesc);
- }
-
- MemoryContextSwitchTo(callerContext);
-
- /* All done, pass back the tuplestore */
- return rsinfo.setResult;
-}
-
-
/* ----------------------------------------------------------------
* ExecEvalFunc
* ExecEvalOper
@@ -2422,8 +2071,8 @@ ExecEvalFunc(FuncExprState *fcache,
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
- init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ ExecInitFcache(func->funcid, func->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
/*
* We need to invoke ExecMakeFunctionResult if either the function itself
@@ -2457,8 +2106,8 @@ ExecEvalOper(FuncExprState *fcache,
OpExpr *op = (OpExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
- init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ ExecInitFcache(op->opfuncid, op->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
/*
* We need to invoke ExecMakeFunctionResult if either the function itself
@@ -2511,8 +2160,8 @@ ExecEvalDistinct(FuncExprState *fcache,
{
DistinctExpr *op = (DistinctExpr *) fcache->xprstate.expr;
- init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ ExecInitFcache(op->opfuncid, op->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
Assert(!fcache->func.fn_retset);
}
@@ -2588,8 +2237,8 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
*/
if (sstate->fxprstate.func.fn_oid == InvalidOid)
{
- init_fcache(opexpr->opfuncid, opexpr->inputcollid, &sstate->fxprstate,
- econtext->ecxt_per_query_memory, true);
+ ExecInitFcache(opexpr->opfuncid, opexpr->inputcollid, &sstate->fxprstate,
+ econtext->ecxt_per_query_memory, true);
Assert(!sstate->fxprstate.func.fn_retset);
}
@@ -3850,8 +3499,8 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
{
NullIfExpr *op = (NullIfExpr *) nullIfExpr->xprstate.expr;
- init_fcache(op->opfuncid, op->inputcollid, nullIfExpr,
- econtext->ecxt_per_query_memory, true);
+ ExecInitFcache(op->opfuncid, op->inputcollid, nullIfExpr,
+ econtext->ecxt_per_query_memory, true);
Assert(!nullIfExpr->func.fn_retset);
}
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index a03f6e7..4885f75 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -22,12 +22,20 @@
*/
#include "postgres.h"
+#include "access/htup_details.h"
#include "catalog/pg_type.h"
#include "executor/nodeFunctionscan.h"
#include "funcapi.h"
+#include "miscadmin.h"
#include "nodes/nodeFuncs.h"
+#include "parser/parse_coerce.h"
+#include "pgstat.h"
#include "utils/builtins.h"
+#include "utils/expandeddatum.h"
+#include "utils/lsyscache.h"
+#include "utils/typcache.h"
#include "utils/memutils.h"
+#include "utils/tuplestore.h"
/*
@@ -39,12 +47,19 @@ typedef struct FunctionScanPerFuncState
TupleDesc tupdesc; /* desc of the function result type */
int colcount; /* expected number of result columns */
Tuplestorestate *tstore; /* holds the function result set */
- int64 rowcount; /* # of rows in result set, -1 if not known */
TupleTableSlot *func_slot; /* function result slot (or NULL) */
+ bool started;
+ bool returnsTuple;
+ FunctionCallInfoData fcinfo;
+ ReturnSetInfo rsinfo;
} FunctionScanPerFuncState;
static TupleTableSlot *FunctionNext(FunctionScanState *node);
-
+static void ExecBeginFunctionResult(FunctionScanState *node,
+ FunctionScanPerFuncState *perfunc);
+static void ExecNextFunctionResult(FunctionScanState *node,
+ FunctionScanPerFuncState *perfunc);
+static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
/* ----------------------------------------------------------------
* Scan Support
@@ -63,7 +78,6 @@ FunctionNext(FunctionScanState *node)
ScanDirection direction;
TupleTableSlot *scanslot;
bool alldone;
- int64 oldpos;
int funcno;
int att;
@@ -74,59 +88,39 @@ FunctionNext(FunctionScanState *node)
direction = estate->es_direction;
scanslot = node->ss.ss_ScanTupleSlot;
+ Assert(ScanDirectionIsForward(direction));
+
if (node->simple)
{
/*
* Fast path for the trivial case: the function return type and scan
* result type are the same, so we fetch the function result straight
- * into the scan result slot. No need to update ordinality or
- * rowcounts either.
+ * into the scan result slot. No need to update ordinality either.
*/
- Tuplestorestate *tstore = node->funcstates[0].tstore;
+ FunctionScanPerFuncState *fs = &node->funcstates[0];
/*
- * If first time through, read all tuples from function and put them
- * in a tuplestore. Subsequent calls just fetch tuples from
- * tuplestore.
+ * If first time through, call the SRF. Subsequent calls read from a
+ * tuplestore (for SFRM_Materialize) or call the function again (if
+ * SFRM_ValuePerCall).
*/
- if (tstore == NULL)
- {
- node->funcstates[0].tstore = tstore =
- ExecMakeTableFunctionResult(node->funcstates[0].funcexpr,
- node->ss.ps.ps_ExprContext,
- node->argcontext,
- node->funcstates[0].tupdesc,
- node->eflags & EXEC_FLAG_BACKWARD);
+ if (!fs->started)
+ ExecBeginFunctionResult(node, fs);
+ else
+ ExecNextFunctionResult(node, fs);
- /*
- * paranoia - cope if the function, which may have constructed the
- * tuplestore itself, didn't leave it pointing at the start. This
- * call is fast, so the overhead shouldn't be an issue.
- */
- tuplestore_rescan(tstore);
- }
+ scanslot = fs->func_slot;
- /*
- * Get the next tuple from tuplestore.
- */
- (void) tuplestore_gettupleslot(tstore,
- ScanDirectionIsForward(direction),
- false,
- scanslot);
return scanslot;
}
/*
- * Increment or decrement ordinal counter before checking for end-of-data,
- * so that we can move off either end of the result by 1 (and no more than
- * 1) without losing correct count. See PortalRunSelect for why we can
- * assume that we won't be called repeatedly in the end-of-data state.
+ * Increment ordinal counter before checking for end-of-data, so that we
+ * can move off the end of the result by 1 (and no more than 1) without
+ * losing correct count. See PortalRunSelect for why we can assume that
+ * we won't be called repeatedly in the end-of-data state.
*/
- oldpos = node->ordinal;
- if (ScanDirectionIsForward(direction))
- node->ordinal++;
- else
- node->ordinal--;
+ node->ordinal++;
/*
* Main loop over functions.
@@ -144,55 +138,18 @@ FunctionNext(FunctionScanState *node)
int i;
/*
- * If first time through, read all tuples from function and put them
- * in a tuplestore. Subsequent calls just fetch tuples from
- * tuplestore.
+ * If first time through, call the SRF. Subsequent calls read from a
+ * tuplestore (for SFRM_Materialize) or call the function again (if
+ * SFRM_ValuePerCall).
*/
- if (fs->tstore == NULL)
- {
- fs->tstore =
- ExecMakeTableFunctionResult(fs->funcexpr,
- node->ss.ps.ps_ExprContext,
- node->argcontext,
- fs->tupdesc,
- node->eflags & EXEC_FLAG_BACKWARD);
-
- /*
- * paranoia - cope if the function, which may have constructed the
- * tuplestore itself, didn't leave it pointing at the start. This
- * call is fast, so the overhead shouldn't be an issue.
- */
- tuplestore_rescan(fs->tstore);
- }
-
- /*
- * Get the next tuple from tuplestore.
- *
- * If we have a rowcount for the function, and we know the previous
- * read position was out of bounds, don't try the read. This allows
- * backward scan to work when there are mixed row counts present.
- */
- if (fs->rowcount != -1 && fs->rowcount < oldpos)
- ExecClearTuple(fs->func_slot);
+ if (!fs->started)
+ ExecBeginFunctionResult(node, fs);
else
- (void) tuplestore_gettupleslot(fs->tstore,
- ScanDirectionIsForward(direction),
- false,
- fs->func_slot);
+ ExecNextFunctionResult(node, fs);
if (TupIsNull(fs->func_slot))
{
/*
- * If we ran out of data for this function in the forward
- * direction then we now know how many rows it returned. We need
- * to know this in order to handle backwards scans. The row count
- * we store is actually 1+ the actual number, because we have to
- * position the tuplestore 1 off its end sometimes.
- */
- if (ScanDirectionIsForward(direction) && fs->rowcount == -1)
- fs->rowcount = node->ordinal;
-
- /*
* populate the result cols with nulls
*/
for (i = 0; i < fs->colcount; i++)
@@ -307,21 +264,12 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
scanstate->ordinality = node->funcordinality;
scanstate->nfuncs = nfuncs;
- if (nfuncs == 1 && !node->funcordinality)
- scanstate->simple = true;
- else
+ if (nfuncs > 1 || node->funcordinality)
scanstate->simple = false;
+ else
+ scanstate->simple = true;
- /*
- * Ordinal 0 represents the "before the first row" position.
- *
- * We need to track ordinal position even when not adding an ordinality
- * column to the result, in order to handle backwards scanning properly
- * with multiple functions with different result sizes. (We can't position
- * any individual function's tuplestore any more than 1 place beyond its
- * end, so when scanning backwards, we need to know when to start
- * including the function in the scan again.)
- */
+ /* ordinal 0 represents the "before the first row" position */
scanstate->ordinal = 0;
/*
@@ -367,11 +315,12 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
/*
* Don't allocate the tuplestores; the actual calls to the functions
- * do that. NULL means that we have not called the function yet (or
- * need to call it again after a rescan).
+ * do that if necessary. started = false means that we have not
+ * called the function yet (or need to call it again after a rescan).
*/
fs->tstore = NULL;
- fs->rowcount = -1;
+ fs->started = false;
+ fs->rsinfo.setDesc = NULL;
/*
* Now determine if the function returns a simple or composite type,
@@ -390,6 +339,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
Assert(tupdesc->natts >= colcount);
/* Must copy it out of typcache for safety */
tupdesc = CreateTupleDescCopy(tupdesc);
+ fs->returnsTuple = true;
}
else if (functypclass == TYPEFUNC_SCALAR)
{
@@ -404,6 +354,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
TupleDescInitEntryCollation(tupdesc,
(AttrNumber) 1,
exprCollation(funcexpr));
+ fs->returnsTuple = false;
}
else if (functypclass == TYPEFUNC_RECORD)
{
@@ -418,6 +369,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
* case it doesn't.)
*/
BlessTupleDesc(tupdesc);
+ fs->returnsTuple = true;
}
else
{
@@ -439,7 +391,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
ExecSetSlotDescriptor(fs->func_slot, fs->tupdesc);
}
else
- fs->func_slot = NULL;
+ fs->func_slot = scanstate->ss.ss_ScanTupleSlot;
natts += colcount;
i++;
@@ -500,11 +452,13 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
ExecAssignScanProjectionInfo(&scanstate->ss);
/*
- * Create a memory context that ExecMakeTableFunctionResult can use to
- * evaluate function arguments in. We can't use the per-tuple context for
- * this because it gets reset too often; but we don't want to leak
- * evaluation results into the query-lifespan context either. We just
- * need one context, because we evaluate each function separately.
+ * Create a memory context that is used to evaluate function arguments in.
+ * We can't use the per-tuple context for this because it gets reset too
+ * often; but we don't want to leak evaluation results into the
+ * query-lifespan context either.  We currently just use one context
+ * for all functions, since they're evaluated at the same time anyway;
+ * most of the time, creating separate contexts would cost more memory
+ * than being able to reset them separately would save.
*/
scanstate->argcontext = AllocSetContextCreate(CurrentMemoryContext,
"Table function arguments",
@@ -564,58 +518,523 @@ ExecEndFunctionScan(FunctionScanState *node)
void
ExecReScanFunctionScan(FunctionScanState *node)
{
- FunctionScan *scan = (FunctionScan *) node->ss.ps.plan;
- int i;
- Bitmapset *chgparam = node->ss.ps.chgParam;
+ int i;
ExecClearTuple(node->ss.ps.ps_ResultTupleSlot);
+
for (i = 0; i < node->nfuncs; i++)
{
FunctionScanPerFuncState *fs = &node->funcstates[i];
if (fs->func_slot)
ExecClearTuple(fs->func_slot);
+
+ if (node->funcstates[i].tstore != NULL)
+ {
+ tuplestore_end(node->funcstates[i].tstore);
+ node->funcstates[i].tstore = NULL;
+ }
+
+ /*
+ * If it is a dynamically-allocated TupleDesc, free it: it is
+ * typically allocated in a per-query context, so we must avoid
+ * leaking it across multiple usages.
+ */
+ if (fs->rsinfo.setDesc && fs->rsinfo.setDesc->tdrefcount == -1)
+ {
+ FreeTupleDesc(fs->rsinfo.setDesc);
+ fs->rsinfo.setDesc = NULL;
+ }
+
+ fs->started = false;
}
ExecScanReScan(&node->ss);
+ /* Reset ordinality counter */
+ node->ordinal = 0;
+}
+
+
+static void
+ExecBeginFunctionResult(FunctionScanState *node,
+ FunctionScanPerFuncState *perfunc)
+{
+ bool returnsSet = false;
+ MemoryContext callerContext;
+ MemoryContext oldcontext;
+ bool direct_function_call;
+ ExprContext *econtext = node->ss.ps.ps_ExprContext;
+ ExprState *funcexpr = perfunc->funcexpr;
+ Datum result;
+
+ callerContext = CurrentMemoryContext;
+
+ Assert(perfunc->tupdesc != NULL);
+
/*
- * Here we have a choice whether to drop the tuplestores (and recompute
- * the function outputs) or just rescan them. We must recompute if an
- * expression contains changed parameters, else we rescan.
- *
- * XXX maybe we should recompute if the function is volatile? But in
- * general the executor doesn't conditionalize its actions on that.
+ * Prepare a resultinfo node for communication. We always do this even if
+ * not expecting a set result, so that we can pass expectedDesc. In the
+ * generic-expression case, the expression doesn't actually get to see the
+ * resultinfo, but set it up anyway because we use some of the fields as
+ * our own state variables.
*/
- if (chgparam)
+ perfunc->rsinfo.type = T_ReturnSetInfo;
+ perfunc->rsinfo.econtext = econtext;
+ perfunc->rsinfo.expectedDesc = perfunc->tupdesc;
+ perfunc->rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
+ perfunc->rsinfo.returnMode = SFRM_ValuePerCall;
+ /* isDone is filled below */
+ perfunc->rsinfo.setResult = NULL;
+ perfunc->rsinfo.setDesc = NULL;
+ perfunc->tstore = NULL;
+
+ perfunc->started = true;
+
+ /*
+ * Normally the passed expression tree will be a FuncExprState, since the
+ * grammar only allows a function call at the top level of a table
+ * function reference. However, if the function doesn't return set then
+ * the planner might have replaced the function call via constant-folding
+ * or inlining. So if we see any other kind of expression node, execute
+ * it via the general ExecEvalExpr() code; the only difference is that we
+ * don't get a chance to pass a special ReturnSetInfo to any functions
+ * buried in the expression.
+ */
+ if (funcexpr && IsA(funcexpr, FuncExprState) &&
+ IsA(funcexpr->expr, FuncExpr))
{
- ListCell *lc;
+ FuncExprState *fcache = (FuncExprState *) funcexpr;
+ ExprDoneCond argDone;
- i = 0;
- foreach(lc, scan->functions)
+ /*
+ * This path is similar to ExecMakeFunctionResult.
+ */
+ direct_function_call = true;
+
+ /*
+ * Initialize function cache if first time through
+ */
+ if (fcache->func.fn_oid == InvalidOid)
{
- RangeTblFunction *rtfunc = (RangeTblFunction *) lfirst(lc);
+ FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
- if (bms_overlap(chgparam, rtfunc->funcparams))
+ ExecInitFcache(func->funcid, func->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, false);
+ }
+ returnsSet = fcache->func.fn_retset;
+ InitFunctionCallInfoData(perfunc->fcinfo, &(fcache->func),
+ list_length(fcache->args),
+ fcache->fcinfo_data.fncollation,
+ NULL, (Node *) &perfunc->rsinfo);
+
+ /*
+ * Evaluate the function's argument list.
+ *
+ * We can't do this in the per-tuple context: the argument values
+ * would disappear when we reset that context in the inner loop. And
+ * the caller's CurrentMemoryContext is typically a query-lifespan
+ * context, so we don't want to leak memory there. We require the
+ * caller to pass a separate memory context that can be used for this,
+ * and can be reset each time the node is re-scanned.
+ */
+ oldcontext = MemoryContextSwitchTo(node->argcontext);
+ argDone = ExecEvalFuncArgs(&perfunc->fcinfo, fcache->args, econtext);
+ MemoryContextSwitchTo(oldcontext);
+
+ /* We don't allow sets in the arguments of the table function */
+ if (argDone != ExprSingleResult)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ /*
+ * If function is strict, and there are any NULL arguments, skip
+ * calling the function and act like it returned NULL (or an empty
+ * set, in the returns-set case).
+ */
+ if (fcache->func.fn_strict)
+ {
+ int i;
+
+ for (i = 0; i < perfunc->fcinfo.nargs; i++)
{
- if (node->funcstates[i].tstore != NULL)
- {
- tuplestore_end(node->funcstates[i].tstore);
- node->funcstates[i].tstore = NULL;
- }
- node->funcstates[i].rowcount = -1;
+ if (perfunc->fcinfo.argnull[i])
+ goto no_function_result;
}
- i++;
+ }
+ }
+ else
+ {
+ /* Treat funcexpr as a generic expression */
+ direct_function_call = false;
+ InitFunctionCallInfoData(perfunc->fcinfo, NULL, 0, InvalidOid, NULL, NULL);
+ }
+
+ /*
+ * Switch to short-lived context for calling the function or expression.
+ */
+ MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
+
+ /*
+ * reset per-tuple memory context before each call of the function or
+ * expression. This cleans up any local memory the function may leak
+ * when called.
+ */
+ ResetExprContext(econtext);
+
+ /* Call the function or expression one time */
+ if (direct_function_call)
+ {
+ PgStat_FunctionCallUsage fcusage;
+
+ pgstat_init_function_usage(&perfunc->fcinfo, &fcusage);
+
+ perfunc->fcinfo.isnull = false;
+ perfunc->rsinfo.isDone = ExprSingleResult;
+ result = FunctionCallInvoke(&perfunc->fcinfo);
+
+ pgstat_end_function_usage(&fcusage,
+ perfunc->rsinfo.isDone != ExprMultipleResult);
+ }
+ else
+ {
+ perfunc->rsinfo.isDone = ExprSingleResult;
+ result = ExecEvalExpr(funcexpr, econtext,
+ &perfunc->fcinfo.isnull, NULL);
+
+ /* done after this, will use SFRM_ValuePerCall branch below */
+ }
+
+ /* Which protocol does function want to use? */
+ if (perfunc->rsinfo.returnMode == SFRM_ValuePerCall)
+ {
+ /*
+ * Check for end of result set.
+ */
+ if (perfunc->rsinfo.isDone == ExprEndResult)
+ goto no_function_result;
+
+ /*
+ * Store current resultset item.
+ */
+ if (perfunc->returnsTuple)
+ {
+ if (!perfunc->fcinfo.isnull)
+ {
+ HeapTupleHeader td = DatumGetHeapTupleHeader(result);
+ HeapTupleData tmptup;
+
+ if (perfunc->rsinfo.setDesc == NULL)
+ {
+ /*
+ * This is the first non-NULL result from the
+ * function. Use the type info embedded in the
+ * rowtype Datum to look up the needed tupdesc. Make
+ * a copy for the query.
+ */
+ oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
+ perfunc->rsinfo.setDesc =
+ lookup_rowtype_tupdesc_copy(HeapTupleHeaderGetTypeId(td),
+ HeapTupleHeaderGetTypMod(td));
+ MemoryContextSwitchTo(oldcontext);
+
+ /*
+ * Cross-check tupdesc. We only really need to do this
+ * for functions returning RECORD, but might as well do it
+ * always.
+ */
+ tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+ }
+
+ tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
+ tmptup.t_data = td;
+
+ ExecStoreTuple(&tmptup, perfunc->func_slot, InvalidBuffer, false);
+ /* materializing handles expanded and toasted datums */
+ /* XXX: would be nice if this could be optimized away */
+ ExecMaterializeSlot(perfunc->func_slot);
+ }
+ else
+ {
+ /*
+ * NULL result from a tuple-returning function; expand it
+ * to a row of all nulls.
+ */
+ ExecStoreAllNullTuple(perfunc->func_slot);
+ }
+ }
+ else
+ {
+ /*
+ * Scalar-type case: just store the function result
+ */
+ ExecClearTuple(perfunc->func_slot);
+ perfunc->func_slot->tts_values[0] = result;
+ perfunc->func_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
+ ExecStoreVirtualTuple(perfunc->func_slot);
+
+ /* materializing handles expanded and toasted datums */
+ ExecMaterializeSlot(perfunc->func_slot);
+ }
+ }
+ else if (perfunc->rsinfo.returnMode == SFRM_Materialize)
+ {
+ EState *estate;
+ ScanDirection direction;
+
+ estate = node->ss.ps.state;
+ direction = estate->es_direction;
+
+ /* check we're on the same page as the function author */
+ if (perfunc->rsinfo.isDone != ExprSingleResult)
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("table-function protocol for materialize mode was not followed")));
+
+ if (perfunc->rsinfo.setResult != NULL)
+ {
+ perfunc->tstore = perfunc->rsinfo.setResult;
+
+ /*
+ * paranoia - cope if the function, which may have constructed the
+ * tuplestore itself, didn't leave it pointing at the start. This
+ * call is fast, so the overhead shouldn't be an issue.
+ */
+ tuplestore_rescan(perfunc->rsinfo.setResult);
+
+ /*
+ * If function provided a tupdesc, cross-check it. We only really need to
+ * do this for functions returning RECORD, but might as well do it always.
+ */
+ if (perfunc->rsinfo.setDesc)
+ {
+ tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+
+ /*
+ * If it is a dynamically-allocated TupleDesc, free it: it is
+ * typically allocated in a per-query context, so we must avoid
+ * leaking it across multiple usages.
+ */
+ if (perfunc->rsinfo.setDesc->tdrefcount == -1)
+ {
+ FreeTupleDesc(perfunc->rsinfo.setDesc);
+ perfunc->rsinfo.setDesc = NULL;
+ }
+ }
+
+ /* and return first row */
+ (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->func_slot);
+ }
+ }
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("unrecognized table-function returnMode: %d",
+ (int) perfunc->rsinfo.returnMode)));
+ goto done;
+
+no_function_result:
+ MemoryContextSwitchTo(callerContext);
+
+	/*
+	 * If we got nothing from the function (ie, an empty-set or NULL
+	 * result), we have to manufacture a result: for a non-set-returning
+	 * function, that is a single all-nulls row.
+	 */
+ perfunc->rsinfo.isDone = ExprEndResult;
+ if (returnsSet)
+ ExecClearTuple(perfunc->func_slot);
+ else
+ ExecStoreAllNullTuple(perfunc->func_slot);
+done:
+ MemoryContextSwitchTo(callerContext);
+}
+
+static void
+ExecNextFunctionResult(FunctionScanState *node,
+ FunctionScanPerFuncState *perfunc)
+{
+ EState *estate;
+ ScanDirection direction;
+ MemoryContext callerContext;
+ ExprContext *econtext = node->ss.ps.ps_ExprContext;
+
+ estate = node->ss.ps.state;
+ direction = estate->es_direction;
+
+ callerContext = CurrentMemoryContext;
+
+ if (perfunc->tstore)
+ {
+ (void) tuplestore_gettupleslot(perfunc->tstore,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->func_slot);
+ }
+ else if (perfunc->rsinfo.isDone == ExprSingleResult ||
+ perfunc->rsinfo.isDone == ExprEndResult)
+ {
+ ExecClearTuple(perfunc->func_slot);
+ }
+ else
+ {
+ Datum result;
+ PgStat_FunctionCallUsage fcusage;
+ ExprState *funcexpr PG_USED_FOR_ASSERTS_ONLY = perfunc->funcexpr;
+
+ /* ensure called in a sane context */
+ Assert(funcexpr && IsA(funcexpr, FuncExprState) &&
+ IsA(funcexpr->expr, FuncExpr));
+ Assert(perfunc->rsinfo.returnMode == SFRM_ValuePerCall);
+
+ /*
+ * Switch to short-lived context for calling the function or expression.
+ */
+ MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
+
+ /* next call in percall mode */
+ pgstat_init_function_usage(&perfunc->fcinfo, &fcusage);
+
+ perfunc->fcinfo.isnull = false;
+ perfunc->rsinfo.isDone = ExprSingleResult;
+ result = FunctionCallInvoke(&perfunc->fcinfo);
+
+ pgstat_end_function_usage(&fcusage,
+ perfunc->rsinfo.isDone != ExprMultipleResult);
+
+ Assert(perfunc->rsinfo.returnMode == SFRM_ValuePerCall);
+
+ if (perfunc->rsinfo.isDone == ExprEndResult)
+ {
+ ExecClearTuple(perfunc->func_slot);
+ goto out;
+ }
+
+ if (perfunc->returnsTuple)
+ {
+ if (!perfunc->fcinfo.isnull)
+ {
+ HeapTupleHeader td = DatumGetHeapTupleHeader(result);
+ HeapTupleData tmptup;
+ TupleDesc tupdesc;
+
+ if (perfunc->rsinfo.setDesc == NULL)
+ {
+ MemoryContext oldcontext;
+
+ /*
+ * This is the first non-NULL result from the
+ * function. Use the type info embedded in the
+ * rowtype Datum to look up the needed tupdesc. Make
+ * a copy for the query.
+ */
+ oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
+ perfunc->rsinfo.setDesc =
+ lookup_rowtype_tupdesc_copy(HeapTupleHeaderGetTypeId(td),
+ HeapTupleHeaderGetTypMod(td));
+ MemoryContextSwitchTo(oldcontext);
+
+ /*
+ * Cross-check tupdesc. We only really need to do this
+ * for functions returning RECORD, but might as well do it
+ * always.
+ */
+ tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+ }
+
+ tupdesc = perfunc->rsinfo.setDesc;
+
+ /*
+ * Verify all later returned rows have same subtype;
+ * necessary in case the type is RECORD.
+ */
+ if (HeapTupleHeaderGetTypeId(td) != tupdesc->tdtypeid ||
+ HeapTupleHeaderGetTypMod(td) != tupdesc->tdtypmod)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATATYPE_MISMATCH),
+ errmsg("rows returned by function are not all of the same row type")));
+
+ tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
+ tmptup.t_data = td;
+
+ ExecStoreTuple(&tmptup, perfunc->func_slot, InvalidBuffer, false);
+ /* materializing handles expanded and toasted datums */
+ /* XXX: would be nice if this could be optimized away */
+ ExecMaterializeSlot(perfunc->func_slot);
+ }
+ else
+ {
+ ExecStoreAllNullTuple(perfunc->func_slot);
+ }
+ }
+ else
+ {
+ /* Scalar-type case: just store the function result */
+ ExecClearTuple(perfunc->func_slot);
+ perfunc->func_slot->tts_values[0] = result;
+ perfunc->func_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
+ ExecStoreVirtualTuple(perfunc->func_slot);
+
+ /* materializing handles expanded and toasted datums */
+ ExecMaterializeSlot(perfunc->func_slot);
}
}
- /* Reset ordinality counter */
- node->ordinal = 0;
+out:
+ MemoryContextSwitchTo(callerContext);
+}
- /* Make sure we rewind any remaining tuplestores */
- for (i = 0; i < node->nfuncs; i++)
+
+/*
+ * Check that function result tuple type (src_tupdesc) matches or can
+ * be considered to match what the query expects (dst_tupdesc). If
+ * they don't match, ereport.
+ *
+ * We really only care about number of attributes and data type.
+ * Also, we can ignore type mismatch on columns that are dropped in the
+ * destination type, so long as the physical storage matches. This is
+ * helpful in some cases involving out-of-date cached plans.
+ */
+static void
+tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc)
+{
+ int i;
+
+ if (dst_tupdesc->natts != src_tupdesc->natts)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATATYPE_MISMATCH),
+ errmsg("function return row and query-specified return row do not match"),
+ errdetail_plural("Returned row contains %d attribute, but query expects %d.",
+ "Returned row contains %d attributes, but query expects %d.",
+ src_tupdesc->natts,
+ src_tupdesc->natts, dst_tupdesc->natts)));
+
+ for (i = 0; i < dst_tupdesc->natts; i++)
{
- if (node->funcstates[i].tstore != NULL)
- tuplestore_rescan(node->funcstates[i].tstore);
+ Form_pg_attribute dattr = dst_tupdesc->attrs[i];
+ Form_pg_attribute sattr = src_tupdesc->attrs[i];
+
+ if (IsBinaryCoercible(sattr->atttypid, dattr->atttypid))
+ continue; /* no worries */
+ if (!dattr->attisdropped)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATATYPE_MISMATCH),
+ errmsg("function return row and query-specified return row do not match"),
+ errdetail("Returned type %s at ordinal position %d, but query expects %s.",
+ format_type_be(sattr->atttypid),
+ i + 1,
+ format_type_be(dattr->atttypid))));
+
+ if (dattr->attlen != sattr->attlen ||
+ dattr->attalign != sattr->attalign)
+ ereport(ERROR,
+ (errcode(ERRCODE_DATATYPE_MISMATCH),
+ errmsg("function return row and query-specified return row do not match"),
+ errdetail("Physical storage mismatch on dropped attribute at ordinal position %d.",
+ i + 1)));
}
}
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 39521ed..7f11285 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -234,14 +234,13 @@ extern Datum GetAttributeByNum(HeapTupleHeader tuple, AttrNumber attrno,
bool *isNull);
extern Datum GetAttributeByName(HeapTupleHeader tuple, const char *attname,
bool *isNull);
-extern Tuplestorestate *ExecMakeTableFunctionResult(ExprState *funcexpr,
- ExprContext *econtext,
- MemoryContext argContext,
- TupleDesc expectedDesc,
- bool randomAccess);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
+extern ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
+ List *argList, ExprContext *econtext);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
+extern void ExecInitFcache(Oid foid, Oid input_collation, FuncExprState *fcache,
+ MemoryContext fcacheCxt, bool needDescForSets);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int ExecTargetListLength(List *targetlist);
diff --git a/src/test/regress/expected/pg_lsn.out b/src/test/regress/expected/pg_lsn.out
index 2854cfd..5ed0089 100644
--- a/src/test/regress/expected/pg_lsn.out
+++ b/src/test/regress/expected/pg_lsn.out
@@ -79,14 +79,15 @@ SELECT DISTINCT (i || '/' || j)::pg_lsn f
-> HashAggregate
Group Key: ((((i.i)::text || '/'::text) || (j.j)::text))::pg_lsn
-> Nested Loop
- -> Function Scan on generate_series k
- -> Materialize
- -> Nested Loop
+ -> Nested Loop
+ -> Function Scan on generate_series i
+ Filter: (i <= 10)
+ -> Materialize
-> Function Scan on generate_series j
Filter: ((j > 0) AND (j <= 10))
- -> Function Scan on generate_series i
- Filter: (i <= 10)
-(12 rows)
+ -> Materialize
+ -> Function Scan on generate_series k
+(13 rows)
SELECT DISTINCT (i || '/' || j)::pg_lsn f
FROM generate_series(1, 10) i,
diff --git a/src/test/regress/expected/plpgsql.out b/src/test/regress/expected/plpgsql.out
index a2c36e4..40fd285 100644
--- a/src/test/regress/expected/plpgsql.out
+++ b/src/test/regress/expected/plpgsql.out
@@ -3562,7 +3562,7 @@ select * from sc_test();
create or replace function sc_test() returns setof integer as $$
declare
- c cursor for select * from generate_series(1, 10);
+ c scroll cursor for select * from generate_series(1, 10);
x integer;
begin
open c;
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..a6496fc 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -1109,9 +1109,9 @@ SELECT * FROM (VALUES (1),(2),(3)) v(r), ROWS FROM( foo_sql(11,11), foo_mat(10+r
1 | 11 | 1 | 11 | 1
1 | | | 12 | 2
1 | | | 13 | 3
- 2 | 11 | 1 | 12 | 4
+ 2 | 11 | 2 | 12 | 4
2 | | | 13 | 5
- 3 | 11 | 1 | 13 | 6
+ 3 | 11 | 3 | 13 | 6
(6 rows)
SELECT setval('foo_rescan_seq1',1,false),setval('foo_rescan_seq2',1,false);
@@ -1126,9 +1126,9 @@ SELECT * FROM (VALUES (1),(2),(3)) v(r), ROWS FROM( foo_sql(10+r,13), foo_mat(11
1 | 11 | 1 | 11 | 1
1 | 12 | 2 | |
1 | 13 | 3 | |
- 2 | 12 | 4 | 11 | 1
+ 2 | 12 | 4 | 11 | 2
2 | 13 | 5 | |
- 3 | 13 | 6 | 11 | 1
+ 3 | 13 | 6 | 11 | 3
(6 rows)
SELECT setval('foo_rescan_seq1',1,false),setval('foo_rescan_seq2',1,false);
diff --git a/src/test/regress/sql/plpgsql.sql b/src/test/regress/sql/plpgsql.sql
index 776f229..2d358bb 100644
--- a/src/test/regress/sql/plpgsql.sql
+++ b/src/test/regress/sql/plpgsql.sql
@@ -2952,7 +2952,7 @@ select * from sc_test();
create or replace function sc_test() returns setof integer as $$
declare
- c cursor for select * from generate_series(1, 10);
+ c scroll cursor for select * from generate_series(1, 10);
x integer;
begin
open c;
--
2.9.3
0004-Allow-ROWS-FROM-to-return-functions-as-single-record.patch
From 39b7ee10f7a5e13826bd77e7f9bb07c8a0d92653 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Fri, 26 Aug 2016 16:19:58 -0700
Subject: [PATCH 4/6] Allow ROWS FROM to return functions as single record
column.
Using an empty AS list inside ROWS FROM (e.g.
SELECT * FROM ROWS FROM(aclexplode('{=r/andres}') AS ())), previously
not permitted by the grammar, now returns the results of the function as
a single record/composite column.
This is primarily interesting because it allows converting SELECT
record_returning_srf(); into ROWS FROM. Without a facility like this
there'd be no way to do that for functions returning record, and it'd be
more complicated for composite-returning functions.
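To illustrate the intended behavior (a sketch, not taken from the patch's regression tests; aclexplode is just an arbitrary example of a function with a catalog-declared record result):

```sql
-- Empty AS list: collapse each function output row into a single
-- record/composite column instead of expanding it.
SELECT * FROM ROWS FROM (aclexplode('{=r/andres}'::aclitem[]) AS ());

-- Compare the conventional form, which expands the function's
-- OUT columns into separate scan columns:
SELECT * FROM ROWS FROM (aclexplode('{=r/andres}'::aclitem[]));
```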
Todo:
- cleanup code a bit, move some of the changes to previous commit
---
src/backend/executor/nodeFunctionscan.c | 267 ++++++++++++++++++++++---------
src/backend/nodes/copyfuncs.c | 16 ++
src/backend/nodes/equalfuncs.c | 14 ++
src/backend/nodes/outfuncs.c | 14 ++
src/backend/nodes/readfuncs.c | 1 +
src/backend/optimizer/util/clauses.c | 4 +
src/backend/parser/gram.y | 43 +++--
src/backend/parser/parse_clause.c | 18 +--
src/backend/parser/parse_relation.c | 66 +++++++-
src/include/nodes/nodes.h | 1 +
src/include/nodes/parsenodes.h | 21 ++-
src/include/nodes/pg_list.h | 9 ++
src/include/parser/parse_relation.h | 1 +
src/test/regress/expected/rangefuncs.out | 85 ++++++++++
src/test/regress/sql/rangefuncs.sql | 18 +++
15 files changed, 468 insertions(+), 110 deletions(-)
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index 4885f75..1da4fde 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -44,12 +44,15 @@
typedef struct FunctionScanPerFuncState
{
ExprState *funcexpr; /* state of the expression being evaluated */
- TupleDesc tupdesc; /* desc of the function result type */
int colcount; /* expected number of result columns */
Tuplestorestate *tstore; /* holds the function result set */
TupleTableSlot *func_slot; /* function result slot (or NULL) */
+ TupleDesc func_desc; /* desc of the function result type */
+ TupleTableSlot *scan_slot; /* scan result slot (or NULL) */
+ TupleDesc scan_desc; /* desc of the scan result type */
bool started;
bool returnsTuple;
+ bool toRecord;
FunctionCallInfoData fcinfo;
ReturnSetInfo rsinfo;
} FunctionScanPerFuncState;
@@ -109,7 +112,7 @@ FunctionNext(FunctionScanState *node)
else
ExecNextFunctionResult(node, fs);
- scanslot = fs->func_slot;
+ scanslot = fs->scan_slot;
return scanslot;
}
@@ -125,7 +128,7 @@ FunctionNext(FunctionScanState *node)
/*
* Main loop over functions.
*
- * We fetch the function results into func_slots (which match the function
+ * We fetch the function results into scan_slots (which match the function
* return types), and then copy the values to scanslot (which matches the
* scan result type), setting the ordinal column (if any) as well.
*/
@@ -147,7 +150,7 @@ FunctionNext(FunctionScanState *node)
else
ExecNextFunctionResult(node, fs);
- if (TupIsNull(fs->func_slot))
+ if (TupIsNull(fs->scan_slot))
{
/*
* populate the result cols with nulls
@@ -164,12 +167,12 @@ FunctionNext(FunctionScanState *node)
/*
* we have a result, so just copy it to the result cols.
*/
- slot_getallattrs(fs->func_slot);
+ slot_getallattrs(fs->scan_slot);
for (i = 0; i < fs->colcount; i++)
{
- scanslot->tts_values[att] = fs->func_slot->tts_values[i];
- scanslot->tts_isnull[att] = fs->func_slot->tts_isnull[i];
+ scanslot->tts_values[att] = fs->scan_slot->tts_values[i];
+ scanslot->tts_isnull[att] = fs->scan_slot->tts_isnull[i];
att++;
}
@@ -309,7 +312,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
FunctionScanPerFuncState *fs = &scanstate->funcstates[i];
TypeFuncClass functypclass;
Oid funcrettype;
- TupleDesc tupdesc;
fs->funcexpr = ExecInitExpr((Expr *) funcexpr, (PlanState *) scanstate);
@@ -321,6 +323,11 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
fs->tstore = NULL;
fs->started = false;
fs->rsinfo.setDesc = NULL;
+ fs->toRecord = false;
+ fs->scan_desc = NULL;
+ fs->func_desc = NULL;
+ fs->scan_slot = NULL;
+ fs->func_slot = NULL;
/*
* Now determine if the function returns a simple or composite type,
@@ -330,45 +337,77 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
functypclass = get_expr_result_type(funcexpr,
&funcrettype,
- &tupdesc);
+ &fs->func_desc);
- if (functypclass == TYPEFUNC_COMPOSITE)
+ if (rtfunc->funcasrecord)
{
- /* Composite data type, e.g. a table's row type */
- Assert(tupdesc);
- Assert(tupdesc->natts >= colcount);
- /* Must copy it out of typcache for safety */
- tupdesc = CreateTupleDescCopy(tupdesc);
- fs->returnsTuple = true;
- }
- else if (functypclass == TYPEFUNC_SCALAR)
- {
- /* Base data type, i.e. scalar */
- tupdesc = CreateTemplateTupleDesc(1, false);
- TupleDescInitEntry(tupdesc,
+ /*
+ * Returning a composite / record type as one column, instead of
+ * split up. Need to compute two tuple descs: One to pass to the
+ * function (important when return type is determined in catalog),
+ * and one specifying what is eventually returned to the user.
+ */
+ Assert(functypclass == TYPEFUNC_COMPOSITE ||
+ functypclass == TYPEFUNC_RECORD);
+
+ /*
+ * fs->func_desc contains what the function returns, if known,
+ * scan_desc what we make it return
+ */
+ if (fs->func_desc)
+ fs->func_desc = CreateTupleDescCopy(fs->func_desc);
+ fs->scan_desc = CreateTemplateTupleDesc(1, false);
+ TupleDescInitEntry(fs->scan_desc,
(AttrNumber) 1,
NULL, /* don't care about the name here */
funcrettype,
-1,
0);
- TupleDescInitEntryCollation(tupdesc,
+ TupleDescInitEntryCollation(fs->scan_desc,
(AttrNumber) 1,
exprCollation(funcexpr));
+ fs->toRecord = true;
+ fs->returnsTuple = true;
+ }
+ else if (functypclass == TYPEFUNC_COMPOSITE)
+ {
+ /* Composite data type, e.g. a table's row type */
+ Assert(fs->func_desc);
+ Assert(fs->func_desc->natts >= colcount);
+ /* Must copy it out of typcache for safety */
+ fs->func_desc = CreateTupleDescCopy(fs->func_desc);
+ fs->scan_desc = fs->func_desc;
+ fs->returnsTuple = true;
+ }
+ else if (functypclass == TYPEFUNC_SCALAR)
+ {
+ /* Base data type, i.e. scalar */
+ fs->func_desc = CreateTemplateTupleDesc(1, false);
+ TupleDescInitEntry(fs->func_desc,
+ (AttrNumber) 1,
+ NULL, /* don't care about the name here */
+ funcrettype,
+ -1,
+ 0);
+ TupleDescInitEntryCollation(fs->func_desc,
+ (AttrNumber) 1,
+ exprCollation(funcexpr));
+ fs->scan_desc = fs->func_desc;
fs->returnsTuple = false;
}
else if (functypclass == TYPEFUNC_RECORD)
{
- tupdesc = BuildDescFromLists(rtfunc->funccolnames,
- rtfunc->funccoltypes,
- rtfunc->funccoltypmods,
- rtfunc->funccolcollations);
-
+ fs->func_desc = BuildDescFromLists(rtfunc->funccolnames,
+ rtfunc->funccoltypes,
+ rtfunc->funccoltypmods,
+ rtfunc->funccolcollations);
/*
* For RECORD results, make sure a typmod has been assigned. (The
* function should do this for itself, but let's cover things in
* case it doesn't.)
*/
- BlessTupleDesc(tupdesc);
+ BlessTupleDesc(fs->func_desc);
+ fs->scan_desc = fs->func_desc;
fs->returnsTuple = true;
}
else
@@ -377,7 +416,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
elog(ERROR, "function in FROM has unsupported return type");
}
- fs->tupdesc = tupdesc;
fs->colcount = colcount;
/*
@@ -387,11 +425,11 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
if (!scanstate->simple)
{
- fs->func_slot = ExecInitExtraTupleSlot(estate);
- ExecSetSlotDescriptor(fs->func_slot, fs->tupdesc);
+ fs->scan_slot = ExecInitExtraTupleSlot(estate);
+ ExecSetSlotDescriptor(fs->scan_slot, fs->scan_desc);
}
else
- fs->func_slot = scanstate->ss.ss_ScanTupleSlot;
+ fs->scan_slot = scanstate->ss.ss_ScanTupleSlot;
natts += colcount;
i++;
@@ -406,7 +444,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
if (scanstate->simple)
{
- scan_tupdesc = CreateTupleDescCopy(scanstate->funcstates[0].tupdesc);
+ scan_tupdesc = CreateTupleDescCopy(scanstate->funcstates[0].scan_desc);
scan_tupdesc->tdtypeid = RECORDOID;
scan_tupdesc->tdtypmod = -1;
}
@@ -421,7 +459,7 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
for (i = 0; i < nfuncs; i++)
{
- TupleDesc tupdesc = scanstate->funcstates[i].tupdesc;
+ TupleDesc tupdesc = scanstate->funcstates[i].scan_desc;
int colcount = scanstate->funcstates[i].colcount;
int j;
@@ -498,6 +536,8 @@ ExecEndFunctionScan(FunctionScanState *node)
{
FunctionScanPerFuncState *fs = &node->funcstates[i];
+ if (fs->scan_slot)
+ ExecClearTuple(fs->scan_slot);
if (fs->func_slot)
ExecClearTuple(fs->func_slot);
@@ -526,6 +566,8 @@ ExecReScanFunctionScan(FunctionScanState *node)
{
FunctionScanPerFuncState *fs = &node->funcstates[i];
+ if (fs->scan_slot)
+ ExecClearTuple(fs->scan_slot);
if (fs->func_slot)
ExecClearTuple(fs->func_slot);
@@ -570,7 +612,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
callerContext = CurrentMemoryContext;
- Assert(perfunc->tupdesc != NULL);
+ Assert(perfunc->scan_desc != NULL);
/*
* Prepare a resultinfo node for communication. We always do this even if
@@ -581,7 +623,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
*/
perfunc->rsinfo.type = T_ReturnSetInfo;
perfunc->rsinfo.econtext = econtext;
- perfunc->rsinfo.expectedDesc = perfunc->tupdesc;
+ perfunc->rsinfo.expectedDesc = perfunc->func_desc;
perfunc->rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
perfunc->rsinfo.returnMode = SFRM_ValuePerCall;
/* isDone is filled below */
@@ -718,7 +760,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
/*
* Store current resultset item.
*/
- if (perfunc->returnsTuple)
+ if (perfunc->returnsTuple && !perfunc->toRecord)
{
if (!perfunc->fcinfo.isnull)
{
@@ -744,16 +786,16 @@ ExecBeginFunctionResult(FunctionScanState *node,
* for functions returning RECORD, but might as well do it
* always.
*/
- tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+ tupledesc_match(perfunc->scan_desc, perfunc->rsinfo.setDesc);
}
tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
tmptup.t_data = td;
- ExecStoreTuple(&tmptup, perfunc->func_slot, InvalidBuffer, false);
+ ExecStoreTuple(&tmptup, perfunc->scan_slot, InvalidBuffer, false);
/* materializing handles expanded and toasted datums */
/* XXX: would be nice if this could be optimized away */
- ExecMaterializeSlot(perfunc->func_slot);
+ ExecMaterializeSlot(perfunc->scan_slot);
}
else
{
@@ -761,7 +803,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
* NULL result from a tuple-returning function; expand it
* to a row of all nulls.
*/
- ExecStoreAllNullTuple(perfunc->func_slot);
+ ExecStoreAllNullTuple(perfunc->scan_slot);
}
}
else
@@ -769,13 +811,13 @@ ExecBeginFunctionResult(FunctionScanState *node,
/*
* Scalar-type case: just store the function result
*/
- ExecClearTuple(perfunc->func_slot);
- perfunc->func_slot->tts_values[0] = result;
- perfunc->func_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
- ExecStoreVirtualTuple(perfunc->func_slot);
+ ExecClearTuple(perfunc->scan_slot);
+ perfunc->scan_slot->tts_values[0] = result;
+ perfunc->scan_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
+ ExecStoreVirtualTuple(perfunc->scan_slot);
/* materializing handles expanded and toasted datums */
- ExecMaterializeSlot(perfunc->func_slot);
+ ExecMaterializeSlot(perfunc->scan_slot);
}
}
else if (perfunc->rsinfo.returnMode == SFRM_Materialize)
@@ -809,25 +851,70 @@ ExecBeginFunctionResult(FunctionScanState *node,
*/
if (perfunc->rsinfo.setDesc)
{
- tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+ if (!perfunc->toRecord)
+ tupledesc_match(perfunc->scan_desc, perfunc->rsinfo.setDesc);
+ else if (perfunc->func_desc)
+ tupledesc_match(perfunc->func_desc, perfunc->rsinfo.setDesc);
+ }
- /*
- * If it is a dynamically-allocated TupleDesc, free it: it is
- * typically allocated in a per-query context, so we must avoid
- * leaking it across multiple usages.
- */
- if (perfunc->rsinfo.setDesc->tdrefcount == -1)
+ if (!perfunc->toRecord)
+ {
+ /* and return first row */
+ (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->scan_slot);
+ }
+ else
+ {
+
+ if (perfunc->func_slot == NULL)
{
- FreeTupleDesc(perfunc->rsinfo.setDesc);
- perfunc->rsinfo.setDesc = NULL;
+ MemoryContext oldcontext;
+ TupleDesc slotDesc;
+
+ oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_query_memory);
+
+ /* don't assume slotDesc is long-lived */
+ if (perfunc->rsinfo.setDesc)
+ slotDesc = CreateTupleDescCopy(perfunc->rsinfo.setDesc);
+ else if (perfunc->func_desc)
+ slotDesc = perfunc->func_desc;
+ else
+ elog(ERROR, "no result tupledesc available");
+
+ perfunc->func_slot = MakeSingleTupleTableSlot(slotDesc);
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->func_slot);
+
+ ExecClearTuple(perfunc->scan_slot);
+ if (!TupIsNull(perfunc->func_slot))
+ {
+ perfunc->scan_slot->tts_values[0] =
+ ExecFetchSlotTupleDatum(perfunc->func_slot);
+ perfunc->scan_slot->tts_isnull[0] = false;
+ ExecStoreVirtualTuple(perfunc->scan_slot);
+ /* materializing handles expanded and toasted datums */
+ ExecMaterializeSlot(perfunc->scan_slot);
}
}
- /* and return first row */
- (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
- ScanDirectionIsForward(direction),
- false,
- perfunc->func_slot);
+ /*
+ * If it is a dynamically-allocated TupleDesc, free it: it is
+ * typically allocated in a per-query context, so we want to avoid
+ * leaking it across multiple usages.
+ */
+ if (perfunc->rsinfo.setDesc &&
+ perfunc->rsinfo.setDesc->tdrefcount == -1)
+ {
+ FreeTupleDesc(perfunc->rsinfo.setDesc);
+ perfunc->rsinfo.setDesc = NULL;
+ }
}
}
else
@@ -847,9 +934,9 @@ no_function_result:
*/
perfunc->rsinfo.isDone = ExprEndResult;
if (returnsSet)
- ExecClearTuple(perfunc->func_slot);
+ ExecClearTuple(perfunc->scan_slot);
else
- ExecStoreAllNullTuple(perfunc->func_slot);
+ ExecStoreAllNullTuple(perfunc->scan_slot);
done:
MemoryContextSwitchTo(callerContext);
}
@@ -870,15 +957,37 @@ ExecNextFunctionResult(FunctionScanState *node,
if (perfunc->tstore)
{
- (void) tuplestore_gettupleslot(perfunc->tstore,
- ScanDirectionIsForward(direction),
- false,
- perfunc->func_slot);
+ if (!perfunc->toRecord)
+ {
+ /* return next row */
+ (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->scan_slot);
+ }
+ else
+ {
+ (void) tuplestore_gettupleslot(perfunc->rsinfo.setResult,
+ ScanDirectionIsForward(direction),
+ false,
+ perfunc->func_slot);
+
+ ExecClearTuple(perfunc->scan_slot);
+ if (!TupIsNull(perfunc->func_slot))
+ {
+ perfunc->scan_slot->tts_values[0] =
+ ExecFetchSlotTupleDatum(perfunc->func_slot);
+ perfunc->scan_slot->tts_isnull[0] = false;
+ ExecStoreVirtualTuple(perfunc->scan_slot);
+ /* materializing handles expanded and toasted datums */
+ ExecMaterializeSlot(perfunc->scan_slot);
+ }
+ }
}
else if (perfunc->rsinfo.isDone == ExprSingleResult ||
perfunc->rsinfo.isDone == ExprEndResult)
{
- ExecClearTuple(perfunc->func_slot);
+ ExecClearTuple(perfunc->scan_slot);
}
else
{
@@ -910,11 +1019,11 @@ ExecNextFunctionResult(FunctionScanState *node,
if (perfunc->rsinfo.isDone == ExprEndResult)
{
- ExecClearTuple(perfunc->func_slot);
+ ExecClearTuple(perfunc->scan_slot);
goto out;
}
- if (perfunc->returnsTuple)
+ if (perfunc->returnsTuple && !perfunc->toRecord)
{
if (!perfunc->fcinfo.isnull)
{
@@ -943,7 +1052,7 @@ ExecNextFunctionResult(FunctionScanState *node,
* for functions returning RECORD, but might as well do it
* always.
*/
- tupledesc_match(perfunc->tupdesc, perfunc->rsinfo.setDesc);
+ tupledesc_match(perfunc->scan_desc, perfunc->rsinfo.setDesc);
}
tupdesc = perfunc->rsinfo.setDesc;
@@ -961,26 +1070,26 @@ ExecNextFunctionResult(FunctionScanState *node,
tmptup.t_len = HeapTupleHeaderGetDatumLength(td);
tmptup.t_data = td;
- ExecStoreTuple(&tmptup, perfunc->func_slot, InvalidBuffer, false);
+ ExecStoreTuple(&tmptup, perfunc->scan_slot, InvalidBuffer, false);
/* materializing handles expanded and toasted datums */
/* XXX: would be nice if this could be optimized away */
- ExecMaterializeSlot(perfunc->func_slot);
+ ExecMaterializeSlot(perfunc->scan_slot);
}
else
{
- ExecStoreAllNullTuple(perfunc->func_slot);
+ ExecStoreAllNullTuple(perfunc->scan_slot);
}
}
else
{
/* Scalar-type case: just store the function result */
- ExecClearTuple(perfunc->func_slot);
- perfunc->func_slot->tts_values[0] = result;
- perfunc->func_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
- ExecStoreVirtualTuple(perfunc->func_slot);
+ ExecClearTuple(perfunc->scan_slot);
+ perfunc->scan_slot->tts_values[0] = result;
+ perfunc->scan_slot->tts_isnull[0] = perfunc->fcinfo.isnull;
+ ExecStoreVirtualTuple(perfunc->scan_slot);
/* materializing handles expanded and toasted datums */
- ExecMaterializeSlot(perfunc->func_slot);
+ ExecMaterializeSlot(perfunc->scan_slot);
}
}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 1877fb4..292ab6c 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2183,6 +2183,7 @@ _copyRangeTblFunction(const RangeTblFunction *from)
COPY_NODE_FIELD(funccoltypmods);
COPY_NODE_FIELD(funccolcollations);
COPY_BITMAPSET_FIELD(funcparams);
+ COPY_SCALAR_FIELD(funcasrecord);
return newnode;
}
@@ -2556,6 +2557,18 @@ _copyRangeFunction(const RangeFunction *from)
return newnode;
}
+static RangeFunctionElem *
+_copyRangeFunctionElem(const RangeFunctionElem *from)
+{
+ RangeFunctionElem *newnode = makeNode(RangeFunctionElem);
+
+ COPY_SCALAR_FIELD(asrecord);
+ COPY_NODE_FIELD(func);
+ COPY_NODE_FIELD(coldeflist);
+
+ return newnode;
+}
+
static RangeTableSample *
_copyRangeTableSample(const RangeTableSample *from)
{
@@ -5016,6 +5029,9 @@ copyObject(const void *from)
case T_RangeFunction:
retval = _copyRangeFunction(from);
break;
+ case T_RangeFunctionElem:
+ retval = _copyRangeFunctionElem(from);
+ break;
case T_RangeTableSample:
retval = _copyRangeTableSample(from);
break;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 448e1a9..f69dc8e 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2339,6 +2339,16 @@ _equalRangeFunction(const RangeFunction *a, const RangeFunction *b)
}
static bool
+_equalRangeFunctionElem(const RangeFunctionElem *a, const RangeFunctionElem *b)
+{
+ COMPARE_NODE_FIELD(func);
+ COMPARE_NODE_FIELD(coldeflist);
+ COMPARE_SCALAR_FIELD(asrecord);
+
+ return true;
+}
+
+static bool
_equalRangeTableSample(const RangeTableSample *a, const RangeTableSample *b)
{
COMPARE_NODE_FIELD(relation);
@@ -2484,6 +2494,7 @@ _equalRangeTblFunction(const RangeTblFunction *a, const RangeTblFunction *b)
COMPARE_NODE_FIELD(funccoltypmods);
COMPARE_NODE_FIELD(funccolcollations);
COMPARE_BITMAPSET_FIELD(funcparams);
+ COMPARE_SCALAR_FIELD(funcasrecord);
return true;
}
@@ -3314,6 +3325,9 @@ equal(const void *a, const void *b)
case T_RangeFunction:
retval = _equalRangeFunction(a, b);
break;
+ case T_RangeFunctionElem:
+ retval = _equalRangeFunctionElem(a, b);
+ break;
case T_RangeTableSample:
retval = _equalRangeTableSample(a, b);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 29b7712..bea295b 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2879,6 +2879,7 @@ _outRangeTblFunction(StringInfo str, const RangeTblFunction *node)
WRITE_NODE_FIELD(funccoltypmods);
WRITE_NODE_FIELD(funccolcollations);
WRITE_BITMAPSET_FIELD(funcparams);
+ WRITE_BOOL_FIELD(funcasrecord);
}
static void
@@ -3148,6 +3149,16 @@ _outRangeFunction(StringInfo str, const RangeFunction *node)
}
static void
+_outRangeFunctionElem(StringInfo str, const RangeFunctionElem *node)
+{
+ WRITE_NODE_TYPE("RANGEFUNCTIONELEM");
+
+ WRITE_NODE_FIELD(func);
+ WRITE_NODE_FIELD(coldeflist);
+ WRITE_BOOL_FIELD(asrecord);
+}
+
+static void
_outRangeTableSample(StringInfo str, const RangeTableSample *node)
{
WRITE_NODE_TYPE("RANGETABLESAMPLE");
@@ -3840,6 +3851,9 @@ outNode(StringInfo str, const void *obj)
case T_RangeFunction:
_outRangeFunction(str, obj);
break;
+ case T_RangeFunctionElem:
+ _outRangeFunctionElem(str, obj);
+ break;
case T_RangeTableSample:
_outRangeTableSample(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 6f9a81e..70bb73d 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -1357,6 +1357,7 @@ _readRangeTblFunction(void)
READ_NODE_FIELD(funccoltypmods);
READ_NODE_FIELD(funccolcollations);
READ_BITMAPSET_FIELD(funcparams);
+ READ_BOOL_FIELD(funcasrecord);
READ_DONE();
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 4496fde..3830bc9 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4828,6 +4828,10 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)
return NULL;
rtfunc = (RangeTblFunction *) linitial(rte->functions);
+ /* Fail if the function returns as record - we don't implement that here. */
+ if (rtfunc->funcasrecord)
+ return NULL;
+
if (!IsA(rtfunc->funcexpr, FuncExpr))
return NULL;
fexpr = (FuncExpr *) rtfunc->funcexpr;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index cb5cfc4..183403f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -441,11 +441,11 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
a_expr b_expr c_expr AexprConst indirection_el opt_slice_bound
columnref in_expr having_clause func_table array_expr
ExclusionWhereClause
-%type <list> rowsfrom_item rowsfrom_list opt_col_def_list
+%type <list> rowsfrom_list
%type <boolean> opt_ordinality
%type <list> ExclusionConstraintList ExclusionConstraintElem
%type <list> func_arg_list
-%type <node> func_arg_expr
+%type <node> func_arg_expr rowsfrom_item
%type <list> row explicit_row implicit_row type_list array_expr_list
%type <node> case_expr case_arg when_clause case_default
%type <list> when_clause_list
@@ -10980,10 +10980,17 @@ opt_repeatable_clause:
func_table: func_expr_windowless opt_ordinality
{
RangeFunction *n = makeNode(RangeFunction);
+ RangeFunctionElem *e = makeNode(RangeFunctionElem);
+
n->lateral = false;
n->ordinality = $2;
n->is_rowsfrom = false;
- n->functions = list_make1(list_make2($1, NIL));
+ n->functions = list_make1(e);
+
+ e->func = $1;
+ e->coldeflist = NIL;
+ e->asrecord = false;
+
/* alias and coldeflist are set by table_ref production */
$$ = (Node *) n;
}
@@ -10999,8 +11006,30 @@ func_table: func_expr_windowless opt_ordinality
}
;
-rowsfrom_item: func_expr_windowless opt_col_def_list
- { $$ = list_make2($1, $2); }
+rowsfrom_item: func_expr_windowless AS '(' TableFuncElementList ')'
+ {
+ RangeFunctionElem *n = makeNode(RangeFunctionElem);
+ n->func = $1;
+ n->coldeflist = $4;
+ n->asrecord = false;
+ $$ = (Node *) n;
+ }
+ | func_expr_windowless AS '(' ')'
+ {
+ RangeFunctionElem *n = makeNode(RangeFunctionElem);
+ n->func = $1;
+ n->coldeflist = NIL;
+ n->asrecord = true;
+ $$ = (Node *) n;
+ }
+ | func_expr_windowless
+ {
+ RangeFunctionElem *n = makeNode(RangeFunctionElem);
+ n->func = $1;
+ n->coldeflist = NIL;
+ n->asrecord = false;
+ $$ = (Node *) n;
+ }
;
rowsfrom_list:
@@ -11008,10 +11037,6 @@ rowsfrom_list:
| rowsfrom_list ',' rowsfrom_item { $$ = lappend($1, $3); }
;
-opt_col_def_list: AS '(' TableFuncElementList ')' { $$ = $3; }
- | /*EMPTY*/ { $$ = NIL; }
- ;
-
opt_ordinality: WITH_LA ORDINALITY { $$ = true; }
| /*EMPTY*/ { $$ = false; }
;
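In SQL terms, the three rowsfrom_item alternatives above accept, respectively (f is a placeholder function name, not part of the patch):

```sql
-- explicit column definition list (function returning RECORD):
SELECT * FROM ROWS FROM (f() AS (a int, b text)) AS t;
-- empty AS list (new form): whole result row as one record column:
SELECT * FROM ROWS FROM (f() AS ()) AS t;
-- bare function call:
SELECT * FROM ROWS FROM (f()) AS t;
```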
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 9b7fcc3..e40954c 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -533,6 +533,7 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
List *funcexprs = NIL;
List *funcnames = NIL;
List *coldeflists = NIL;
+ List *asrecordlist = NIL;
bool is_lateral;
RangeTblEntry *rte;
ListCell *lc;
@@ -567,14 +568,9 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
*/
foreach(lc, r->functions)
{
- List *pair = (List *) lfirst(lc);
- Node *fexpr;
- List *coldeflist;
-
- /* Disassemble the function-call/column-def-list pairs */
- Assert(list_length(pair) == 2);
- fexpr = (Node *) linitial(pair);
- coldeflist = (List *) lsecond(pair);
+ RangeFunctionElem *elem = (RangeFunctionElem *) lfirst(lc);
+ Node *fexpr = elem->func;
+ List *coldeflist = elem->coldeflist;
/*
* If we find a function call unnest() with more than one argument and
@@ -630,6 +626,8 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
/* coldeflist is empty, so no error is possible */
coldeflists = lappend(coldeflists, coldeflist);
+
+ asrecordlist = lappend_int(asrecordlist, false);
}
continue; /* done with this function item */
}
@@ -651,6 +649,8 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
exprLocation((Node *) r->coldeflist))));
coldeflists = lappend(coldeflists, coldeflist);
+
+ asrecordlist = lappend_int(asrecordlist, elem->asrecord);
}
pstate->p_lateral_active = false;
@@ -713,7 +713,7 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
*/
rte = addRangeTableEntryForFunction(pstate,
funcnames, funcexprs, coldeflists,
- r, is_lateral, true);
+ asrecordlist, r, is_lateral, true);
return rte;
}
diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c
index 1e3ecbc..0d92a00 100644
--- a/src/backend/parser/parse_relation.c
+++ b/src/backend/parser/parse_relation.c
@@ -1382,6 +1382,7 @@ addRangeTableEntryForFunction(ParseState *pstate,
List *funcnames,
List *funcexprs,
List *coldeflists,
+ List *asrecordlist,
RangeFunction *rangefunc,
bool lateral,
bool inFromCl)
@@ -1395,7 +1396,8 @@ addRangeTableEntryForFunction(ParseState *pstate,
TupleDesc tupdesc;
ListCell *lc1,
*lc2,
- *lc3;
+ *lc3,
+ *lc4;
int i;
int j;
int funcno;
@@ -1429,11 +1431,12 @@ addRangeTableEntryForFunction(ParseState *pstate,
totalatts = 0;
funcno = 0;
- forthree(lc1, funcexprs, lc2, funcnames, lc3, coldeflists)
+ forfour(lc1, funcexprs, lc2, funcnames, lc3, coldeflists, lc4, asrecordlist)
{
Node *funcexpr = (Node *) lfirst(lc1);
char *funcname = (char *) lfirst(lc2);
List *coldeflist = (List *) lfirst(lc3);
+ int asrecord = lfirst_int(lc4);
RangeTblFunction *rtfunc = makeNode(RangeTblFunction);
TypeFuncClass functypclass;
Oid funcrettype;
@@ -1445,6 +1448,7 @@ addRangeTableEntryForFunction(ParseState *pstate,
rtfunc->funccoltypmods = NIL;
rtfunc->funccolcollations = NIL;
rtfunc->funcparams = NULL; /* not set until planning */
+ rtfunc->funcasrecord = asrecord;
/*
* Now determine if the function returns a simple or composite type.
@@ -1453,6 +1457,15 @@ addRangeTableEntryForFunction(ParseState *pstate,
&funcrettype,
&tupdesc);
+ if (asrecord && functypclass != TYPEFUNC_RECORD && functypclass != TYPEFUNC_COMPOSITE)
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("only composite and \"record\" returning functions can be returned as record"),
+ parser_errposition(pstate,
+ exprLocation(funcexpr))));
+ }
+
/*
* A coldeflist is required if the function returns RECORD and hasn't
* got a predetermined record type, and is prohibited otherwise.
@@ -1466,16 +1479,26 @@ addRangeTableEntryForFunction(ParseState *pstate,
parser_errposition(pstate,
exprLocation((Node *) coldeflist))));
}
- else
+ else if (functypclass == TYPEFUNC_RECORD && !asrecord)
{
- if (functypclass == TYPEFUNC_RECORD)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("a column definition list is required for functions returning \"record\""),
parser_errposition(pstate, exprLocation(funcexpr))));
}
- if (functypclass == TYPEFUNC_COMPOSITE)
+ if (asrecord)
+ {
+ tupdesc = CreateTemplateTupleDesc(1, false);
+ TupleDescInitEntry(tupdesc,
+ (AttrNumber) 1,
+ chooseScalarFunctionAlias(funcexpr, funcname,
+ alias, nfuncs),
+ funcrettype,
+ -1,
+ 0);
+ }
+ else if (functypclass == TYPEFUNC_COMPOSITE)
{
/* Composite data type, e.g. a table's row type */
Assert(tupdesc);
@@ -2052,7 +2075,29 @@ expandRTE(RangeTblEntry *rte, int rtindex, int sublevels_up,
functypclass = get_expr_result_type(rtfunc->funcexpr,
&funcrettype,
&tupdesc);
- if (functypclass == TYPEFUNC_COMPOSITE)
+
+ if (rtfunc->funcasrecord)
+ {
+ /* Base data type, i.e. scalar */
+ if (colnames)
+ *colnames = lappend(*colnames,
+ list_nth(rte->eref->colnames,
+ atts_done));
+
+ if (colvars)
+ {
+ Var *varnode;
+
+ varnode = makeVar(rtindex, atts_done + 1,
+ funcrettype, -1,
+ exprCollation(rtfunc->funcexpr),
+ sublevels_up);
+ varnode->location = location;
+
+ *colvars = lappend(*colvars, varnode);
+ }
+ }
+ else if (functypclass == TYPEFUNC_COMPOSITE)
{
/* Composite data type, e.g. a table's row type */
Assert(tupdesc);
@@ -2585,7 +2630,14 @@ get_rte_attribute_type(RangeTblEntry *rte, AttrNumber attnum,
&funcrettype,
&tupdesc);
- if (functypclass == TYPEFUNC_COMPOSITE)
+ if (rtfunc->funcasrecord)
+ {
+ /* XXX */
+ *vartype = funcrettype;
+ *vartypmod = -1;
+ *varcollid = exprCollation(rtfunc->funcexpr);
+ }
+ else if (functypclass == TYPEFUNC_COMPOSITE)
{
/* Composite data type, e.g. a table's row type */
Form_pg_attribute att_tup;
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 2f7efa8..7d7bd91 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -427,6 +427,7 @@ typedef enum NodeTag
T_WindowDef,
T_RangeSubselect,
T_RangeFunction,
+ T_RangeFunctionElem,
T_RangeTableSample,
T_TypeName,
T_ColumnDef,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1481fff..f26e651 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -522,12 +522,9 @@ typedef struct RangeSubselect
/*
* RangeFunction - function call appearing in a FROM clause
*
- * functions is a List because we use this to represent the construct
- * ROWS FROM(func1(...), func2(...), ...). Each element of this list is a
- * two-element sublist, the first element being the untransformed function
- * call tree, and the second element being a possibly-empty list of ColumnDef
- * nodes representing any columndef list attached to that function within the
- * ROWS FROM() syntax.
+ * functions is a List because we use this to represent the construct ROWS
+ * FROM(func1(...), func2(...), ...). Each element of this list is a
+ * RangeFunctionElem pointer.
*
* alias and coldeflist represent any alias and/or columndef list attached
* at the top level. (We disallow coldeflist appearing both here and
@@ -546,6 +543,17 @@ typedef struct RangeFunction
} RangeFunction;
/*
+ * RangeFunctionElem - individual function call in RangeFunction
+ */
+typedef struct RangeFunctionElem
+{
+ NodeTag type;
+ Node *func; /* untransformed function call */
+ List *coldeflist; /* optional coldef list inside ROWS FROM */
+ bool asrecord; /* record returned as one column */
+} RangeFunctionElem;
+
+/*
* RangeTableSample - TABLESAMPLE appearing in a raw FROM clause
*
* This node, appearing only in raw parse trees, represents
@@ -902,6 +910,7 @@ typedef struct RangeTblFunction
List *funccolcollations; /* OID list of column collation OIDs */
/* This is set during planning for use by the executor: */
Bitmapset *funcparams; /* PARAM_EXEC Param IDs affecting this func */
+ bool funcasrecord; /* return results as a single record column */
} RangeTblFunction;
/*
diff --git a/src/include/nodes/pg_list.h b/src/include/nodes/pg_list.h
index 77b50ff..b6b57c1 100644
--- a/src/include/nodes/pg_list.h
+++ b/src/include/nodes/pg_list.h
@@ -185,6 +185,15 @@ list_length(const List *l)
(cell1) != NULL && (cell2) != NULL && (cell3) != NULL; \
(cell1) = lnext(cell1), (cell2) = lnext(cell2), (cell3) = lnext(cell3))
+/*
+ * forfour -
+ * the same for four lists
+ */
+#define forfour(cell1, list1, cell2, list2, cell3, list3, cell4, list4) \
+ for ((cell1) = list_head(list1), (cell2) = list_head(list2), (cell3) = list_head(list3), (cell4) = list_head(list4); \
+ (cell1) != NULL && (cell2) != NULL && (cell3) != NULL && (cell4) != NULL; \
+ (cell1) = lnext(cell1), (cell2) = lnext(cell2), (cell3) = lnext(cell3), (cell4) = lnext(cell4))
+
extern List *lappend(List *list, void *datum);
extern List *lappend_int(List *list, int datum);
extern List *lappend_oid(List *list, Oid datum);
diff --git a/src/include/parser/parse_relation.h b/src/include/parser/parse_relation.h
index 3ef3d7b..90af146 100644
--- a/src/include/parser/parse_relation.h
+++ b/src/include/parser/parse_relation.h
@@ -80,6 +80,7 @@ extern RangeTblEntry *addRangeTableEntryForFunction(ParseState *pstate,
List *funcnames,
List *funcexprs,
List *coldeflists,
+ List *asrecordlist,
RangeFunction *rangefunc,
bool lateral,
bool inFromCl);
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index a6496fc..249dc67 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -369,6 +369,10 @@ SELECT * FROM getfoo1(1) WITH ORDINALITY AS t1(v,o);
1 | 1
(1 row)
+SELECT * FROM ROWS FROM( getfoo1(1) AS ()) AS t1; -- error, not a composite / record
+ERROR: only composite and "record" returning functions can be returned as record
+LINE 1: SELECT * FROM ROWS FROM( getfoo1(1) AS ()) AS t1;
+ ^
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo1(1);
SELECT * FROM vw_getfoo;
getfoo1
@@ -401,6 +405,10 @@ SELECT * FROM getfoo2(1) WITH ORDINALITY AS t1(v,o);
1 | 2
(2 rows)
+SELECT * FROM ROWS FROM( getfoo2(1) AS ()) AS t1; -- error, not a composite / record
+ERROR: only composite and "record" returning functions can be returned as record
+LINE 1: SELECT * FROM ROWS FROM( getfoo2(1) AS ()) AS t1;
+ ^
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo2(1);
SELECT * FROM vw_getfoo;
getfoo2
@@ -435,6 +443,10 @@ SELECT * FROM getfoo3(1) WITH ORDINALITY AS t1(v,o);
Ed | 2
(2 rows)
+SELECT * FROM ROWS FROM( getfoo3(1) AS ()) AS t1; -- error, not a composite / record
+ERROR: only composite and "record" returning functions can be returned as record
+LINE 1: SELECT * FROM ROWS FROM( getfoo3(1) AS ()) AS t1;
+ ^
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo3(1);
SELECT * FROM vw_getfoo;
getfoo3
@@ -461,12 +473,24 @@ SELECT * FROM getfoo4(1) AS t1;
1 | 1 | Joe
(1 row)
+SELECT * FROM ROWS FROM( getfoo4(1) AS ()) AS t1;
+ t1
+-----------
+ (1,1,Joe)
+(1 row)
+
SELECT * FROM getfoo4(1) WITH ORDINALITY AS t1(a,b,c,o);
a | b | c | o
---+---+-----+---
1 | 1 | Joe | 1
(1 row)
+SELECT * FROM ROWS FROM( getfoo4(1) AS ()) WITH ORDINALITY AS t1(abc,o);
+ abc | o
+-----------+---
+ (1,1,Joe) | 1
+(1 row)
+
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo4(1);
SELECT * FROM vw_getfoo;
fooid | foosubid | fooname
@@ -492,6 +516,13 @@ SELECT * FROM getfoo5(1) AS t1;
1 | 2 | Ed
(2 rows)
+SELECT * FROM ROWS FROM(getfoo5(1) AS ()) AS t1;
+ t1
+-----------
+ (1,1,Joe)
+ (1,2,Ed)
+(2 rows)
+
SELECT * FROM getfoo5(1) WITH ORDINALITY AS t1(a,b,c,o);
a | b | c | o
---+---+-----+---
@@ -499,6 +530,13 @@ SELECT * FROM getfoo5(1) WITH ORDINALITY AS t1(a,b,c,o);
1 | 2 | Ed | 2
(2 rows)
+SELECT * FROM ROWS FROM(getfoo5(1) AS ()) WITH ORDINALITY AS t1(abc,o);
+ abc | o
+-----------+---
+ (1,1,Joe) | 1
+ (1,2,Ed) | 2
+(2 rows)
+
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo5(1);
SELECT * FROM vw_getfoo;
fooid | foosubid | fooname
@@ -525,12 +563,24 @@ SELECT * FROM getfoo6(1) AS t1(fooid int, foosubid int, fooname text);
1 | 1 | Joe
(1 row)
+SELECT * FROM ROWS FROM( getfoo6(1) AS ()) AS t1;
+ t1
+-----------
+ (1,1,Joe)
+(1 row)
+
SELECT * FROM ROWS FROM( getfoo6(1) AS (fooid int, foosubid int, fooname text) ) WITH ORDINALITY;
fooid | foosubid | fooname | ordinality
-------+----------+---------+------------
1 | 1 | Joe | 1
(1 row)
+SELECT * FROM ROWS FROM( getfoo6(1) AS ()) WITH ORDINALITY;
+ getfoo6 | ordinality
+-----------+------------
+ (1,1,Joe) | 1
+(1 row)
+
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo6(1) AS
(fooid int, foosubid int, fooname text);
SELECT * FROM vw_getfoo;
@@ -559,6 +609,13 @@ SELECT * FROM getfoo7(1) AS t1(fooid int, foosubid int, fooname text);
1 | 2 | Ed
(2 rows)
+SELECT * FROM ROWS FROM( getfoo7(1) AS ()) AS t1;
+ t1
+-----------
+ (1,1,Joe)
+ (1,2,Ed)
+(2 rows)
+
SELECT * FROM ROWS FROM( getfoo7(1) AS (fooid int, foosubid int, fooname text) ) WITH ORDINALITY;
fooid | foosubid | fooname | ordinality
-------+----------+---------+------------
@@ -566,6 +623,13 @@ SELECT * FROM ROWS FROM( getfoo7(1) AS (fooid int, foosubid int, fooname text) )
1 | 2 | Ed | 2
(2 rows)
+SELECT * FROM ROWS FROM( getfoo7(1) AS ()) WITH ORDINALITY;
+ getfoo7 | ordinality
+-----------+------------
+ (1,1,Joe) | 1
+ (1,2,Ed) | 2
+(2 rows)
+
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo7(1) AS
(fooid int, foosubid int, fooname text);
SELECT * FROM vw_getfoo;
@@ -601,6 +665,10 @@ SELECT * FROM getfoo8(1) WITH ORDINALITY AS t1(v,o);
1 | 1
(1 row)
+SELECT * FROM ROWS FROM( getfoo8(1) AS ()) AS t1; -- error, not a composite / record
+ERROR: only composite and "record" returning functions can be returned as record
+LINE 1: SELECT * FROM ROWS FROM( getfoo8(1) AS ()) AS t1;
+ ^
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo8(1);
SELECT * FROM vw_getfoo;
getfoo8
@@ -631,6 +699,12 @@ SELECT * FROM getfoo9(1) WITH ORDINALITY AS t1(a,b,c,o);
1 | 1 | Joe | 1
(1 row)
+SELECT * FROM ROWS FROM( getfoo9(1) AS ()) AS t1;
+ t1
+-----------
+ (1,1,Joe)
+(1 row)
+
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo9(1);
SELECT * FROM vw_getfoo;
fooid | foosubid | fooname
@@ -670,6 +744,17 @@ select * from rows from(getfoo9(1),getfoo8(1),
| | | | 1 | 2 | Ed | | | | 1 | 2 | Ed | | | | Ed | 1 | | 2
(2 rows)
+select * from rows from(getfoo9(1),getfoo8(1),
+ getfoo7(1) AS (fooid int, foosubid int, fooname text),
+ getfoo6(1) AS (),
+ getfoo5(1),getfoo4(1),getfoo3(1),getfoo2(1),getfoo1(1))
+ with ordinality as t1(a,b,c,d,e,f,g,h,k,l,m,o,p,q,r,s,t,u);
+ a | b | c | d | e | f | g | h | k | l | m | o | p | q | r | s | t | u
+---+---+-----+---+---+---+-----+-----------+---+---+-----+---+---+-----+-----+---+---+---
+ 1 | 1 | Joe | 1 | 1 | 1 | Joe | (1,1,Joe) | 1 | 1 | Joe | 1 | 1 | Joe | Joe | 1 | 1 | 1
+ | | | | 1 | 2 | Ed | | 1 | 2 | Ed | | | | Ed | 1 | | 2
+(2 rows)
+
create temporary view vw_foo as
select * from rows from(getfoo9(1),
getfoo7(1) AS (fooid int, foosubid int, fooname text),
diff --git a/src/test/regress/sql/rangefuncs.sql b/src/test/regress/sql/rangefuncs.sql
index c8edc55..b43473a 100644
--- a/src/test/regress/sql/rangefuncs.sql
+++ b/src/test/regress/sql/rangefuncs.sql
@@ -93,6 +93,7 @@ INSERT INTO foo VALUES(2,1,'Mary');
CREATE FUNCTION getfoo1(int) RETURNS int AS 'SELECT $1;' LANGUAGE SQL;
SELECT * FROM getfoo1(1) AS t1;
SELECT * FROM getfoo1(1) WITH ORDINALITY AS t1(v,o);
+SELECT * FROM ROWS FROM( getfoo1(1) AS ()) AS t1; -- error, not a composite / record
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo1(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -104,6 +105,7 @@ DROP VIEW vw_getfoo;
CREATE FUNCTION getfoo2(int) RETURNS setof int AS 'SELECT fooid FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo2(1) AS t1;
SELECT * FROM getfoo2(1) WITH ORDINALITY AS t1(v,o);
+SELECT * FROM ROWS FROM( getfoo2(1) AS ()) AS t1; -- error, not a composite / record
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo2(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -115,6 +117,7 @@ DROP VIEW vw_getfoo;
CREATE FUNCTION getfoo3(int) RETURNS setof text AS 'SELECT fooname FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo3(1) AS t1;
SELECT * FROM getfoo3(1) WITH ORDINALITY AS t1(v,o);
+SELECT * FROM ROWS FROM( getfoo3(1) AS ()) AS t1; -- error, not a composite / record
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo3(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -125,7 +128,9 @@ DROP VIEW vw_getfoo;
-- sql, proretset = f, prorettype = c
CREATE FUNCTION getfoo4(int) RETURNS foo AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo4(1) AS t1;
+SELECT * FROM ROWS FROM( getfoo4(1) AS ()) AS t1;
SELECT * FROM getfoo4(1) WITH ORDINALITY AS t1(a,b,c,o);
+SELECT * FROM ROWS FROM( getfoo4(1) AS ()) WITH ORDINALITY AS t1(abc,o);
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo4(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -136,7 +141,9 @@ DROP VIEW vw_getfoo;
-- sql, proretset = t, prorettype = c
CREATE FUNCTION getfoo5(int) RETURNS setof foo AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo5(1) AS t1;
+SELECT * FROM ROWS FROM(getfoo5(1) AS ()) AS t1;
SELECT * FROM getfoo5(1) WITH ORDINALITY AS t1(a,b,c,o);
+SELECT * FROM ROWS FROM(getfoo5(1) AS ()) WITH ORDINALITY AS t1(abc,o);
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo5(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -147,7 +154,9 @@ DROP VIEW vw_getfoo;
-- sql, proretset = f, prorettype = record
CREATE FUNCTION getfoo6(int) RETURNS RECORD AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo6(1) AS t1(fooid int, foosubid int, fooname text);
+SELECT * FROM ROWS FROM( getfoo6(1) AS ()) AS t1;
SELECT * FROM ROWS FROM( getfoo6(1) AS (fooid int, foosubid int, fooname text) ) WITH ORDINALITY;
+SELECT * FROM ROWS FROM( getfoo6(1) AS ()) WITH ORDINALITY;
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo6(1) AS
(fooid int, foosubid int, fooname text);
SELECT * FROM vw_getfoo;
@@ -161,7 +170,9 @@ DROP VIEW vw_getfoo;
-- sql, proretset = t, prorettype = record
CREATE FUNCTION getfoo7(int) RETURNS setof record AS 'SELECT * FROM foo WHERE fooid = $1;' LANGUAGE SQL;
SELECT * FROM getfoo7(1) AS t1(fooid int, foosubid int, fooname text);
+SELECT * FROM ROWS FROM( getfoo7(1) AS ()) AS t1;
SELECT * FROM ROWS FROM( getfoo7(1) AS (fooid int, foosubid int, fooname text) ) WITH ORDINALITY;
+SELECT * FROM ROWS FROM( getfoo7(1) AS ()) WITH ORDINALITY;
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo7(1) AS
(fooid int, foosubid int, fooname text);
SELECT * FROM vw_getfoo;
@@ -176,6 +187,7 @@ DROP VIEW vw_getfoo;
CREATE FUNCTION getfoo8(int) RETURNS int AS 'DECLARE fooint int; BEGIN SELECT fooid into fooint FROM foo WHERE fooid = $1; RETURN fooint; END;' LANGUAGE plpgsql;
SELECT * FROM getfoo8(1) AS t1;
SELECT * FROM getfoo8(1) WITH ORDINALITY AS t1(v,o);
+SELECT * FROM ROWS FROM( getfoo8(1) AS ()) AS t1; -- error, not a composite / record
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo8(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -187,6 +199,7 @@ DROP VIEW vw_getfoo;
CREATE FUNCTION getfoo9(int) RETURNS foo AS 'DECLARE footup foo%ROWTYPE; BEGIN SELECT * into footup FROM foo WHERE fooid = $1; RETURN footup; END;' LANGUAGE plpgsql;
SELECT * FROM getfoo9(1) AS t1;
SELECT * FROM getfoo9(1) WITH ORDINALITY AS t1(a,b,c,o);
+SELECT * FROM ROWS FROM( getfoo9(1) AS ()) AS t1;
CREATE VIEW vw_getfoo AS SELECT * FROM getfoo9(1);
SELECT * FROM vw_getfoo;
DROP VIEW vw_getfoo;
@@ -206,6 +219,11 @@ select * from rows from(getfoo9(1),getfoo8(1),
getfoo6(1) AS (fooid int, foosubid int, fooname text),
getfoo5(1),getfoo4(1),getfoo3(1),getfoo2(1),getfoo1(1))
with ordinality as t1(a,b,c,d,e,f,g,h,i,j,k,l,m,o,p,q,r,s,t,u);
+select * from rows from(getfoo9(1),getfoo8(1),
+ getfoo7(1) AS (fooid int, foosubid int, fooname text),
+ getfoo6(1) AS (),
+ getfoo5(1),getfoo4(1),getfoo3(1),getfoo2(1),getfoo1(1))
+ with ordinality as t1(a,b,c,d,e,f,g,h,k,l,m,o,p,q,r,s,t,u);
create temporary view vw_foo as
select * from rows from(getfoo9(1),
--
2.9.3
Attachment: 0005-Basic-implementation-of-targetlist-SRFs-via-ROWS-FRO.patch (text/x-patch; charset=us-ascii)
From f18baaeb55803c179e54a4c592532ed27cf8a815 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Fri, 29 Jul 2016 18:51:02 -0700
Subject: [PATCH 5/6] Basic implementation of targetlist SRFs via ROWS FROM.
---
src/backend/executor/execQual.c | 7 +
src/backend/nodes/copyfuncs.c | 2 +
src/backend/nodes/equalfuncs.c | 2 +
src/backend/nodes/outfuncs.c | 2 +
src/backend/nodes/readfuncs.c | 2 +
src/backend/optimizer/plan/initsplan.c | 4 +
src/backend/optimizer/plan/planner.c | 8 +
src/backend/optimizer/prep/prepjointree.c | 4 +
src/backend/optimizer/util/clauses.c | 549 ++++++++++++++++++++++++++++++
src/backend/parser/analyze.c | 10 +
src/backend/parser/parse_func.c | 5 +
src/backend/parser/parse_oper.c | 5 +
src/include/nodes/parsenodes.h | 6 +-
src/include/optimizer/clauses.h | 2 +
src/include/parser/parse_node.h | 1 +
src/test/regress/expected/aggregates.out | 21 +-
src/test/regress/expected/limit.out | 72 ++--
src/test/regress/expected/portals.out | 12 +-
src/test/regress/expected/rangefuncs.out | 10 +-
src/test/regress/expected/subselect.out | 29 +-
src/test/regress/expected/tsrf.out | 15 +-
src/test/regress/expected/union.out | 8 +-
src/test/regress/output/misc.source | 18 +-
23 files changed, 716 insertions(+), 78 deletions(-)
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 79589d0..d9e2797 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -2074,6 +2074,13 @@ ExecEvalFunc(FuncExprState *fcache,
ExecInitFcache(func->funcid, func->inputcollid, fcache,
econtext->ecxt_per_query_memory, true);
+ if (fcache->func.fn_retset)
+ {
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+ }
+
/*
* We need to invoke ExecMakeFunctionResult if either the function itself
* or any of its input expressions can return a set. Otherwise, invoke
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 292ab6c..589fc27 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -2167,6 +2167,7 @@ _copyRangeTblEntry(const RangeTblEntry *from)
COPY_BITMAPSET_FIELD(insertedCols);
COPY_BITMAPSET_FIELD(updatedCols);
COPY_NODE_FIELD(securityQuals);
+ COPY_NODE_FIELD(deps);
return newnode;
}
@@ -2749,6 +2750,7 @@ _copyQuery(const Query *from)
COPY_SCALAR_FIELD(hasModifyingCTE);
COPY_SCALAR_FIELD(hasForUpdate);
COPY_SCALAR_FIELD(hasRowSecurity);
+ COPY_SCALAR_FIELD(hasTargetSRF);
COPY_NODE_FIELD(cteList);
COPY_NODE_FIELD(rtable);
COPY_NODE_FIELD(jointree);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f69dc8e..b24e623 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -927,6 +927,7 @@ _equalQuery(const Query *a, const Query *b)
COMPARE_SCALAR_FIELD(hasModifyingCTE);
COMPARE_SCALAR_FIELD(hasForUpdate);
COMPARE_SCALAR_FIELD(hasRowSecurity);
+ COMPARE_SCALAR_FIELD(hasTargetSRF);
COMPARE_NODE_FIELD(cteList);
COMPARE_NODE_FIELD(rtable);
COMPARE_NODE_FIELD(jointree);
@@ -2480,6 +2481,7 @@ _equalRangeTblEntry(const RangeTblEntry *a, const RangeTblEntry *b)
COMPARE_BITMAPSET_FIELD(insertedCols);
COMPARE_BITMAPSET_FIELD(updatedCols);
COMPARE_NODE_FIELD(securityQuals);
+ COMPARE_NODE_FIELD(deps);
return true;
}
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index bea295b..86cd575 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2688,6 +2688,7 @@ _outQuery(StringInfo str, const Query *node)
WRITE_BOOL_FIELD(hasModifyingCTE);
WRITE_BOOL_FIELD(hasForUpdate);
WRITE_BOOL_FIELD(hasRowSecurity);
+ WRITE_BOOL_FIELD(hasTargetSRF);
WRITE_NODE_FIELD(cteList);
WRITE_NODE_FIELD(rtable);
WRITE_NODE_FIELD(jointree);
@@ -2865,6 +2866,7 @@ _outRangeTblEntry(StringInfo str, const RangeTblEntry *node)
WRITE_BITMAPSET_FIELD(insertedCols);
WRITE_BITMAPSET_FIELD(updatedCols);
WRITE_NODE_FIELD(securityQuals);
+ WRITE_NODE_FIELD(deps);
}
static void
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 70bb73d..174b02f 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -244,6 +244,7 @@ _readQuery(void)
READ_BOOL_FIELD(hasModifyingCTE);
READ_BOOL_FIELD(hasForUpdate);
READ_BOOL_FIELD(hasRowSecurity);
+ READ_BOOL_FIELD(hasTargetSRF);
READ_NODE_FIELD(cteList);
READ_NODE_FIELD(rtable);
READ_NODE_FIELD(jointree);
@@ -1338,6 +1339,7 @@ _readRangeTblEntry(void)
READ_BITMAPSET_FIELD(insertedCols);
READ_BITMAPSET_FIELD(updatedCols);
READ_NODE_FIELD(securityQuals);
+ READ_NODE_FIELD(deps);
READ_DONE();
}
diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c
index 84ce6b3..ada34cc 100644
--- a/src/backend/optimizer/plan/initsplan.c
+++ b/src/backend/optimizer/plan/initsplan.c
@@ -339,6 +339,10 @@ extract_lateral_references(PlannerInfo *root, RelOptInfo *brel, Index rtindex)
return; /* keep compiler quiet */
}
+ /* DIRTY hack time, add dependency for targetlist SRFs */
+ vars = list_concat(vars,
+ pull_vars_of_level((Node *) rte->deps, 0));
+
if (vars == NIL)
return; /* nothing to do */
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 174210b..986c92b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -505,6 +505,14 @@ subquery_planner(PlannerGlobal *glob, Query *parse,
root->non_recursive_path = NULL;
/*
+ * Convert SRFs in the targetlist into FUNCTION RTEs. Since this, if
+ * applicable, moves the main portion of the query into a subselect, it has
+ * to be done early in subquery_planner().
+ */
+ if (parse->hasTargetSRF)
+ unsrfify(root);
+
+ /*
* If there is a WITH list, process each WITH query and build an initplan
* SubPlan structure for it.
*/
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index a334f15..0e06a98 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -1982,6 +1982,10 @@ replace_vars_in_jointree(Node *jtnode,
Assert(false);
break;
}
+
+ rte->deps = (List *)
+ pullup_replace_vars((Node *) rte->deps,
+ context);
}
}
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 3830bc9..9c502bd 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -36,6 +36,7 @@
#include "optimizer/cost.h"
#include "optimizer/planmain.h"
#include "optimizer/prep.h"
+#include "optimizer/tlist.h"
#include "optimizer/var.h"
#include "parser/analyze.h"
#include "parser/parse_agg.h"
@@ -95,6 +96,30 @@ typedef struct
char max_interesting; /* worst proparallel hazard of interest */
} max_parallel_hazard_context;
+typedef struct unsrfify_context
+{
+ PlannerInfo *root;
+ /* query being converted */
+ Query *outer_query;
+ /* created subquery */
+ Query *inner_query;
+ /* RT index of the above */
+ Index subquery_rti;
+
+ /* targetlist of the new subquery */
+ List *subquery_tlist;
+ List *subquery_colnames;
+
+ /* RTE for the currently generated function RTE */
+ RangeTblEntry *currte;
+ Index currti; /* and its RT index */
+ /* current column number in function RTE */
+ int coloff;
+
+ /* current target's resname during expression iteration */
+ char *current_resname;
+} unsrfify_context;
+
static bool contain_agg_clause_walker(Node *node, void *context);
static bool get_agg_clause_costs_walker(Node *node,
get_agg_clause_costs_context *context);
@@ -2390,6 +2415,530 @@ rowtype_field_matches(Oid rowtypeid, int fieldnum,
return true;
}
+/*
+ * Push down expression into the subquery, return resno of targetlist entry.
+ */
+static int
+unsrfify_push_expr_to_subquery(Expr *expr, Index sortgroupref,
+ unsrfify_context *context)
+{
+ ListCell *tc;
+ int resno = 1;
+ char *resname = context->current_resname;
+ TargetEntry *new_te;
+
+ /*
+ * Check whether we already moved this expression to subquery, if so,
+ * reuse.
+ */
+ foreach(tc, context->subquery_tlist)
+ {
+ TargetEntry *te = (TargetEntry *) lfirst(tc);
+ Expr *oldexpr = te->expr;
+
+ if (equal(oldexpr, expr))
+ {
+ if (sortgroupref > 0)
+ {
+ if (te->ressortgroupref != sortgroupref &&
+ te->ressortgroupref > 0)
+ {
+ /* FIXME: might happen with duplicate expressions? */
+ elog(ERROR, "non-unique ressortgroupref?");
+ }
+ else
+ {
+ te->ressortgroupref = sortgroupref;
+ return resno;
+ }
+ }
+ return resno;
+ }
+ resno++;
+ }
+
+ /* XXX */
+ if (!resname)
+ resname = "...";
+
+ Assert(resno == list_length(context->subquery_tlist) + 1);
+
+ new_te = makeTargetEntry((Expr *) copyObject(expr),
+ resno, resname, false);
+ new_te->ressortgroupref = sortgroupref;
+ context->subquery_tlist = lappend(context->subquery_tlist, new_te);
+ context->subquery_colnames = lappend(context->subquery_colnames,
+ makeString(context->current_resname));
+
+ return resno;
+}
+
+/*
+ * Change target list to reference subquery.
+ *
+ * TargetEntries that don't contain a set-returning function are pushed down
+ * entirely, others are modified to have relevant expressions refer to (new)
+ * entries in the subquery targetlist.
+ */
+static Node *
+unsrfify_reference_subquery_mutator(Node *node, unsrfify_context *context)
+{
+ check_stack_depth();
+
+ if (node == NULL)
+ return NULL;
+
+ switch (nodeTag(node))
+ {
+ case T_TargetEntry:
+ {
+ TargetEntry *te = (TargetEntry *) node;
+
+ /*
+ * Note that we're intentionally pushing down sortgrouprefs,
+ * that way grouping et al will work. It's more than a bit
+ * debatable though to do this unconditionally: We'll
+ * currently end up with sortgrouprefs in both top-level and
+ * subquery.
+ */
+
+ /* XXX: naming here isn't great */
+ if (!te->resname)
+ context->current_resname = "...";
+ else
+ context->current_resname = pstrdup(te->resname);
+
+ /* if expression doesn't return set, push down entirely */
+ if (!expression_returns_set((Node *) te->expr))
+ {
+ AttrNumber resno =
+ unsrfify_push_expr_to_subquery(te->expr,
+ te->ressortgroupref,
+ context);
+ te = flatCopyTargetEntry(te);
+
+ te->expr = (Expr *) makeVar(context->subquery_rti,
+ resno,
+ exprType((Node *) te->expr),
+ exprTypmod((Node *) te->expr),
+ exprCollation((Node *) te->expr),
+ 0);
+ }
+ else
+ {
+ te = (TargetEntry *)
+ expression_tree_mutator((Node *) te,
+ unsrfify_reference_subquery_mutator,
+ (void *) context);
+ }
+
+ context->current_resname = NULL;
+ return (Node *) te;
+ }
+ break;
+ /* Anything additional? */
+ case T_Var:
+ case T_Aggref:
+ case T_GroupingFunc:
+ case T_WindowFunc:
+ case T_Param /* ? */:
+ /*
+ * Vars, aggrefs, groupingfuncs, ... come from the subquery into
+ * which the main query is being moved. For each reference in the
+ * main targetlist - containing the reference to the SRF and such
+ * - move the underlying clause as a separate TargetEntry into the
+ * subquery, and reference that.
+ *
+ * Note that varlevelsup for expressions in the subquery is later
+ * adjusted with IncrementVarSublevelsUp, together with the other
+ * expressions in the subquery.
+ */
+ {
+ AttrNumber resno =
+ unsrfify_push_expr_to_subquery((Expr *) node, 0, context);
+
+ return (Node *) makeVar(context->subquery_rti,
+ resno,
+ exprType(node),
+ exprTypmod(node),
+ exprCollation(node),
+ 0);
+ }
+ return node;
+ default:
+ break;
+ }
+
+ return expression_tree_mutator(node, unsrfify_reference_subquery_mutator,
+ (void *) context);
+}
+
+static Node *
+unsrfify_implement_srfs_mutator(Node *node, unsrfify_context *context)
+{
+ check_stack_depth();
+
+ if (node == NULL)
+ return NULL;
+ switch (nodeTag(node))
+ {
+ case T_OpExpr:
+ {
+ OpExpr *expr = (OpExpr *) node;
+
+ if (expr->opretset)
+ {
+ /*
+ * TODO: Hrmpf, implement. And why is there not a single
+ * test for this :(
+ */
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("XXX: SETOF record returning operators are not supported")));
+ }
+ }
+ break;
+
+ case T_FuncExpr:
+ {
+ FuncExpr *expr = (FuncExpr *) node;
+
+ /*
+ * For set returning functions, move them to the current
+ * level's ROWS FROM expression, and add a Var referencing
+ * that expressions result.
+ */
+ if (expr->funcretset)
+ {
+ RangeTblEntry *old_currte;
+ Index old_currti;
+ int old_coloff;
+
+ /*
+ * Process set-returning arguments to set-returning
+ * functions as a separate ROWS FROM expression, again
+ * laterally joined to this.
+ */
+ old_currte = context->currte;
+ old_currti = context->currti;
+ old_coloff = context->coloff;
+
+ context->currte = NULL;
+ context->currti = 0;
+ context->coloff = 0;
+
+ expr->args = (List *)
+ expression_tree_mutator((Node *) expr->args,
+ unsrfify_implement_srfs_mutator,
+ (void *) context);
+ context->currte = old_currte;
+ context->currti = old_currti;
+ context->coloff = old_coloff;
+
+ }
+ else
+ {
+ expr->args = (List *)
+ expression_tree_mutator((Node *) expr->args,
+ unsrfify_implement_srfs_mutator,
+ (void *) context);
+ }
+
+ if (expr->funcretset)
+ {
+ RangeTblEntry *rte;
+ RangeTblFunction *rtfunc;
+ RangeTblRef *rtf;
+ Index rti;
+ Oid funcrettype;
+ /* FIXME: used in places it shouldn't */
+ char *funcname = get_func_name(expr->funcid);
+ bool asrecord;
+
+ funcrettype = exprType(node);
+
+ asrecord = type_is_rowtype(funcrettype);
+
+ if (context->currte == NULL)
+ {
+ Alias *eref;
+
+ rte = makeNode(RangeTblEntry);
+ rte->rtekind = RTE_FUNCTION;
+ rte->lateral = true;
+ rte->inh = false;
+ rte->inFromCl = true;
+
+ eref = makeAlias(funcname, NIL);
+
+ rte->eref = eref;
+
+ rte->funcordinality = false;
+
+ /*
+ * DIRTY hack time: add LATERAL dependency to the
+ * subquery containing the original query. That forces
+ * the planner to evaluate the subquery first
+ * (i.e. nestloop subquery to SRF, not the other way
+ * round), persisting the output ordering of the SRF.
+ */
+ rte->deps = list_make1(makeVar(context->subquery_rti, 0, RECORDOID, -1, InvalidOid, 0));
+
+ context->outer_query->rtable =
+ lappend(context->outer_query->rtable, rte);
+
+ rti = list_length(context->outer_query->rtable);
+
+ rtf = makeNode(RangeTblRef);
+ rtf->rtindex = rti;
+
+ context->outer_query->jointree->fromlist =
+ lappend(context->outer_query->jointree->fromlist, rtf);
+
+ context->currte = rte;
+ context->currti = rti;
+ }
+ else
+ {
+ rte = context->currte;
+ rti = context->currti;
+ }
+
+ /* add SRF RTE */
+ rtfunc = makeNode(RangeTblFunction);
+ rtfunc->funcexpr = (Node *) expr;
+ rtfunc->funccolcount = 1;
+ rtfunc->funcasrecord = asrecord;
+
+ rte->functions = lappend(rte->functions, rtfunc);
+
+ rte->eref->colnames = lappend(rte->eref->colnames,
+ makeString(funcname));
+
+ /* replace reference to RTE */
+ return (Node *) makeVar(rti,
+ ++context->coloff,
+ funcrettype,
+ exprTypmod(node),
+ expr->funccollid,
+ 0);
+ }
+ }
+ break;
+ default:
+ break;
+ }
+
+ return expression_tree_mutator(node, unsrfify_implement_srfs_mutator,
+ (void *) context);
+}
+
+/*
+ * Implement set-returning-functions in the targetlist using ROWS FROM() in
+ * the from list.
+ */
+void
+unsrfify(PlannerInfo *root)
+{
+ unsrfify_context context;
+ Query *outer_query = root->parse;
+ List *outerOldTlist = root->parse->targetList;
+ bool sortContainsSRF = false;
+ bool groupContainsSRF = false;
+ Query *inner_query;
+ RangeTblEntry *rte;
+ RangeTblRef *rtf;
+ ListCell *lc;
+
+ /* skip work if targetlist doesn't contain an SRF */
+ if (!expression_returns_set((Node *) root->parse->targetList))
+ {
+ return;
+ }
+
+ Assert(outer_query->commandType != CMD_UPDATE);
+
+ inner_query = makeNode(Query);
+ rte = makeNode(RangeTblEntry);
+ rtf = makeNode(RangeTblRef);
+
+ memset(&context, 0, sizeof(context));
+ context.root = root;
+ context.outer_query = outer_query;
+ context.inner_query = inner_query;
+
+ /* check whether sorting has to be performed before/after SRF processing */
+ foreach(lc, root->parse->sortClause)
+ {
+ SortGroupClause *sgc = lfirst(lc);
+ Node *sortExpr = get_sortgroupclause_expr(sgc, root->parse->targetList);
+
+ if (expression_returns_set(sortExpr))
+ {
+ sortContainsSRF = true;
+ break;
+ }
+ }
+
+ /* check whether grouping has to be performed before/after SRF processing */
+ foreach(lc, root->parse->groupClause)
+ {
+ SortGroupClause *sgc = lfirst(lc);
+ Node *groupExpr = get_sortgroupclause_expr(sgc, root->parse->targetList);
+
+ if (expression_returns_set(groupExpr))
+ {
+ groupContainsSRF = true;
+ break;
+ }
+ }
+
+ /*
+ * Move main query processing into a subquery. Otherwise aggregates would
+ * possibly process more rows, due to the SRF expanding the result set. We
+ * could perform this work conditionally, but that seems like an
+ * unnecessary complication.
+ *
+ * If the query has an ORDER BY, but that ORDER BY does not reference SRF
+ * output, then SRF expansion should happen after the sort, for two
+ * reasons: firstly, to process fewer rows; secondly, to produce less
+ * confusing results, since the output of the SRF remains sorted.
+ */
+ rte->rtekind = RTE_SUBQUERY;
+ rte->subquery = inner_query;
+ rte->security_barrier = false;
+ context.subquery_rti = list_length(outer_query->rtable) + 1;
+ rtf->rtindex = context.subquery_rti;
+
+ inner_query->commandType = CMD_SELECT;
+ inner_query->querySource = QSRC_TARGETLIST_SRF;
+ inner_query->canSetTag = true;
+
+ /*
+ * Copy the range-table, without resetting it on the outside. If the outer
+ * query is a data-modifying one, resultRelation needs to point to the
+ * actually modified table. XXX: But that doesn't work at all for
+ * UPDATEs, because there expand_targetlist() will add Vars pointing to
+ * the result relation.
+ */
+ inner_query->rtable = copyObject(outer_query->rtable);
+
+ inner_query->jointree = outer_query->jointree;
+
+ /*
+ * Transfer group / window computation to child, unless referencing SRF
+ * output.
+ */
+ if (!groupContainsSRF)
+ {
+ inner_query->hasAggs = outer_query->hasAggs;
+ outer_query->hasAggs = false; /* moved to subquery */
+ }
+ else
+ {
+ inner_query->hasAggs = false;
+ }
+
+ inner_query->hasWindowFuncs = outer_query->hasWindowFuncs; /* FIXME */
+ outer_query->hasWindowFuncs = false;
+
+ /* can still be present in outer query */
+ inner_query->hasSubLinks = outer_query->hasSubLinks;
+
+ /*
+ * CTEs stay on outer level, IncrementVarSublevelsUp adjusts ctelevelsup.
+ */
+ inner_query->hasRecursive = false;
+ inner_query->hasModifyingCTE = false;
+
+ inner_query->hasForUpdate = false;
+
+ inner_query->hasRowSecurity = outer_query->hasRowSecurity;
+
+ /* we've expanded everything */
+ outer_query->hasTargetSRF = false;
+
+ outer_query->rtable = lappend(outer_query->rtable, rte);
+
+ outer_query->jointree = makeFromExpr(list_make1(rtf), NULL);
+
+ /* targetlist is set later */
+
+ /* not modifying */
+ inner_query->onConflict = NULL;
+ inner_query->returningList = NIL;
+
+ /*
+ * Transfer group / window related clauses to child, unless referencing
+ * SRF output.
+ */
+ if (!groupContainsSRF && list_length(outer_query->groupClause) > 0)
+ {
+ inner_query->groupClause = outer_query->groupClause;
+ outer_query->groupClause = NIL;
+ }
+
+ inner_query->groupingSets = outer_query->groupingSets;
+ outer_query->groupingSets = NIL;
+
+ inner_query->havingQual = outer_query->havingQual;
+ outer_query->havingQual = NULL;
+
+ inner_query->windowClause = outer_query->windowClause;
+ outer_query->windowClause = NIL;
+
+ /* DISTINCT [ON] is computed outside */
+
+ /* sort is computed in sub query, unless referencing SRF output */
+ /* XXX: what about combinations with DISTINCT? */
+ if (!sortContainsSRF && list_length(outer_query->sortClause) > 0)
+ {
+ inner_query->sortClause = outer_query->sortClause;
+ outer_query->sortClause = NIL;
+ }
+
+
+ /* limit is processed after SRF expansion */
+
+ /* XXX: where should row marks be processed? */
+
+ /* XXX: where should set operations be processed? */
+ inner_query->setOperations = outer_query->setOperations;
+ outer_query->setOperations = NULL;
+
+ /* constraints should stay on top level */
+
+ /* XXX: where should WITH CHECK options be processed? */
+
+ /*
+ * Update the outer query's targetlist to reference subquery for all
+ * Vars, Aggs and such.
+ */
+ outer_query->targetList = (List *)
+ unsrfify_reference_subquery_mutator((Node *) outerOldTlist,
+ &context);
+ /*
+ * Now convert all targetlist SRFs into FUNCTION RTEs.
+ */
+ outer_query->targetList = (List *)
+ unsrfify_implement_srfs_mutator((Node *) outer_query->targetList,
+ &context);
+
+
+ rte->eref = makeAlias("srf", context.subquery_colnames);
+
+ inner_query->targetList = context.subquery_tlist;
+
+ /*
+ * varlevelsup for expressions not local to the query (i.e. varlevelsup >
+ * 0) has to be increased by one, to adjust for the additional layer of
+ * subquery added. Do so after the above processing populating the
+ * subselect's targetlist, to avoid having to deal with varlevelsup in
+ * multiple places.
+ */
+ IncrementVarSublevelsUp((Node *) inner_query, 1, 1);
+}
+
/*--------------------
* eval_const_expressions
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index cf5bc86..f81db37 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -418,6 +418,7 @@ transformDeleteStmt(ParseState *pstate, DeleteStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs)
parseCheckAggregates(pstate, qry);
@@ -820,6 +821,7 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasSubLinks = pstate->p_hasSubLinks;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
assign_query_collations(pstate, qry);
@@ -1232,6 +1234,7 @@ transformSelectStmt(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
@@ -1463,6 +1466,11 @@ transformValuesClause(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
+ if (pstate->p_hasTargetSRF)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
assign_query_collations(pstate, qry);
return qry;
@@ -1692,6 +1700,7 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
qry->hasAggs = pstate->p_hasAggs;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
@@ -2171,6 +2180,7 @@ transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt)
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
qry->hasSubLinks = pstate->p_hasSubLinks;
+ qry->hasTargetSRF = pstate->p_hasTargetSRF;
assign_query_collations(pstate, qry);
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index 61af484..770903d 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -625,6 +625,11 @@ ParseFuncOrColumn(ParseState *pstate, List *funcname, List *fargs,
exprLocation((Node *) llast(fargs)))));
}
+ if (retset)
+ {
+ pstate->p_hasTargetSRF = true;
+ }
+
/* build the appropriate output structure */
if (fdresult == FUNCDETAIL_NORMAL)
{
diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c
index e913d05..0a1a0f1 100644
--- a/src/backend/parser/parse_oper.c
+++ b/src/backend/parser/parse_oper.c
@@ -841,6 +841,11 @@ make_op(ParseState *pstate, List *opname, Node *ltree, Node *rtree,
ReleaseSysCache(tup);
+ if (result->opretset)
+ {
+ pstate->p_hasTargetSRF = true;
+ }
+
return (Expr *) result;
}
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index f26e651..969db8c 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -32,7 +32,8 @@ typedef enum QuerySource
QSRC_PARSER, /* added by parse analysis (now unused) */
QSRC_INSTEAD_RULE, /* added by unconditional INSTEAD rule */
QSRC_QUAL_INSTEAD_RULE, /* added by conditional INSTEAD rule */
- QSRC_NON_INSTEAD_RULE /* added by non-INSTEAD rule */
+ QSRC_NON_INSTEAD_RULE, /* added by non-INSTEAD rule */
+ QSRC_TARGETLIST_SRF /* added by targetlist SRF processing */
} QuerySource;
/* Sort ordering options for ORDER BY and CREATE INDEX */
@@ -122,6 +123,7 @@ typedef struct Query
bool hasModifyingCTE; /* has INSERT/UPDATE/DELETE in WITH */
bool hasForUpdate; /* FOR [KEY] UPDATE/SHARE was specified */
bool hasRowSecurity; /* rewriter has applied some RLS policy */
+ bool hasTargetSRF; /* has SRF in target list */
List *cteList; /* WITH list (of CommonTableExpr's) */
@@ -879,6 +881,8 @@ typedef struct RangeTblEntry
Bitmapset *insertedCols; /* columns needing INSERT permission */
Bitmapset *updatedCols; /* columns needing UPDATE permission */
List *securityQuals; /* any security barrier quals to apply */
+
+ List *deps; /* extra lateral dependencies added by targetlist-SRF rewrite */
} RangeTblEntry;
/*
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 9abef37..7fb5005 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -79,6 +79,8 @@ extern int NumRelids(Node *clause);
extern void CommuteOpExpr(OpExpr *clause);
extern void CommuteRowCompareExpr(RowCompareExpr *clause);
+extern void unsrfify(PlannerInfo *root);
+
extern Node *eval_const_expressions(PlannerInfo *root, Node *node);
extern Node *estimate_expression_value(PlannerInfo *root, Node *node);
diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h
index e3e359c..c0eec33 100644
--- a/src/include/parser/parse_node.h
+++ b/src/include/parser/parse_node.h
@@ -152,6 +152,7 @@ struct ParseState
bool p_hasWindowFuncs;
bool p_hasSubLinks;
bool p_hasModifyingCTE;
+ bool p_hasTargetSRF;
bool p_is_insert;
bool p_locked_from_parent;
Relation p_target_relation;
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index 45208a6..b791572 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -814,16 +814,19 @@ select max(unique2) from tenk1 order by max(unique2)+1;
explain (costs off)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
- QUERY PLAN
----------------------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------------------
Sort
- Sort Key: (generate_series(1, 3)) DESC
- InitPlan 1 (returns $0)
- -> Limit
- -> Index Only Scan Backward using tenk1_unique2 on tenk1
- Index Cond: (unique2 IS NOT NULL)
- -> Result
-(7 rows)
+ Sort Key: generate_series.generate_series DESC
+ -> Nested Loop
+ -> Subquery Scan on srf
+ -> Result
+ InitPlan 1 (returns $0)
+ -> Limit
+ -> Index Only Scan Backward using tenk1_unique2 on tenk1
+ Index Cond: (unique2 IS NOT NULL)
+ -> Function Scan on generate_series
+(10 rows)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
max | g
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 9c3eecf..45f0c38 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -208,13 +208,20 @@ select currval('testseq');
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------------------
Limit
- Output: unique1, unique2, (generate_series(1, 10))
- -> Index Scan using tenk1_unique2 on public.tenk1
- Output: unique1, unique2, generate_series(1, 10)
-(4 rows)
+ Output: srf.unique1, srf.unique2, generate_series.generate_series
+ -> Nested Loop
+ Output: srf.unique1, srf.unique2, generate_series.generate_series
+ -> Subquery Scan on srf
+ Output: srf.unique1, srf.unique2, srf.*
+ -> Index Scan using tenk1_unique2 on public.tenk1
+ Output: tenk1.unique1, tenk1.unique2
+ -> Function Scan on pg_catalog.generate_series
+ Output: generate_series.generate_series
+ Function Call: generate_series(1, 10)
+(11 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
@@ -232,18 +239,23 @@ select unique1, unique2, generate_series(1,10)
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by tenthous limit 7;
- QUERY PLAN
---------------------------------------------------------------------
+ QUERY PLAN
+--------------------------------------------------------------------------------------
Limit
- Output: unique1, unique2, (generate_series(1, 10)), tenthous
- -> Result
- Output: unique1, unique2, generate_series(1, 10), tenthous
- -> Sort
- Output: unique1, unique2, tenthous
- Sort Key: tenk1.tenthous
- -> Seq Scan on public.tenk1
- Output: unique1, unique2, tenthous
-(9 rows)
+ Output: srf.unique1, srf.unique2, generate_series.generate_series, srf."..."
+ -> Nested Loop
+ Output: srf.unique1, srf.unique2, generate_series.generate_series, srf."..."
+ -> Subquery Scan on srf
+ Output: srf.unique1, srf.unique2, srf."...", srf.*
+ -> Sort
+ Output: tenk1.unique1, tenk1.unique2, tenk1.tenthous
+ Sort Key: tenk1.tenthous
+ -> Seq Scan on public.tenk1
+ Output: tenk1.unique1, tenk1.unique2, tenk1.tenthous
+ -> Function Scan on pg_catalog.generate_series
+ Output: generate_series.generate_series
+ Function Call: generate_series(1, 10)
+(14 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by tenthous limit 7;
@@ -261,11 +273,12 @@ select unique1, unique2, generate_series(1,10)
-- use of random() is to keep planner from folding the expressions together
explain (verbose, costs off)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2;
- QUERY PLAN
-------------------------------------------------------------------------------------------------------
- Result
- Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
-(2 rows)
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------
+ Function Scan on generate_series
+ Output: generate_series.generate_series, generate_series.generate_series_1
+ Function Call: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
+(3 rows)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2;
s1 | s2
@@ -278,14 +291,15 @@ select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2;
explain (verbose, costs off)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2
order by s2 desc;
- QUERY PLAN
-------------------------------------------------------------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------
Sort
- Output: (generate_series(0, 2)), (generate_series(((random() * '0.1'::double precision))::integer, 2))
- Sort Key: (generate_series(((random() * '0.1'::double precision))::integer, 2)) DESC
- -> Result
- Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
-(5 rows)
+ Output: generate_series.generate_series, generate_series.generate_series_1
+ Sort Key: generate_series.generate_series_1 DESC
+ -> Function Scan on generate_series
+ Output: generate_series.generate_series, generate_series.generate_series_1
+ Function Call: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
+(6 rows)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2
order by s2 desc;
diff --git a/src/test/regress/expected/portals.out b/src/test/regress/expected/portals.out
index 3ae918a..7530bb8 100644
--- a/src/test/regress/expected/portals.out
+++ b/src/test/regress/expected/portals.out
@@ -1320,16 +1320,16 @@ fetch backward all in c1;
rollback;
begin;
explain (costs off) declare c2 cursor for select generate_series(1,3) as g;
- QUERY PLAN
-------------
- Result
+ QUERY PLAN
+----------------------------------
+ Function Scan on generate_series
(1 row)
explain (costs off) declare c2 scroll cursor for select generate_series(1,3) as g;
- QUERY PLAN
---------------
+ QUERY PLAN
+----------------------------------------
Materialize
- -> Result
+ -> Function Scan on generate_series
(2 rows)
declare c2 scroll cursor for select generate_series(1,3) as g;
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index 249dc67..635aa50 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -2080,12 +2080,10 @@ SELECT *,
END)
FROM
(VALUES (1,''), (2,'0000000049404'), (3,'FROM 10000000876')) v(id, str);
- id | str | lower
-----+------------------+------------------
- 1 | |
- 2 | 0000000049404 | 49404
- 3 | FROM 10000000876 | from 10000000876
-(3 rows)
+ id | str | lower
+----+---------------+-------
+ 2 | 0000000049404 | 49404
+(1 row)
-- check whole-row-Var handling in nested lateral functions (bug #11703)
create function extractq2(t int8_tbl) returns int8 as $$
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 0fc93d9..569784d 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -807,24 +807,31 @@ select * from int4_tbl where
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
- QUERY PLAN
-----------------------------------------------------------------
- Hash Semi Join
+ QUERY PLAN
+----------------------------------------------------------------------------
+ Nested Loop Semi Join
Output: o.f1
- Hash Cond: (o.f1 = "ANY_subquery".f1)
+ Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
- -> Hash
+ -> Materialize
Output: "ANY_subquery".f1, "ANY_subquery".g
-> Subquery Scan on "ANY_subquery"
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
- -> HashAggregate
- Output: i.f1, (generate_series(1, 2) / 10)
- Group Key: i.f1
- -> Seq Scan on public.int4_tbl i
- Output: i.f1
-(15 rows)
+ -> Nested Loop
+ Output: srf.f1, (generate_series.generate_series / 10)
+ -> Subquery Scan on srf
+ Output: srf.f1, srf.*
+ -> HashAggregate
+ Output: i.f1
+ Group Key: i.f1
+ -> Seq Scan on public.int4_tbl i
+ Output: i.f1
+ -> Function Scan on pg_catalog.generate_series
+ Output: generate_series.generate_series
+ Function Call: generate_series(1, 2)
+(22 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index f520a91..4a7ab6d 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -25,8 +25,8 @@ SELECT generate_series(1, 2), generate_series(1,4);
-----------------+-----------------
1 | 1
2 | 2
- 1 | 3
- 2 | 4
+ | 3
+ | 4
(4 rows)
-- srf, with SRF argument
@@ -43,7 +43,16 @@ SELECT generate_series(1, generate_series(1, 3));
-- srf, with two SRF arguments
SELECT generate_series(generate_series(1,3), generate_series(2, 4));
-ERROR: functions and operators can take at most one set argument
+ generate_series
+-----------------
+ 1
+ 2
+ 2
+ 3
+ 3
+ 4
+(6 rows)
+
CREATE TABLE few(id int, dataa text, datab text);
INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
-- SRF output order of sorting is maintained, if SRF is not referenced
diff --git a/src/test/regress/expected/union.out b/src/test/regress/expected/union.out
index 016571b..04e9765 100644
--- a/src/test/regress/expected/union.out
+++ b/src/test/regress/expected/union.out
@@ -622,16 +622,16 @@ SELECT * FROM
SELECT 2 AS t, 4 AS x) ss
WHERE x < 4
ORDER BY x;
- QUERY PLAN
---------------------------------------------------------
+ QUERY PLAN
+---------------------------------------------------------------
Sort
Sort Key: ss.x
-> Subquery Scan on ss
Filter: (ss.x < 4)
-> HashAggregate
- Group Key: (1), (generate_series(1, 10))
+ Group Key: (1), generate_series.generate_series
-> Append
- -> Result
+ -> Function Scan on generate_series
-> Result
(9 rows)
diff --git a/src/test/regress/output/misc.source b/src/test/regress/output/misc.source
index 5c88aad..d8d87cf 100644
--- a/src/test/regress/output/misc.source
+++ b/src/test/regress/output/misc.source
@@ -511,7 +511,7 @@ SELECT p.name, name(p.hobbies), name(equipment(p.hobbies)) FROM ONLY person p;
name | name | name
-------+-------------+---------------
mike | posthacking | advil
- mike | posthacking | peet's coffee
+ mike | | peet's coffee
joe | basketball | hightops
sally | basketball | hightops
(4 rows)
@@ -523,11 +523,11 @@ SELECT p.name, name(p.hobbies), name(equipment(p.hobbies)) FROM person* p;
name | name | name
-------+-------------+---------------
mike | posthacking | advil
- mike | posthacking | peet's coffee
+ mike | | peet's coffee
joe | basketball | hightops
sally | basketball | hightops
jeff | posthacking | advil
- jeff | posthacking | peet's coffee
+ jeff | | peet's coffee
(6 rows)
--
@@ -538,7 +538,7 @@ SELECT name(equipment(p.hobbies)), p.name, name(p.hobbies) FROM ONLY person p;
name | name | name
---------------+-------+-------------
advil | mike | posthacking
- peet's coffee | mike | posthacking
+ peet's coffee | mike |
hightops | joe | basketball
hightops | sally | basketball
(4 rows)
@@ -547,18 +547,18 @@ SELECT (p.hobbies).equipment.name, p.name, name(p.hobbies) FROM person* p;
name | name | name
---------------+-------+-------------
advil | mike | posthacking
- peet's coffee | mike | posthacking
+ peet's coffee | mike |
hightops | joe | basketball
hightops | sally | basketball
advil | jeff | posthacking
- peet's coffee | jeff | posthacking
+ peet's coffee | jeff |
(6 rows)
SELECT (p.hobbies).equipment.name, name(p.hobbies), p.name FROM ONLY person p;
name | name | name
---------------+-------------+-------
advil | posthacking | mike
- peet's coffee | posthacking | mike
+ peet's coffee | | mike
hightops | basketball | joe
hightops | basketball | sally
(4 rows)
@@ -567,11 +567,11 @@ SELECT name(equipment(p.hobbies)), name(p.hobbies), p.name FROM person* p;
name | name | name
---------------+-------------+-------
advil | posthacking | mike
- peet's coffee | posthacking | mike
+ peet's coffee | | mike
hightops | basketball | joe
hightops | basketball | sally
advil | posthacking | jeff
- peet's coffee | posthacking | jeff
+ peet's coffee | | jeff
(6 rows)
SELECT user_relns() AS user_relns
--
2.9.3
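
For readers skimming the patch, the transformation unsrfify() performs can be illustrated with a hand-written equivalent. This is only a sketch: the rewritten form is built internally by the planner, not emitted as SQL, and the ordering of the join output is only preserved because of the LATERAL-dependency hack noted in the code above; plain SQL would not guarantee it. Table and column names (tenk1, unique1, unique2) are just the regression-test relations used elsewhere in this thread:

```sql
-- Before: SRF in the targetlist
SELECT unique1, generate_series(1, 3) AS g
FROM tenk1
ORDER BY unique2
LIMIT 7;

-- After (roughly): the original query is pushed into a subquery, the SRF
-- moves to a lateral ROWS FROM() item, the sort happens inside the
-- subquery (it doesn't reference SRF output), and LIMIT applies after
-- SRF expansion.
SELECT srf.unique1, g.g
FROM (SELECT unique1 FROM tenk1 ORDER BY unique2) AS srf,
     LATERAL ROWS FROM (generate_series(1, 3)) AS g(g)
LIMIT 7;
```

This matches the new EXPLAIN output in the limit.out hunks above: a Subquery Scan on "srf" nestloop-joined against a Function Scan on generate_series.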
Attachment: 0006-Remove-unused-code-related-to-targetlist-SRFs.patch (text/x-patch)
From d38013a7e9d49dd7b1e9c6c2b22c9906945e7010 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Thu, 25 Aug 2016 11:24:49 -0700
Subject: [PATCH 6/6] Remove unused code related to targetlist SRFs.
---
src/backend/catalog/index.c | 3 +-
src/backend/commands/copy.c | 2 +-
src/backend/commands/prepare.c | 3 +-
src/backend/commands/tablecmds.c | 3 +-
src/backend/commands/typecmds.c | 2 +-
src/backend/executor/execAmi.c | 42 +-
src/backend/executor/execQual.c | 1174 ++++-------------------------
src/backend/executor/execScan.c | 30 +-
src/backend/executor/execUtils.c | 6 -
src/backend/executor/nodeAgg.c | 52 +-
src/backend/executor/nodeBitmapHeapscan.c | 2 -
src/backend/executor/nodeCtescan.c | 2 -
src/backend/executor/nodeCustom.c | 2 -
src/backend/executor/nodeForeignscan.c | 2 -
src/backend/executor/nodeFunctionscan.c | 16 +-
src/backend/executor/nodeGather.c | 25 +-
src/backend/executor/nodeGroup.c | 42 +-
src/backend/executor/nodeHash.c | 2 +-
src/backend/executor/nodeHashjoin.c | 52 +-
src/backend/executor/nodeIndexonlyscan.c | 2 -
src/backend/executor/nodeIndexscan.c | 11 +-
src/backend/executor/nodeLimit.c | 19 +-
src/backend/executor/nodeMergejoin.c | 59 +-
src/backend/executor/nodeModifyTable.c | 4 +-
src/backend/executor/nodeNestloop.c | 41 +-
src/backend/executor/nodeResult.c | 33 +-
src/backend/executor/nodeSamplescan.c | 8 +-
src/backend/executor/nodeSeqscan.c | 2 -
src/backend/executor/nodeSubplan.c | 31 +-
src/backend/executor/nodeSubqueryscan.c | 2 -
src/backend/executor/nodeTidscan.c | 8 +-
src/backend/executor/nodeValuesscan.c | 5 +-
src/backend/executor/nodeWindowAgg.c | 58 +-
src/backend/executor/nodeWorktablescan.c | 2 -
src/backend/optimizer/plan/planner.c | 119 +--
src/backend/optimizer/util/clauses.c | 46 +-
src/backend/optimizer/util/predtest.c | 2 +-
src/backend/utils/adt/domains.c | 2 +-
src/backend/utils/adt/xml.c | 4 +-
src/include/executor/executor.h | 13 +-
src/include/nodes/execnodes.h | 16 +-
src/include/optimizer/clauses.h | 1 -
src/pl/plpgsql/src/pl_exec.c | 5 +-
43 files changed, 284 insertions(+), 1671 deletions(-)
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index b0b43cf..fd82855 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1788,8 +1788,7 @@ FormIndexDatum(IndexInfo *indexInfo,
elog(ERROR, "wrong number of index expressions");
iDatum = ExecEvalExprSwitchContext((ExprState *) lfirst(indexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
indexpr_item = lnext(indexpr_item);
}
values[i] = iDatum;
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index f45b330..28466ac 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -3172,7 +3172,7 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
Assert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);
values[defmap[i]] = ExecEvalExpr(defexprs[i], econtext,
- &nulls[defmap[i]], NULL);
+ &nulls[defmap[i]]);
}
return true;
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index cec37ce..451c8d5 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -404,8 +404,7 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
prm->pflags = PARAM_FLAG_CONST;
prm->value = ExecEvalExprSwitchContext(n,
GetPerTupleExprContext(estate),
- &prm->isnull,
- NULL);
+ &prm->isnull);
i++;
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 86e9814..92e468d 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4151,8 +4151,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
values[ex->attnum - 1] = ExecEvalExpr(ex->exprstate,
econtext,
- &isnull[ex->attnum - 1],
- NULL);
+ &isnull[ex->attnum - 1]);
}
/*
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index ce04211..755af68 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2741,7 +2741,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin)
conResult = ExecEvalExprSwitchContext(exprstate,
econtext,
- &isNull, NULL);
+ &isNull);
if (!isNull && !DatumGetBool(conResult))
{
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index ea2f09e..959d27a 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -58,7 +58,6 @@
#include "utils/syscache.h"
-static bool TargetListSupportsBackwardScan(List *targetlist);
static bool IndexSupportsBackwardScan(Oid indexid);
@@ -119,7 +118,7 @@ ExecReScan(PlanState *node)
UpdateChangedParamSet(node->righttree, node->chgParam);
}
- /* Shut down any SRFs in the plan node's targetlist */
+ /* Call expression callbacks */
if (node->ps_ExprContext)
ReScanExprContext(node->ps_ExprContext);
@@ -455,8 +454,7 @@ ExecSupportsBackwardScan(Plan *node)
{
case T_Result:
if (outerPlan(node) != NULL)
- return ExecSupportsBackwardScan(outerPlan(node)) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(outerPlan(node));
else
return false;
@@ -473,12 +471,6 @@ ExecSupportsBackwardScan(Plan *node)
return true;
}
- case T_SeqScan:
- case T_TidScan:
- case T_ValuesScan:
- case T_CteScan:
- return TargetListSupportsBackwardScan(node->targetlist);
-
case T_SampleScan:
/* Simplify life for tablesample methods by disallowing this */
return false;
@@ -487,35 +479,33 @@ ExecSupportsBackwardScan(Plan *node)
return false;
case T_IndexScan:
- return IndexSupportsBackwardScan(((IndexScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexScan *) node)->indexid);
case T_IndexOnlyScan:
- return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid);
case T_SubqueryScan:
- return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan);
case T_CustomScan:
{
uint32 flags = ((CustomScan *) node)->flags;
- if ((flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN) &&
- TargetListSupportsBackwardScan(node->targetlist))
+ if (flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN)
return true;
}
return false;
+ case T_SeqScan:
+ case T_TidScan:
+ case T_ValuesScan:
+ case T_CteScan:
case T_Material:
case T_Sort:
- /* these don't evaluate tlist */
return true;
case T_LockRows:
case T_Limit:
- /* these don't evaluate tlist */
return ExecSupportsBackwardScan(outerPlan(node));
default:
@@ -524,18 +514,6 @@ ExecSupportsBackwardScan(Plan *node)
}
/*
- * If the tlist contains set-returning functions, we can't support backward
- * scan, because the TupFromTlist code is direction-ignorant.
- */
-static bool
-TargetListSupportsBackwardScan(List *targetlist)
-{
- if (expression_returns_set((Node *) targetlist))
- return false;
- return true;
-}
-
-/*
* An IndexScan or IndexOnlyScan node supports backward scan only if the
* index's AM does.
*/
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index d9e2797..1bdc0ac 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -64,127 +64,115 @@
/* static function decls */
static Datum ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static bool isAssignmentIndirectionExpr(ExprState *exprstate);
static Datum ExecEvalAggref(AggrefExprState *aggref,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWindowFunc(WindowFuncExprState *wfunc,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
-static void ShutdownFuncExpr(Datum arg);
-static TupleDesc get_cached_rowtype(Oid type_id, int32 typmod,
- TupleDesc *cache_field, ExprContext *econtext);
+ bool *isNull);
static void ShutdownTupleDescRef(Datum arg);
-static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
- ExprContext *econtext,
- Tuplestorestate *resultStore,
- TupleDesc resultDesc);
-static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
-static Datum ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFunc(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOper(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalDistinct(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArray(ArrayExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoalesce(CoalesceExprState *coalesceExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalMinMax(MinMaxExprState *minmaxExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomain(CoerceToDomainState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
/* ----------------------------------------------------------------
@@ -195,8 +183,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* Each of the following routines having the signature
* Datum ExecEvalFoo(ExprState *expression,
* ExprContext *econtext,
- * bool *isNull,
- * ExprDoneCond *isDone);
+ * bool *isNull);
* is responsible for evaluating one type or subtype of ExprState node.
* They are normally called via the ExecEvalExpr macro, which makes use of
* the function pointer set up when the ExprState node was built by
@@ -220,22 +207,6 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* return value: Datum value of result
* *isNull: set to TRUE if result is NULL (actual return value is
* meaningless if so); set to FALSE if non-null result
- * *isDone: set to indicator of set-result status
- *
- * A caller that can only accept a singleton (non-set) result should pass
- * NULL for isDone; if the expression computes a set result then an error
- * will be reported via ereport. If the caller does pass an isDone pointer
- * then *isDone is set to one of these three states:
- * ExprSingleResult singleton result (not a set)
- * ExprMultipleResult return value is one element of a set
- * ExprEndResult there are no more elements in the set
- * When ExprMultipleResult is returned, the caller should invoke
- * ExecEvalExpr() repeatedly until ExprEndResult is returned. ExprEndResult
- * is returned after the last real set element. For convenience isNull will
- * always be set TRUE when ExprEndResult is returned, but this should not be
- * taken as indicating a NULL element of the set. Note that these return
- * conventions allow us to distinguish among a singleton NULL, a NULL element
- * of a set, and an empty set.
*
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
@@ -260,8 +231,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
static Datum
ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayRef *arrayRef = (ArrayRef *) astate->xprstate.expr;
Datum array_source;
@@ -278,8 +248,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
array_source = ExecEvalExpr(astate->refexpr,
econtext,
- isNull,
- isDone);
+ isNull);
/*
* If refexpr yields NULL, and it's a fetch, then result is NULL. In the
@@ -287,8 +256,6 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
if (*isNull)
{
- if (isDone && *isDone == ExprEndResult)
- return (Datum) NULL; /* end of set result */
if (!isAssignment)
return (Datum) NULL;
}
@@ -314,8 +281,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -350,8 +316,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -438,8 +403,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
sourceData = ExecEvalExpr(astate->refassgnexpr,
econtext,
- &eisnull,
- NULL);
+ &eisnull);
econtext->caseValue_datum = save_datum;
econtext->caseValue_isNull = save_isNull;
@@ -542,11 +506,8 @@ isAssignmentIndirectionExpr(ExprState *exprstate)
*/
static Datum
ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no aggregates in this expression context");
@@ -563,11 +524,8 @@ ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
*/
static Datum
ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no window functions in this expression context");
@@ -588,15 +546,12 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -677,15 +632,12 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -725,7 +677,7 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -733,9 +685,6 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
MemoryContext oldcontext;
bool needslow = false;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* This was checked by ExecInitExpr */
Assert(variable->varattno == InvalidAttrNumber);
@@ -941,7 +890,7 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
/* Fetch the value */
return (*wrvstate->xprstate.evalfunc) ((ExprState *) wrvstate, econtext,
- isNull, isDone);
+ isNull);
}
/* ----------------------------------------------------------------
@@ -952,14 +901,12 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
HeapTupleHeader dtuple;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1008,7 +955,7 @@ ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -1018,8 +965,6 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
HeapTupleHeader dtuple;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1097,13 +1042,10 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Const *con = (Const *) exprstate->expr;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = con->constisnull;
return con->constvalue;
}
@@ -1116,15 +1058,12 @@ ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamExecData *prm;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXEC params (internal executor parameters) are stored in the
* ecxt_param_exec_vals array, and can be accessed by array index.
@@ -1149,15 +1088,12 @@ ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamListInfo paramInfo = econtext->ecxt_param_list_info;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXTERN parameters must be sought in ecxt_param_list_info.
*/
@@ -1323,7 +1259,7 @@ GetAttributeByName(HeapTupleHeader tuple, const char *attname, bool *isNull)
*/
void
ExecInitFcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets)
+ MemoryContext fcacheCxt)
{
AclResult aclresult;
@@ -1356,88 +1292,9 @@ ExecInitFcache(Oid foid, Oid input_collation, FuncExprState *fcache,
list_length(fcache->args),
input_collation, NULL, NULL);
- /* If function returns set, prepare expected tuple descriptor */
- if (fcache->func.fn_retset && needDescForSets)
- {
- TypeFuncClass functypclass;
- Oid funcrettype;
- TupleDesc tupdesc;
- MemoryContext oldcontext;
-
- functypclass = get_expr_result_type(fcache->func.fn_expr,
- &funcrettype,
- &tupdesc);
-
- /* Must save tupdesc in fcache's context */
- oldcontext = MemoryContextSwitchTo(fcacheCxt);
-
- if (functypclass == TYPEFUNC_COMPOSITE)
- {
- /* Composite data type, e.g. a table's row type */
- Assert(tupdesc);
- /* Must copy it out of typcache for safety */
- fcache->funcResultDesc = CreateTupleDescCopy(tupdesc);
- fcache->funcReturnsTuple = true;
- }
- else if (functypclass == TYPEFUNC_SCALAR)
- {
- /* Base data type, i.e. scalar */
- tupdesc = CreateTemplateTupleDesc(1, false);
- TupleDescInitEntry(tupdesc,
- (AttrNumber) 1,
- NULL,
- funcrettype,
- -1,
- 0);
- fcache->funcResultDesc = tupdesc;
- fcache->funcReturnsTuple = false;
- }
- else if (functypclass == TYPEFUNC_RECORD)
- {
- /* This will work if function doesn't need an expectedDesc */
- fcache->funcResultDesc = NULL;
- fcache->funcReturnsTuple = true;
- }
- else
- {
- /* Else, we will fail if function needs an expectedDesc */
- fcache->funcResultDesc = NULL;
- }
-
- MemoryContextSwitchTo(oldcontext);
- }
- else
- fcache->funcResultDesc = NULL;
-
/* Initialize additional state */
fcache->funcResultStore = NULL;
fcache->funcResultSlot = NULL;
- fcache->setArgsValid = false;
- fcache->shutdown_reg = false;
-}
-
-/*
- * callback function in case a FuncExpr returning a set needs to be shut down
- * before it has been run to completion
- */
-static void
-ShutdownFuncExpr(Datum arg)
-{
- FuncExprState *fcache = (FuncExprState *) DatumGetPointer(arg);
-
- /* If we have a slot, make sure it's let go of any tuplestore pointer */
- if (fcache->funcResultSlot)
- ExecClearTuple(fcache->funcResultSlot);
-
- /* Release any open tuplestore */
- if (fcache->funcResultStore)
- tuplestore_end(fcache->funcResultStore);
- fcache->funcResultStore = NULL;
-
- /* Clear any active set-argument state */
- fcache->setArgsValid = false;
-
- /* execUtils will deregister the callback... */
fcache->shutdown_reg = false;
}
@@ -1499,500 +1356,38 @@ ShutdownTupleDescRef(Datum arg)
/*
* Evaluate arguments for a function.
*/
-ExprDoneCond
+void
ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList,
ExprContext *econtext)
{
- ExprDoneCond argIsDone;
int i;
ListCell *arg;
- argIsDone = ExprSingleResult; /* default assumption */
-
i = 0;
foreach(arg, argList)
{
ExprState *argstate = (ExprState *) lfirst(arg);
- ExprDoneCond thisArgIsDone;
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- &thisArgIsDone);
-
- if (thisArgIsDone != ExprSingleResult)
- {
- /*
- * We allow only one argument to have a set value; we'd need much
- * more complexity to keep track of multiple set arguments (cf.
- * ExecTargetList) and it doesn't seem worth it.
- */
- if (argIsDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("functions and operators can take at most one set argument")));
- argIsDone = thisArgIsDone;
- }
+ &fcinfo->argnull[i]);
i++;
}
Assert(i == fcinfo->nargs);
-
- return argIsDone;
-}
-
-/*
- * ExecPrepareTuplestoreResult
- *
- * Subroutine for ExecMakeFunctionResult: prepare to extract rows from a
- * tuplestore function result. We must set up a funcResultSlot (unless
- * already done in a previous call cycle) and verify that the function
- * returned the expected tuple descriptor.
- */
-static void
-ExecPrepareTuplestoreResult(FuncExprState *fcache,
- ExprContext *econtext,
- Tuplestorestate *resultStore,
- TupleDesc resultDesc)
-{
- fcache->funcResultStore = resultStore;
-
- if (fcache->funcResultSlot == NULL)
- {
- /* Create a slot so we can read data out of the tuplestore */
- TupleDesc slotDesc;
- MemoryContext oldcontext;
-
- oldcontext = MemoryContextSwitchTo(fcache->func.fn_mcxt);
-
- /*
- * If we were not able to determine the result rowtype from context,
- * and the function didn't return a tupdesc, we have to fail.
- */
- if (fcache->funcResultDesc)
- slotDesc = fcache->funcResultDesc;
- else if (resultDesc)
- {
- /* don't assume resultDesc is long-lived */
- slotDesc = CreateTupleDescCopy(resultDesc);
- }
- else
- {
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("function returning setof record called in "
- "context that cannot accept type record")));
- slotDesc = NULL; /* keep compiler quiet */
- }
-
- fcache->funcResultSlot = MakeSingleTupleTableSlot(slotDesc);
- MemoryContextSwitchTo(oldcontext);
- }
-
- /*
- * If function provided a tupdesc, cross-check it. We only really need to
- * do this for functions returning RECORD, but might as well do it always.
- */
- if (resultDesc)
- {
- if (fcache->funcResultDesc)
- tupledesc_match(fcache->funcResultDesc, resultDesc);
-
- /*
- * If it is a dynamically-allocated TupleDesc, free it: it is
- * typically allocated in a per-query context, so we must avoid
- * leaking it across multiple usages.
- */
- if (resultDesc->tdrefcount == -1)
- FreeTupleDesc(resultDesc);
- }
-
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
-}
-
-/*
- * Check that function result tuple type (src_tupdesc) matches or can
- * be considered to match what the query expects (dst_tupdesc). If
- * they don't match, ereport.
- *
- * We really only care about number of attributes and data type.
- * Also, we can ignore type mismatch on columns that are dropped in the
- * destination type, so long as the physical storage matches. This is
- * helpful in some cases involving out-of-date cached plans.
- */
-static void
-tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc)
-{
- int i;
-
- if (dst_tupdesc->natts != src_tupdesc->natts)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("function return row and query-specified return row do not match"),
- errdetail_plural("Returned row contains %d attribute, but query expects %d.",
- "Returned row contains %d attributes, but query expects %d.",
- src_tupdesc->natts,
- src_tupdesc->natts, dst_tupdesc->natts)));
-
- for (i = 0; i < dst_tupdesc->natts; i++)
- {
- Form_pg_attribute dattr = dst_tupdesc->attrs[i];
- Form_pg_attribute sattr = src_tupdesc->attrs[i];
-
- if (IsBinaryCoercible(sattr->atttypid, dattr->atttypid))
- continue; /* no worries */
- if (!dattr->attisdropped)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("function return row and query-specified return row do not match"),
- errdetail("Returned type %s at ordinal position %d, but query expects %s.",
- format_type_be(sattr->atttypid),
- i + 1,
- format_type_be(dattr->atttypid))));
-
- if (dattr->attlen != sattr->attlen ||
- dattr->attalign != sattr->attalign)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("function return row and query-specified return row do not match"),
- errdetail("Physical storage mismatch on dropped attribute at ordinal position %d.",
- i + 1)));
- }
-}
-
-/*
- * ExecMakeFunctionResult
- *
- * Evaluate the arguments to a function and then the function itself.
- * init_fcache is presumed already run on the FuncExprState.
- *
- * This function handles the most general case, wherein the function or
- * one of its arguments can return a set.
- */
-static Datum
-ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
-{
- List *arguments;
- Datum result;
- FunctionCallInfo fcinfo;
- PgStat_FunctionCallUsage fcusage;
- ReturnSetInfo rsinfo; /* for functions returning sets */
- ExprDoneCond argDone;
- bool hasSetArg;
- int i;
-
-restart:
-
- /* Guard against stack overflow due to overly complex expressions */
- check_stack_depth();
-
- /*
- * If a previous call of the function returned a set result in the form of
- * a tuplestore, continue reading rows from the tuplestore until it's
- * empty.
- */
- if (fcache->funcResultStore)
- {
- Assert(isDone); /* it was provided before ... */
- if (tuplestore_gettupleslot(fcache->funcResultStore, true, false,
- fcache->funcResultSlot))
- {
- *isDone = ExprMultipleResult;
- if (fcache->funcReturnsTuple)
- {
- /* We must return the whole tuple as a Datum. */
- *isNull = false;
- return ExecFetchSlotTupleDatum(fcache->funcResultSlot);
- }
- else
- {
- /* Extract the first column and return it as a scalar. */
- return slot_getattr(fcache->funcResultSlot, 1, isNull);
- }
- }
- /* Exhausted the tuplestore, so clean up */
- tuplestore_end(fcache->funcResultStore);
- fcache->funcResultStore = NULL;
- /* We are done unless there was a set-valued argument */
- if (!fcache->setHasSetArg)
- {
- *isDone = ExprEndResult;
- *isNull = true;
- return (Datum) 0;
- }
- /* If there was, continue evaluating the argument values */
- Assert(!fcache->setArgsValid);
- }
-
- /*
- * arguments is a list of expressions to evaluate before passing to the
- * function manager. We skip the evaluation if it was already done in the
- * previous call (ie, we are continuing the evaluation of a set-valued
- * function). Otherwise, collect the current argument values into fcinfo.
- */
- fcinfo = &fcache->fcinfo_data;
- arguments = fcache->args;
- if (!fcache->setArgsValid)
- {
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
- if (argDone == ExprEndResult)
- {
- /* input is an empty set, so return an empty set. */
- *isNull = true;
- if (isDone)
- *isDone = ExprEndResult;
- else
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- return (Datum) 0;
- }
- hasSetArg = (argDone != ExprSingleResult);
- }
- else
- {
- /* Re-use callinfo from previous evaluation */
- hasSetArg = fcache->setHasSetArg;
- /* Reset flag (we may set it again below) */
- fcache->setArgsValid = false;
- }
-
- /*
- * Now call the function, passing the evaluated parameter values.
- */
- if (fcache->func.fn_retset || hasSetArg)
- {
- /*
- * We need to return a set result. Complain if caller not ready to
- * accept one.
- */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
- /*
- * Prepare a resultinfo node for communication. If the function
- * doesn't itself return set, we don't pass the resultinfo to the
- * function, but we need to fill it in anyway for internal use.
- */
- if (fcache->func.fn_retset)
- fcinfo->resultinfo = (Node *) &rsinfo;
- rsinfo.type = T_ReturnSetInfo;
- rsinfo.econtext = econtext;
- rsinfo.expectedDesc = fcache->funcResultDesc;
- rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
- /* note we do not set SFRM_Materialize_Random or _Preferred */
- rsinfo.returnMode = SFRM_ValuePerCall;
- /* isDone is filled below */
- rsinfo.setResult = NULL;
- rsinfo.setDesc = NULL;
-
- /*
- * This loop handles the situation where we have both a set argument
- * and a set-valued function. Once we have exhausted the function's
- * value(s) for a particular argument value, we have to get the next
- * argument value and start the function over again. We might have to
- * do it more than once, if the function produces an empty result set
- * for a particular input value.
- */
- for (;;)
- {
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function (at least for this set of args).
- */
- bool callit = true;
-
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- callit = false;
- break;
- }
- }
- }
-
- if (callit)
- {
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- rsinfo.isDone = ExprSingleResult;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
- *isDone = rsinfo.isDone;
-
- pgstat_end_function_usage(&fcusage,
- rsinfo.isDone != ExprMultipleResult);
- }
- else if (fcache->func.fn_retset)
- {
- /* for a strict SRF, result for NULL is an empty set */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprEndResult;
- }
- else
- {
- /* for a strict non-SRF, result for NULL is a NULL */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprSingleResult;
- }
-
- /* Which protocol does function want to use? */
- if (rsinfo.returnMode == SFRM_ValuePerCall)
- {
- if (*isDone != ExprEndResult)
- {
- /*
- * Got a result from current argument. If function itself
- * returns set, save the current argument values to re-use
- * on the next call.
- */
- if (fcache->func.fn_retset &&
- *isDone == ExprMultipleResult)
- {
- fcache->setHasSetArg = hasSetArg;
- fcache->setArgsValid = true;
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
- }
-
- /*
- * Make sure we say we are returning a set, even if the
- * function itself doesn't return sets.
- */
- if (hasSetArg)
- *isDone = ExprMultipleResult;
- break;
- }
- }
- else if (rsinfo.returnMode == SFRM_Materialize)
- {
- /* check we're on the same page as the function author */
- if (rsinfo.isDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("table-function protocol for materialize mode was not followed")));
- if (rsinfo.setResult != NULL)
- {
- /* prepare to return values from the tuplestore */
- ExecPrepareTuplestoreResult(fcache, econtext,
- rsinfo.setResult,
- rsinfo.setDesc);
- /* remember whether we had set arguments */
- fcache->setHasSetArg = hasSetArg;
- /* loop back to top to start returning from tuplestore */
- goto restart;
- }
- /* if setResult was left null, treat it as empty set */
- *isDone = ExprEndResult;
- *isNull = true;
- result = (Datum) 0;
- }
- else
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("unrecognized table-function returnMode: %d",
- (int) rsinfo.returnMode)));
-
- /* Else, done with this argument */
- if (!hasSetArg)
- break; /* input not a set, so done */
-
- /* Re-eval args to get the next element of the input set */
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
-
- if (argDone != ExprMultipleResult)
- {
- /* End of argument set, so we're done. */
- *isNull = true;
- *isDone = ExprEndResult;
- result = (Datum) 0;
- break;
- }
-
- /*
- * If we reach here, loop around to run the function on the new
- * argument.
- */
- }
- }
- else
- {
- /*
- * Non-set case: much easier.
- *
- * In common cases, this code path is unreachable because we'd have
- * selected ExecMakeFunctionResultNoSets instead. However, it's
- * possible to get here if an argument sometimes produces set results
- * and sometimes scalar results. For example, a CASE expression might
- * call a set-returning function in only some of its arms.
- */
- if (isDone)
- *isDone = ExprSingleResult;
-
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function and return NULL.
- */
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- *isNull = true;
- return (Datum) 0;
- }
- }
- }
-
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
-
- pgstat_end_function_usage(&fcusage, true);
- }
-
- return result;
}
/*
* ExecMakeFunctionResultNoSets
*
- * Simplified version of ExecMakeFunctionResult that can only handle
- * non-set cases. Hand-tuned for speed.
+ * Evaluate a non-set-returning function expression, assuming the per-query
+ * initialization has already been performed by ExecEvalFunc/ExecEvalOper.
+ * Hand-tuned for speed.
*/
static Datum
ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
Datum result;
@@ -2003,9 +1398,6 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
/* Guard against stack overflow due to overly complex expressions */
check_stack_depth();
- if (isDone)
- *isDone = ExprSingleResult;
-
/* inlined, simplified version of ExecEvalFuncArgs */
fcinfo = &fcache->fcinfo_data;
i = 0;
@@ -2015,8 +1407,7 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -2064,15 +1455,14 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
static Datum
ExecEvalFunc(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
ExecInitFcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
if (fcache->func.fn_retset)
{
@@ -2081,22 +1471,9 @@ ExecEvalFunc(FuncExprState *fcache,
errmsg("set-valued function called in context that cannot accept a set")));
}
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) func->args))
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
- }
+ /* Change the evalfunc pointer to skip the above initialization on later calls. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2106,32 +1483,25 @@ ExecEvalFunc(FuncExprState *fcache,
static Datum
ExecEvalOper(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
OpExpr *op = (OpExpr *) fcache->xprstate.expr;
/* Initialize function lookup info */
ExecInitFcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) op->args))
+ if (fcache->func.fn_retset)
{
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
}
+
+ /* Change the evalfunc pointer to skip the above initialization. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2148,17 +1518,13 @@ ExecEvalOper(FuncExprState *fcache,
static Datum
ExecEvalDistinct(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set non-null as default */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2168,7 +1534,7 @@ ExecEvalDistinct(FuncExprState *fcache,
DistinctExpr *op = (DistinctExpr *) fcache->xprstate.expr;
ExecInitFcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!fcache->func.fn_retset);
}
@@ -2176,11 +1542,7 @@ ExecEvalDistinct(FuncExprState *fcache,
* Evaluate arguments
*/
fcinfo = &fcache->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("IS DISTINCT FROM does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
Assert(fcinfo->nargs == 2);
if (fcinfo->argnull[0] && fcinfo->argnull[1])
@@ -2216,7 +1578,7 @@ ExecEvalDistinct(FuncExprState *fcache,
static Datum
ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ScalarArrayOpExpr *opexpr = (ScalarArrayOpExpr *) sstate->fxprstate.xprstate.expr;
bool useOr = opexpr->useOr;
@@ -2225,7 +1587,6 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
Datum result;
bool resultnull;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
int i;
int16 typlen;
bool typbyval;
@@ -2234,10 +1595,8 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
bits8 *bitmap;
int bitmask;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set non-null as default */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2245,7 +1604,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
if (sstate->fxprstate.func.fn_oid == InvalidOid)
{
ExecInitFcache(opexpr->opfuncid, opexpr->inputcollid, &sstate->fxprstate,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!sstate->fxprstate.func.fn_retset);
}
@@ -2253,11 +1612,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
* Evaluate arguments
*/
fcinfo = &sstate->fxprstate.fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("op ANY/ALL (array) does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
Assert(fcinfo->nargs == 2);
/*
@@ -2403,15 +1758,12 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
*/
static Datum
ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ExprState *clause = linitial(notclause->args);
Datum expr_value;
- if (isDone)
- *isDone = ExprSingleResult;
-
- expr_value = ExecEvalExpr(clause, econtext, isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, isNull);
/*
* if the expression evaluates to null, then we just cascade the null back
@@ -2433,15 +1785,12 @@ ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
*/
static Datum
ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = orExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2462,7 +1811,7 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null true result, then return it.
@@ -2484,15 +1833,12 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
*/
static Datum
ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = andExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2509,7 +1855,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null false result, then return it.
@@ -2535,7 +1881,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
static Datum
ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ConvertRowtypeExpr *convert = (ConvertRowtypeExpr *) cstate->xprstate.expr;
HeapTuple result;
@@ -2543,7 +1889,7 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
HeapTupleHeader tuple;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull);
- /* this test covers the isDone exception too: */
+ /* if the input tuple Datum is NULL, so is the result */
if (*isNull)
@@ -2619,16 +1965,13 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
*/
static Datum
ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = caseExpr->args;
ListCell *clause;
Datum save_datum;
bool save_isNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* If there's a test expression, we have to evaluate it and save the value
* where the CaseTestExpr placeholders can find it. We must save and
@@ -2652,8 +1995,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
econtext->caseValue_datum = ExecEvalExpr(caseExpr->arg,
econtext,
- &arg_isNull,
- NULL);
+ &arg_isNull);
econtext->caseValue_isNull = arg_isNull;
}
@@ -2670,8 +2012,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
clause_value = ExecEvalExpr(wclause->expr,
econtext,
- &clause_isNull,
- NULL);
+ &clause_isNull);
/*
* if we have a true test, then we return the result, since the case
@@ -2684,8 +2025,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
econtext->caseValue_isNull = save_isNull;
return ExecEvalExpr(wclause->result,
econtext,
- isNull,
- isDone);
+ isNull);
}
}
@@ -2696,8 +2036,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
{
return ExecEvalExpr(caseExpr->defresult,
econtext,
- isNull,
- isDone);
+ isNull);
}
*isNull = true;
@@ -2712,10 +2051,8 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
static Datum
ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->caseValue_isNull;
return econtext->caseValue_datum;
}
@@ -2732,17 +2069,13 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
static Datum
ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
int result = 0;
int attnum = 0;
Bitmapset *grouped_cols = gstate->aggstate->grouped_cols;
ListCell *lc;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = false;
foreach(lc, (gstate->clauses))
@@ -2764,7 +2097,7 @@ ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
*/
static Datum
ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayExpr *arrayExpr = (ArrayExpr *) astate->xprstate.expr;
ArrayType *result;
@@ -2774,10 +2107,8 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
int dims[MAXDIM];
int lbs[MAXDIM];
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
if (!arrayExpr->multidims)
{
@@ -2802,7 +2133,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(element);
- dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i], NULL);
+ dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i]);
i++;
}
@@ -2852,7 +2183,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
ArrayType *array;
int this_ndims;
- arraydatum = ExecEvalExpr(e, econtext, &eisnull, NULL);
+ arraydatum = ExecEvalExpr(e, econtext, &eisnull);
/* temporarily ignore null subarrays */
if (eisnull)
{
@@ -2991,7 +2322,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
static Datum
ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
HeapTuple tuple;
Datum *values;
@@ -3000,10 +2331,8 @@ ExecEvalRow(RowExprState *rstate,
ListCell *arg;
int i;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default values for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Allocate workspace */
natts = rstate->tupdesc->natts;
@@ -3019,7 +2348,7 @@ ExecEvalRow(RowExprState *rstate,
{
ExprState *e = (ExprState *) lfirst(arg);
- values[i] = ExecEvalExpr(e, econtext, &isnull[i], NULL);
+ values[i] = ExecEvalExpr(e, econtext, &isnull[i]);
i++;
}
@@ -3038,7 +2367,7 @@ ExecEvalRow(RowExprState *rstate,
static Datum
ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
bool result;
RowCompareType rctype = ((RowCompareExpr *) rstate->xprstate.expr)->rctype;
@@ -3047,8 +2376,6 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
ListCell *r;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
i = 0;
@@ -3062,9 +2389,9 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
rstate->collations[i],
NULL, NULL);
locfcinfo.arg[0] = ExecEvalExpr(le, econtext,
- &locfcinfo.argnull[0], NULL);
+ &locfcinfo.argnull[0]);
locfcinfo.arg[1] = ExecEvalExpr(re, econtext,
- &locfcinfo.argnull[1], NULL);
+ &locfcinfo.argnull[1]);
if (rstate->funcs[i].fn_strict &&
(locfcinfo.argnull[0] || locfcinfo.argnull[1]))
return (Datum) 0; /* force NULL result */
@@ -3108,20 +2435,17 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
*/
static Datum
ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Simply loop through until something NOT NULL is found */
foreach(arg, coalesceExpr->args)
{
ExprState *e = (ExprState *) lfirst(arg);
Datum value;
- value = ExecEvalExpr(e, econtext, isNull, NULL);
+ value = ExecEvalExpr(e, econtext, isNull);
if (!*isNull)
return value;
}
@@ -3137,7 +2461,7 @@ ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
*/
static Datum
ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
MinMaxExpr *minmax = (MinMaxExpr *) minmaxExpr->xprstate.expr;
@@ -3146,8 +2470,6 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
FunctionCallInfoData locfcinfo;
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
InitFunctionCallInfoData(locfcinfo, &minmaxExpr->cfunc, 2,
@@ -3162,7 +2484,7 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
bool valueIsNull;
int32 cmpresult;
- value = ExecEvalExpr(e, econtext, &valueIsNull, NULL);
+ value = ExecEvalExpr(e, econtext, &valueIsNull);
if (valueIsNull)
continue; /* ignore NULL inputs */
@@ -3198,14 +2520,12 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
static Datum
ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
SQLValueFunction *svf = (SQLValueFunction *) svfExpr->expr;
FunctionCallInfoData fcinfo;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/*
@@ -3266,7 +2586,7 @@ ExecEvalSQLValueFunction(ExprState *svfExpr,
*/
static Datum
ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
XmlExpr *xexpr = (XmlExpr *) xmlExpr->xprstate.expr;
Datum value;
@@ -3274,8 +2594,6 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ListCell *arg;
ListCell *narg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
switch (xexpr->op)
@@ -3288,7 +2606,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(arg);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
values = lappend(values, DatumGetPointer(value));
}
@@ -3313,7 +2631,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ExprState *e = (ExprState *) lfirst(arg);
char *argname = strVal(lfirst(narg));
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
{
appendStringInfo(&buf, "<%s>%s</%s>",
@@ -3356,13 +2674,13 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 2);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetTextP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull) /* probably can't happen */
return (Datum) 0;
preserve_whitespace = DatumGetBool(value);
@@ -3386,7 +2704,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
if (xmlExpr->args)
{
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
arg = NULL;
else
@@ -3413,20 +2731,20 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 3);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetXmlP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
version = NULL;
else
version = DatumGetTextP(value);
e = (ExprState *) lthird(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
standalone = DatumGetInt32(value);
*isNull = false;
@@ -3445,7 +2763,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
@@ -3463,7 +2781,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
else
@@ -3490,14 +2808,10 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
static Datum
ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
-
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -3507,7 +2821,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
NullIfExpr *op = (NullIfExpr *) nullIfExpr->xprstate.expr;
ExecInitFcache(op->opfuncid, op->inputcollid, nullIfExpr,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory);
Assert(!nullIfExpr->func.fn_retset);
}
@@ -3515,11 +2829,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
* Evaluate arguments
*/
fcinfo = &nullIfExpr->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("NULLIF does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
Assert(fcinfo->nargs == 2);
/* if either argument is NULL they can't be equal */
@@ -3549,16 +2859,12 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
static Datum
ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
NullTest *ntest = (NullTest *) nstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(nstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(nstate->arg, econtext, isNull);
if (ntest->argisrow && !(*isNull))
{
@@ -3658,16 +2964,12 @@ ExecEvalNullTest(NullTestState *nstate,
static Datum
ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
BooleanTest *btest = (BooleanTest *) bstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(bstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(bstate->arg, econtext, isNull);
switch (btest->booltesttype)
{
@@ -3743,16 +3045,13 @@ ExecEvalBooleanTest(GenericExprState *bstate,
*/
static Datum
ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
CoerceToDomain *ctest = (CoerceToDomain *) cstate->xprstate.expr;
Datum result;
ListCell *l;
- result = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(cstate->arg, econtext, isNull);
/* Make sure we have up-to-date constraints */
UpdateDomainConstraintRef(cstate->constraint_ref);
@@ -3790,8 +3089,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
econtext->domainValue_datum = result;
econtext->domainValue_isNull = *isNull;
- conResult = ExecEvalExpr(con->check_expr,
- econtext, &conIsNull, NULL);
+ conResult = ExecEvalExpr(con->check_expr, econtext,
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
@@ -3826,10 +3125,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
static Datum
ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->domainValue_isNull;
return econtext->domainValue_datum;
}
@@ -3843,8 +3140,7 @@ ExecEvalCoerceToDomainValue(ExprState *exprstate,
static Datum
ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldSelect *fselect = (FieldSelect *) fstate->xprstate.expr;
AttrNumber fieldnum = fselect->fieldnum;
@@ -3857,7 +3153,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
Form_pg_attribute attr;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
- /* this test covers the isDone exception too: */
+ /* if the input tuple Datum is NULL, so is the result */
if (*isNull)
@@ -3922,8 +3218,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
static Datum
ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldStore *fstore = (FieldStore *) fstate->xprstate.expr;
HeapTuple tuple;
@@ -3936,10 +3231,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
ListCell *l1,
*l2;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return tupDatum;
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
/* Lookup tupdesc if first time through or after rescan */
tupDesc = get_cached_rowtype(fstore->resulttype, -1,
@@ -3999,8 +3291,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
values[fieldnum - 1] = ExecEvalExpr(newval,
econtext,
- &isnull[fieldnum - 1],
- NULL);
+ &isnull[fieldnum - 1]);
}
econtext->caseValue_datum = save_datum;
@@ -4023,9 +3314,9 @@ ExecEvalFieldStore(FieldStoreState *fstate,
static Datum
ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- return ExecEvalExpr(exprstate->arg, econtext, isNull, isDone);
+ return ExecEvalExpr(exprstate->arg, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -4037,16 +3328,13 @@ ExecEvalRelabelType(GenericExprState *exprstate,
static Datum
ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
Datum inputval;
char *string;
- inputval = ExecEvalExpr(iostate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return inputval; /* nothing to do */
+ inputval = ExecEvalExpr(iostate->arg, econtext, isNull);
if (*isNull)
string = NULL; /* output functions are not called on nulls */
@@ -4071,16 +3359,14 @@ ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
static Datum
ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) astate->xprstate.expr;
Datum result;
FunctionCallInfoData locfcinfo;
- result = ExecEvalExpr(astate->arg, econtext, isNull, isDone);
+ result = ExecEvalExpr(astate->arg, econtext, isNull);
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to do */
if (*isNull)
return result; /* nothing to do */
@@ -4148,7 +3434,7 @@ ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
*/
static Datum
ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -4165,14 +3451,13 @@ ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
Datum
ExecEvalExprSwitchContext(ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum retDatum;
MemoryContext oldContext;
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
- retDatum = ExecEvalExpr(expression, econtext, isNull, isDone);
+ retDatum = ExecEvalExpr(expression, econtext, isNull);
MemoryContextSwitchTo(oldContext);
return retDatum;
}
@@ -5032,7 +4317,7 @@ ExecQual(List *qual, ExprContext *econtext, bool resultForNull)
Datum expr_value;
bool isNull;
- expr_value = ExecEvalExpr(clause, econtext, &isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, &isNull);
if (isNull)
{
@@ -5090,17 +4375,9 @@ ExecCleanTargetListLength(List *targetlist)
/*
* ExecTargetList
* Evaluates a targetlist with respect to the given
- * expression context. Returns TRUE if we were able to create
- * a result, FALSE if we have exhausted a set-valued expression.
+ * expression context.
*
* Results are stored into the passed values and isnull arrays.
- * The caller must provide an itemIsDone array that persists across calls.
- *
- * As with ExecEvalExpr, the caller should pass isDone = NULL if not
- * prepared to deal with sets of result tuples. Otherwise, a return
- * of *isDone = ExprMultipleResult signifies a set element, and a return
- * of *isDone = ExprEndResult signifies end of the set of tuple.
- * We assume that *isDone has been initialized to ExprSingleResult by caller.
*
* Since fields of the result tuple might be multiply referenced in higher
* plan nodes, we have to force any read/write expanded values to read-only
@@ -5109,19 +4386,16 @@ ExecCleanTargetListLength(List *targetlist)
* actually-multiply-referenced Vars and insert an expression node that
* would do that only where really required.
*/
-static bool
+static void
ExecTargetList(List *targetlist,
TupleDesc tupdesc,
ExprContext *econtext,
Datum *values,
- bool *isnull,
- ExprDoneCond *itemIsDone,
- ExprDoneCond *isDone)
+ bool *isnull)
{
Form_pg_attribute *att = tupdesc->attrs;
MemoryContext oldContext;
ListCell *tl;
- bool haveDoneSets;
/*
* Run in short-lived per-tuple context while computing expressions.
@@ -5131,8 +4405,6 @@ ExecTargetList(List *targetlist,
/*
* evaluate all the expressions in the target list
*/
- haveDoneSets = false; /* any exhausted set exprs in tlist? */
-
foreach(tl, targetlist)
{
GenericExprState *gstate = (GenericExprState *) lfirst(tl);
@@ -5141,117 +4413,15 @@ ExecTargetList(List *targetlist,
values[resind] = ExecEvalExpr(gstate->arg,
econtext,
- &isnull[resind],
- &itemIsDone[resind]);
+ &isnull[resind]);
values[resind] = MakeExpandedObjectReadOnly(values[resind],
isnull[resind],
att[resind]->attlen);
-
- if (itemIsDone[resind] != ExprSingleResult)
- {
- /* We have a set-valued expression in the tlist */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- if (itemIsDone[resind] == ExprMultipleResult)
- {
- /* we have undone sets in the tlist, set flag */
- *isDone = ExprMultipleResult;
- }
- else
- {
- /* we have done sets in the tlist, set flag for that */
- haveDoneSets = true;
- }
- }
- }
-
- if (haveDoneSets)
- {
- /*
- * note: can't get here unless we verified isDone != NULL
- */
- if (*isDone == ExprSingleResult)
- {
- /*
- * all sets are done, so report that tlist expansion is complete.
- */
- *isDone = ExprEndResult;
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- else
- {
- /*
- * We have some done and some undone sets. Restart the done ones
- * so that we can deliver a tuple (if possible).
- */
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
-
- values[resind] = MakeExpandedObjectReadOnly(values[resind],
- isnull[resind],
- att[resind]->attlen);
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- /*
- * Oh dear, this item is returning an empty set. Guess
- * we can't make a tuple after all.
- */
- *isDone = ExprEndResult;
- break;
- }
- }
- }
-
- /*
- * If we cannot make a tuple because some sets are empty, we still
- * have to cycle the nonempty sets to completion, else resources
- * will not be released from subplans etc.
- *
- * XXX is that still necessary?
- */
- if (*isDone == ExprEndResult)
- {
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- while (itemIsDone[resind] == ExprMultipleResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
- /* no need for MakeExpandedObjectReadOnly */
- }
- }
-
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- }
}
- /* Report success */
+ /* Restore the caller's memory context */
MemoryContextSwitchTo(oldContext);
-
- return true;
}
/*
@@ -5268,7 +4438,7 @@ ExecTargetList(List *targetlist,
* result slot.
*/
TupleTableSlot *
-ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
+ExecProject(ProjectionInfo *projInfo)
{
TupleTableSlot *slot;
ExprContext *econtext;
@@ -5285,10 +4455,6 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
slot = projInfo->pi_slot;
econtext = projInfo->pi_exprContext;
- /* Assume single result row until proven otherwise */
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* Clear any former contents of the result slot. This makes it safe for
* us to use the slot's Datum/isnull arrays as workspace. (Also, we can
@@ -5356,21 +4522,15 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
}
/*
- * If there are any generic expressions, evaluate them. It's possible
- * that there are set-returning functions in such expressions; if so and
- * we have reached the end of the set, we return the result slot, which we
- * already marked empty.
+ * If there are any generic expressions, evaluate them.
*/
if (projInfo->pi_targetlist)
{
- if (!ExecTargetList(projInfo->pi_targetlist,
- slot->tts_tupleDescriptor,
- econtext,
- slot->tts_values,
- slot->tts_isnull,
- projInfo->pi_itemIsDone,
- isDone))
- return slot; /* no more result rows, return empty slot */
+ ExecTargetList(projInfo->pi_targetlist,
+ slot->tts_tupleDescriptor,
+ econtext,
+ slot->tts_values,
+ slot->tts_isnull);
}
/*
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index fb0013d..eb224b4 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -125,8 +125,6 @@ ExecScan(ScanState *node,
ExprContext *econtext;
List *qual;
ProjectionInfo *projInfo;
- ExprDoneCond isDone;
- TupleTableSlot *resultSlot;
/*
* Fetch data from node
@@ -146,21 +144,6 @@ ExecScan(ScanState *node,
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- Assert(projInfo); /* can't get here if not projecting */
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -214,15 +197,9 @@ ExecScan(ScanState *node,
{
/*
* Form a projection tuple, store it in the result tuple slot
- * and return it --- unless we find we can project no tuples
- * from this scan tuple, in which case continue scan.
+ * and return it.
*/
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(projInfo);
}
else
{
@@ -352,9 +329,6 @@ ExecScanReScan(ScanState *node)
{
EState *estate = node->ps.state;
- /* Stop projecting any tuples from SRFs in the targetlist */
- node->ps.ps_TupFromTlist = false;
-
/* Rescan EvalPlanQual tuple if we're inside an EvalPlanQual recheck */
if (estate->es_epqScanDone != NULL)
{
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index e937cf8..ded073a 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -592,12 +592,6 @@ ExecBuildProjectionInfo(List *targetList,
projInfo->pi_numSimpleVars = numSimpleVars;
projInfo->pi_directMap = directMap;
- if (exprlist == NIL)
- projInfo->pi_itemIsDone = NULL; /* not needed */
- else
- projInfo->pi_itemIsDone = (ExprDoneCond *)
- palloc(len * sizeof(ExprDoneCond));
-
return projInfo;
}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index ce2fc28..f2ba170 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -859,13 +859,13 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
bool isnull;
res = ExecEvalExprSwitchContext(filter, aggstate->tmpcontext,
- &isnull, NULL);
+ &isnull);
if (isnull || !DatumGetBool(res))
continue;
}
/* Evaluate the current input expressions for this aggregate */
- slot = ExecProject(pertrans->evalproj, NULL);
+ slot = ExecProject(pertrans->evalproj);
if (pertrans->numSortCols > 0)
{
@@ -951,7 +951,7 @@ combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
FunctionCallInfo fcinfo = &pertrans->transfn_fcinfo;
/* Evaluate the current input expressions for this aggregate */
- slot = ExecProject(pertrans->evalproj, NULL);
+ slot = ExecProject(pertrans->evalproj);
Assert(slot->tts_nvalid >= 1);
/*
@@ -1325,8 +1325,7 @@ finalize_aggregate(AggState *aggstate,
fcinfo.arg[i] = ExecEvalExpr(expr,
aggstate->ss.ps.ps_ExprContext,
- &fcinfo.argnull[i],
- NULL);
+ &fcinfo.argnull[i]);
anynull |= fcinfo.argnull[i];
i++;
}
@@ -1566,7 +1565,7 @@ finalize_aggregates(AggState *aggstate,
/*
* Project the result of a group (whose aggs have already been calculated by
* finalize_aggregates). Returns the result slot, or NULL if no row is
- * projected (suppressed by qual or by an empty SRF).
+ * projected (suppressed by qual).
*/
static TupleTableSlot *
project_aggregates(AggState *aggstate)
@@ -1579,20 +1578,10 @@ project_aggregates(AggState *aggstate)
if (ExecQual(aggstate->ss.ps.qual, econtext, false))
{
/*
- * Form and return or store a projection tuple using the aggregate
- * results and the representative input tuple.
+ * Form and return projection tuple using the aggregate results and
+ * the representative input tuple.
*/
- ExprDoneCond isDone;
- TupleTableSlot *result;
-
- result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- aggstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(aggstate->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(aggstate, 1);
@@ -1802,27 +1791,6 @@ ExecAgg(AggState *node)
{
TupleTableSlot *result;
- /*
- * Check to see if we're still projecting out tuples from a previous agg
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
- * (We must do the ps_TupFromTlist check first, because in some cases
- * agg_done gets set before we emit the final aggregate tuple, and we have
- * to finish running SRFs for it.)
- */
if (!node->agg_done)
{
/* Dispatch based on strategy */
@@ -2443,8 +2411,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&aggstate->ss.ps);
ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
- aggstate->ss.ps.ps_TupFromTlist = false;
-
/*
* get the count of aggregates in targetlist and quals
*/
@@ -3411,8 +3377,6 @@ ExecReScanAgg(AggState *node)
node->agg_done = false;
- node->ss.ps.ps_TupFromTlist = false;
-
if (aggnode->aggstrategy == AGG_HASHED)
{
/*
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..16381f6 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -575,8 +575,6 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c
index 3c2f684..1acb166 100644
--- a/src/backend/executor/nodeCtescan.c
+++ b/src/backend/executor/nodeCtescan.c
@@ -265,8 +265,6 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
ExecAssignScanProjectionInfo(&scanstate->ss);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c
index 322abca..b465252 100644
--- a/src/backend/executor/nodeCustom.c
+++ b/src/backend/executor/nodeCustom.c
@@ -48,8 +48,6 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags)
/* create expression context for node */
ExecAssignExprContext(estate, &css->ss.ps);
- css->ss.ps.ps_TupFromTlist = false;
-
/* initialize child expressions */
css->ss.ps.targetlist = (List *)
ExecInitExpr((Expr *) cscan->scan.plan.targetlist,
diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c
index d886aaf..3762843 100644
--- a/src/backend/executor/nodeForeignscan.c
+++ b/src/backend/executor/nodeForeignscan.c
@@ -152,8 +152,6 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index 1da4fde..725a5fe 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -282,8 +282,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
@@ -647,7 +645,6 @@ ExecBeginFunctionResult(FunctionScanState *node,
IsA(funcexpr->expr, FuncExpr))
{
FuncExprState *fcache = (FuncExprState *) funcexpr;
- ExprDoneCond argDone;
/*
* This path is similar to ExecMakeFunctionResult.
@@ -662,7 +659,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
ExecInitFcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, false);
+ econtext->ecxt_per_query_memory);
}
returnsSet = fcache->func.fn_retset;
InitFunctionCallInfoData(perfunc->fcinfo, &(fcache->func),
@@ -681,15 +678,9 @@ ExecBeginFunctionResult(FunctionScanState *node,
* and can be reset each time the node is re-scanned.
*/
oldcontext = MemoryContextSwitchTo(node->argcontext);
- argDone = ExecEvalFuncArgs(&perfunc->fcinfo, fcache->args, econtext);
+ ExecEvalFuncArgs(&perfunc->fcinfo, fcache->args, econtext);
MemoryContextSwitchTo(oldcontext);
- /* We don't allow sets in the arguments of the table function */
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
/*
* If function is strict, and there are any NULL arguments, skip
* calling the function and act like it returned NULL (or an empty
@@ -742,8 +733,7 @@ ExecBeginFunctionResult(FunctionScanState *node,
else
{
perfunc->rsinfo.isDone = ExprSingleResult;
- result = ExecEvalExpr(funcexpr, econtext,
- &perfunc->fcinfo.isnull, NULL);
+ result = ExecEvalExpr(funcexpr, econtext, &perfunc->fcinfo.isnull);
/* done after this, will use SFRM_ValuePerCall branch below */
}
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 438d1b2..51754c8 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -99,8 +99,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
outerNode = outerPlan(node);
outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);
- gatherstate->ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -131,8 +129,6 @@ ExecGather(GatherState *node)
TupleTableSlot *fslot = node->funnel_slot;
int i;
TupleTableSlot *slot;
- TupleTableSlot *resultSlot;
- ExprDoneCond isDone;
ExprContext *econtext;
/*
@@ -198,20 +194,6 @@ ExecGather(GatherState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note we can't do this
* until we're done projecting. This will also clear any previous tuple
@@ -239,13 +221,8 @@ ExecGather(GatherState *node)
* back around for another tuple
*/
econtext->ecxt_outertuple = slot;
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(node->ps.ps_ProjInfo);
}
return slot;
diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c
index dcf5175..2f55c70 100644
--- a/src/backend/executor/nodeGroup.c
+++ b/src/backend/executor/nodeGroup.c
@@ -50,23 +50,6 @@ ExecGroup(GroupState *node)
grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx;
/*
- * Check to see if we're still projecting out tuples from a previous group
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* The ScanTupleSlot holds the (copied) first tuple of each group.
*/
firsttupleslot = node->ss.ss_ScanTupleSlot;
@@ -107,16 +90,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -170,16 +144,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -246,8 +211,6 @@ ExecInitGroup(Group *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&grpstate->ss.ps);
ExecAssignProjectionInfo(&grpstate->ss.ps, NULL);
- grpstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Precompute fmgr lookup data for inner loop
*/
@@ -283,7 +246,6 @@ ExecReScanGroup(GroupState *node)
PlanState *outerPlan = outerPlanState(node);
node->grp_done = FALSE;
- node->ss.ps.ps_TupFromTlist = false;
/* must clear first tuple */
ExecClearTuple(node->ss.ss_ScanTupleSlot);
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 9ed09a7..e008a51 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -963,7 +963,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
/*
* Get the join attribute value of the tuple
*/
- keyval = ExecEvalExpr(keyexpr, econtext, &isNull, NULL);
+ keyval = ExecEvalExpr(keyexpr, econtext, &isNull);
/*
* If the attribute is NULL, and the join operator is strict, then
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index 369e666..45c7be2 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -66,7 +66,6 @@ ExecHashJoin(HashJoinState *node)
List *joinqual;
List *otherqual;
ExprContext *econtext;
- ExprDoneCond isDone;
HashJoinTable hashtable;
TupleTableSlot *outerTupleSlot;
uint32 hashvalue;
@@ -83,22 +82,6 @@ ExecHashJoin(HashJoinState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -315,16 +298,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -354,16 +328,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -393,16 +358,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
{
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -586,7 +542,6 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
/* child Hash node needs to evaluate inner hash keys, too */
((HashState *) innerPlanState(hjstate))->hashkeys = rclauses;
- hjstate->js.ps.ps_TupFromTlist = false;
hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
hjstate->hj_MatchedOuter = false;
hjstate->hj_OuterNotEmpty = false;
@@ -1000,7 +955,6 @@ ExecReScanHashJoin(HashJoinState *node)
node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
node->hj_CurTuple = NULL;
- node->js.ps.ps_TupFromTlist = false;
node->hj_MatchedOuter = false;
node->hj_FirstOuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..edd45661 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -412,8 +412,6 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..d1b1c23 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -336,8 +336,7 @@ EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext)
node->iss_OrderByValues[i] = ExecEvalExpr(orderby,
econtext,
- &node->iss_OrderByNulls[i],
- NULL);
+ &node->iss_OrderByNulls[i]);
i++;
}
@@ -590,8 +589,7 @@ ExecIndexEvalRuntimeKeys(ExprContext *econtext,
*/
scanvalue = ExecEvalExpr(key_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
scan_key->sk_argument = scanvalue;
@@ -648,8 +646,7 @@ ExecIndexEvalArrayKeys(ExprContext *econtext,
*/
arraydatum = ExecEvalExpr(array_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
result = false;
@@ -837,8 +834,6 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c
index faf32e1..0ef9ea5 100644
--- a/src/backend/executor/nodeLimit.c
+++ b/src/backend/executor/nodeLimit.c
@@ -239,8 +239,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitOffset,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL offset as no offset */
if (isNull)
node->offset = 0;
@@ -263,8 +262,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitCount,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL count as no count (LIMIT ALL) */
if (isNull)
{
@@ -346,18 +344,11 @@ pass_down_bound(LimitState *node, PlanState *child_node)
else if (IsA(child_node, ResultState))
{
/*
- * An extra consideration here is that if the Result is projecting a
- * targetlist that contains any SRFs, we can't assume that every input
- * tuple generates an output tuple, so a Sort underneath might need to
- * return more than N tuples to satisfy LIMIT N. So we cannot use
- * bounded sort.
- *
* If Result supported qual checking, we'd have to punt on seeing a
- * qual, too. Note that having a resconstantqual is not a
- * showstopper: if that fails we're not getting any rows at all.
+ * qual. Note that having a resconstantqual is not a showstopper: if
+ * that fails we're not getting any rows at all.
*/
- if (outerPlanState(child_node) &&
- !expression_returns_set((Node *) child_node->plan->targetlist))
+ if (outerPlanState(child_node))
pass_down_bound(node, outerPlanState(child_node));
}
}
diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c
index 6db09b8..340a2a9 100644
--- a/src/backend/executor/nodeMergejoin.c
+++ b/src/backend/executor/nodeMergejoin.c
@@ -313,7 +313,7 @@ MJEvalOuterValues(MergeJoinState *mergestate)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->ldatum = ExecEvalExpr(clause->lexpr, econtext,
- &clause->lisnull, NULL);
+ &clause->lisnull);
if (clause->lisnull)
{
/* match is impossible; can we end the join early? */
@@ -360,7 +360,7 @@ MJEvalInnerValues(MergeJoinState *mergestate, TupleTableSlot *innerslot)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->rdatum = ExecEvalExpr(clause->rexpr, econtext,
- &clause->risnull, NULL);
+ &clause->risnull);
if (clause->risnull)
{
/* match is impossible; can we end the join early? */
@@ -465,19 +465,10 @@ MJFillOuter(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
MJ_printf("ExecMergeJoin: returning outer fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -506,19 +497,9 @@ MJFillInner(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning inner fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -642,23 +623,6 @@ ExecMergeJoin(MergeJoinState *node)
doFillInner = node->mj_FillInner;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -856,20 +820,9 @@ ExecMergeJoin(MergeJoinState *node)
* qualification succeeded. now form the desired
* projection tuple and return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo,
- &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -1629,7 +1582,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
* initialize join state
*/
mergestate->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- mergestate->js.ps.ps_TupFromTlist = false;
mergestate->mj_MatchedOuter = false;
mergestate->mj_MatchedInner = false;
mergestate->mj_OuterTupleSlot = NULL;
@@ -1684,7 +1636,6 @@ ExecReScanMergeJoin(MergeJoinState *node)
ExecClearTuple(node->mj_MarkedTupleSlot);
node->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- node->js.ps.ps_TupFromTlist = false;
node->mj_MatchedOuter = false;
node->mj_MatchedInner = false;
node->mj_OuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index af7b26c..0e6187b 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -175,7 +175,7 @@ ExecProcessReturning(ResultRelInfo *resultRelInfo,
econtext->ecxt_outertuple = planSlot;
/* Compute the RETURNING expressions */
- return ExecProject(projectReturning, NULL);
+ return ExecProject(projectReturning);
}
/*
@@ -1216,7 +1216,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
}
/* Project the new tuple version */
- ExecProject(resultRelInfo->ri_onConflictSetProj, NULL);
+ ExecProject(resultRelInfo->ri_onConflictSetProj);
/*
* Note that it is possible that the target tuple has been modified in
diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c
index 555fa09..5d30e75 100644
--- a/src/backend/executor/nodeNestloop.c
+++ b/src/backend/executor/nodeNestloop.c
@@ -82,23 +82,6 @@ ExecNestLoop(NestLoopState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -201,19 +184,10 @@ ExecNestLoop(NestLoopState *node)
* the slot containing the result tuple using
* ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -259,19 +233,10 @@ ExecNestLoop(NestLoopState *node)
* qualification was satisfied so we project and return the
* slot containing the result tuple using ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -377,7 +342,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags)
/*
* finally, wipe the current outer tuple clean.
*/
- nlstate->js.ps.ps_TupFromTlist = false;
nlstate->nl_NeedNewOuter = true;
nlstate->nl_MatchedOuter = false;
@@ -441,7 +405,6 @@ ExecReScanNestLoop(NestLoopState *node)
* outer Vars are used as run-time keys...
*/
- node->js.ps.ps_TupFromTlist = false;
node->nl_NeedNewOuter = true;
node->nl_MatchedOuter = false;
}
diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c
index 4007b76..3901351 100644
--- a/src/backend/executor/nodeResult.c
+++ b/src/backend/executor/nodeResult.c
@@ -67,10 +67,8 @@ TupleTableSlot *
ExecResult(ResultState *node)
{
TupleTableSlot *outerTupleSlot;
- TupleTableSlot *resultSlot;
PlanState *outerPlan;
ExprContext *econtext;
- ExprDoneCond isDone;
econtext = node->ps.ps_ExprContext;
@@ -92,20 +90,6 @@ ExecResult(ResultState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -147,18 +131,8 @@ ExecResult(ResultState *node)
node->rs_done = true;
}
- /*
- * form the result tuple using ExecProject(), and return it --- unless
- * the projection produces an empty set, in which case we must loop
- * back to see if there are more outerPlan tuples.
- */
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ /* form the result tuple using ExecProject(), and return it */
+ return ExecProject(node->ps.ps_ProjInfo);
}
return NULL;
@@ -228,8 +202,6 @@ ExecInitResult(Result *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &resstate->ps);
- resstate->ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
@@ -295,7 +267,6 @@ void
ExecReScanResult(ResultState *node)
{
node->rs_done = false;
- node->ps.ps_TupFromTlist = false;
node->rs_checkqual = (node->resconstantqual == NULL) ? false : true;
/*
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 9ce7c02..64396e1 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -188,8 +188,6 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -299,8 +297,7 @@ tablesample_init(SampleScanState *scanstate)
params[i] = ExecEvalExprSwitchContext(argstate,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_ARGUMENT),
@@ -312,8 +309,7 @@ tablesample_init(SampleScanState *scanstate)
{
datum = ExecEvalExprSwitchContext(scanstate->repeatable,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_REPEAT),
diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c
index 00bf3a5..477dc42 100644
--- a/src/backend/executor/nodeSeqscan.c
+++ b/src/backend/executor/nodeSeqscan.c
@@ -206,8 +206,6 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index e503494..5800ca8 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -41,12 +41,10 @@
static Datum ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecHashSubPlan(SubPlanState *node,
ExprContext *econtext,
bool *isNull);
@@ -69,15 +67,12 @@ static bool slotNoNulls(TupleTableSlot *slot);
static Datum
ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
SubPlan *subplan = (SubPlan *) node->xprstate.expr;
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Sanity checks */
if (subplan->subLinkType == CTE_SUBLINK)
@@ -128,7 +123,7 @@ ExecHashSubPlan(SubPlanState *node,
* have to set the econtext to use (hack alert!).
*/
node->projLeft->pi_exprContext = econtext;
- slot = ExecProject(node->projLeft, NULL);
+ slot = ExecProject(node->projLeft);
/*
* Note: because we are typically called in a per-tuple context, we have
@@ -285,8 +280,7 @@ ExecScanSubPlan(SubPlanState *node,
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -403,7 +397,7 @@ ExecScanSubPlan(SubPlanState *node,
}
rowresult = ExecEvalExprSwitchContext(node->testexpr, econtext,
- &rownull, NULL);
+ &rownull);
if (subLinkType == ANY_SUBLINK)
{
@@ -570,7 +564,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)
&(prmdata->isnull));
col++;
}
- slot = ExecProject(node->projRight, NULL);
+ slot = ExecProject(node->projRight);
/*
* If result contains any nulls, store separately or not at all.
@@ -987,8 +981,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -1224,8 +1217,7 @@ ExecInitAlternativeSubPlan(AlternativeSubPlan *asplan, PlanState *parent)
static Datum
ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* Just pass control to the active subplan */
SubPlanState *activesp = (SubPlanState *) list_nth(node->subplans,
@@ -1233,8 +1225,5 @@ ExecAlternativeSubPlan(AlternativeSubPlanState *node,
Assert(IsA(activesp, SubPlanState));
- return ExecSubPlan(activesp,
- econtext,
- isNull,
- isDone);
+ return ExecSubPlan(activesp, econtext, isNull);
}
diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c
index 9bafc62..4de7024 100644
--- a/src/backend/executor/nodeSubqueryscan.c
+++ b/src/backend/executor/nodeSubqueryscan.c
@@ -138,8 +138,6 @@ ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags)
*/
subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags);
- subquerystate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize scan tuple type (needed by ExecAssignScanProjectionInfo)
*/
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index 2604103..e1c736c 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -104,8 +104,7 @@ TidListCreate(TidScanState *tidstate)
itemptr = (ItemPointer)
DatumGetPointer(ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL));
+ &isNull));
if (!isNull &&
ItemPointerIsValid(itemptr) &&
ItemPointerGetBlockNumber(itemptr) < nblocks)
@@ -133,8 +132,7 @@ TidListCreate(TidScanState *tidstate)
exstate = (ExprState *) lsecond(saexstate->fxprstate.args);
arraydatum = ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
continue;
itemarray = DatumGetArrayTypeP(arraydatum);
@@ -469,8 +467,6 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &tidstate->ss.ps);
- tidstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c
index 9c03f8a..18c8ae9 100644
--- a/src/backend/executor/nodeValuesscan.c
+++ b/src/backend/executor/nodeValuesscan.c
@@ -140,8 +140,7 @@ ValuesNext(ValuesScanState *node)
values[resind] = ExecEvalExpr(estate,
econtext,
- &isnull[resind],
- NULL);
+ &isnull[resind]);
/*
* We must force any R/W expanded datums to read-only state, in
@@ -272,8 +271,6 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags)
scanstate->exprlists[i++] = (List *) lfirst(vtl);
}
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index d4c88a1..42550c9 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -256,7 +256,7 @@ advance_windowaggregate(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -272,7 +272,7 @@ advance_windowaggregate(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -418,7 +418,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -434,7 +434,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -1551,15 +1551,12 @@ update_frametailpos(WindowObject winobj, TupleTableSlot *slot)
* ExecWindowAgg receives tuples from its outer subplan and
* stores them into a tuplestore, then processes window functions.
* This node doesn't reduce nor qualify any row so the number of
- * returned rows is exactly the same as its outer subplan's result
- * (ignoring the case of SRFs in the targetlist, that is).
+ * returned rows is exactly the same as its outer subplan's result.
* -----------------
*/
TupleTableSlot *
ExecWindowAgg(WindowAggState *winstate)
{
- TupleTableSlot *result;
- ExprDoneCond isDone;
ExprContext *econtext;
int i;
int numfuncs;
@@ -1568,23 +1565,6 @@ ExecWindowAgg(WindowAggState *winstate)
return NULL;
/*
- * Check to see if we're still projecting out tuples from a previous
- * output tuple (because there is a function-returning-set in the
- * projection expressions). If so, try to project another one.
- */
- if (winstate->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- winstate->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* Compute frame offset values, if any, during first call.
*/
if (winstate->all_first)
@@ -1601,8 +1581,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->startOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->startOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1627,8 +1606,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->endOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->endOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1651,7 +1629,6 @@ ExecWindowAgg(WindowAggState *winstate)
winstate->all_first = false;
}
-restart:
if (winstate->buffer == NULL)
{
/* Initialize for first partition and set current row = 0 */
@@ -1743,17 +1720,8 @@ restart:
* evaluated with respect to that row.
*/
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprEndResult)
- {
- /* SRF in tlist returned no rows, so advance to next input tuple */
- goto restart;
- }
-
- winstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
+ return ExecProject(winstate->ss.ps.ps_ProjInfo);
}
/* -----------------
@@ -1867,8 +1835,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&winstate->ss.ps);
ExecAssignProjectionInfo(&winstate->ss.ps, NULL);
- winstate->ss.ps.ps_TupFromTlist = false;
-
/* Set up data for comparing tuples */
if (node->partNumCols > 0)
winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols,
@@ -2061,8 +2027,6 @@ ExecReScanWindowAgg(WindowAggState *node)
ExprContext *econtext = node->ss.ps.ps_ExprContext;
node->all_done = false;
-
- node->ss.ps.ps_TupFromTlist = false;
node->all_first = true;
/* release tuplestore et al */
@@ -2685,7 +2649,7 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2784,7 +2748,7 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2814,5 +2778,5 @@ WinGetFuncArgCurrent(WindowObject winobj, int argno, bool *isnull)
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c
index cfed6e6..dbb8ea3 100644
--- a/src/backend/executor/nodeWorktablescan.c
+++ b/src/backend/executor/nodeWorktablescan.c
@@ -174,8 +174,6 @@ ExecInitWorkTableScan(WorkTableScan *node, EState *estate, int eflags)
*/
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 986c92b..73862a2 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -151,8 +151,7 @@ static PathTarget *make_window_input_target(PlannerInfo *root,
static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
List *tlist);
static PathTarget *make_sort_input_target(PlannerInfo *root,
- PathTarget *final_target,
- bool *have_postponed_srfs);
+ PathTarget *final_target);
/*****************************************************************************
@@ -1443,8 +1442,6 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
int64 offset_est = 0;
int64 count_est = 0;
double limit_tuples = -1.0;
- bool have_postponed_srfs = false;
- double tlist_rows;
PathTarget *final_target;
RelOptInfo *current_rel;
RelOptInfo *final_rel;
@@ -1710,10 +1707,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
* Figure out whether there's a hard limit on the number of rows that
* query_planner's result subplan needs to return. Even if we know a
* hard limit overall, it doesn't apply if the query has any
- * grouping/aggregation operations. (XXX it also doesn't apply if the
- * tlist contains any SRFs; but checking for that here seems more
- * costly than it's worth, since root->limit_tuples is only used for
- * cost estimates, and only in a small number of cases.)
+ * grouping/aggregation operations.
*/
if (parse->groupClause ||
parse->groupingSets ||
@@ -1757,8 +1751,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
*/
if (parse->sortClause)
sort_input_target = make_sort_input_target(root,
- final_target,
- &have_postponed_srfs);
+ final_target);
else
sort_input_target = final_target;
@@ -1915,50 +1908,17 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
/*
* If ORDER BY was given, consider ways to implement that, and generate a
* new upperrel containing only paths that emit the correct ordering and
- * project the correct final_target. We can apply the original
- * limit_tuples limit in sort costing here, but only if there are no
- * postponed SRFs.
+ * project the correct final_target.
*/
if (parse->sortClause)
{
current_rel = create_ordered_paths(root,
current_rel,
final_target,
- have_postponed_srfs ? -1.0 :
limit_tuples);
}
/*
- * If there are set-returning functions in the tlist, scale up the output
- * rowcounts of all surviving Paths to account for that. Note that if any
- * SRFs appear in sorting or grouping columns, we'll have underestimated
- * the numbers of rows passing through earlier steps; but that's such a
- * weird usage that it doesn't seem worth greatly complicating matters to
- * account for it.
- */
- tlist_rows = tlist_returns_set_rows(tlist);
- if (tlist_rows > 1)
- {
- foreach(lc, current_rel->pathlist)
- {
- Path *path = (Path *) lfirst(lc);
-
- /*
- * We assume that execution costs of the tlist as such were
- * already accounted for. However, it still seems appropriate to
- * charge something more for the executor's general costs of
- * processing the added tuples. The cost is probably less than
- * cpu_tuple_cost, though, so we arbitrarily use half of that.
- */
- path->total_cost += path->rows * (tlist_rows - 1) *
- cpu_tuple_cost / 2;
-
- path->rows *= tlist_rows;
- }
- /* No need to run set_cheapest; we're keeping all paths anyway. */
- }
-
- /*
* Now we are prepared to build the final-output upperrel.
*/
final_rel = fetch_upper_rel(root, UPPERREL_FINAL, NULL);
@@ -4907,20 +4867,12 @@ make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
*
* Our current policy is to postpone volatile expressions till after the sort
* unconditionally (assuming that that's possible, ie they are in plain tlist
- * columns and not ORDER BY/GROUP BY/DISTINCT columns). We also prefer to
- * postpone set-returning expressions, because running them beforehand would
- * bloat the sort dataset, and because it might cause unexpected output order
- * if the sort isn't stable. However there's a constraint on that: all SRFs
- * in the tlist should be evaluated at the same plan step, so that they can
- * run in sync in ExecTargetList. So if any SRFs are in sort columns, we
- * mustn't postpone any SRFs. (Note that in principle that policy should
- * probably get applied to the group/window input targetlists too, but we
- * have not done that historically.) Lastly, expensive expressions are
- * postponed if there is a LIMIT, or if root->tuple_fraction shows that
- * partial evaluation of the query is possible (if neither is true, we expect
- * to have to evaluate the expressions for every row anyway), or if there are
- * any volatile or set-returning expressions (since once we've put in a
- * projection at all, it won't cost any more to postpone more stuff).
+ * columns and not ORDER BY/GROUP BY/DISTINCT columns). Also, expensive
+ * expressions are postponed if there is a LIMIT, or if root->tuple_fraction
+ * shows that partial evaluation of the query is possible (if neither is true,
+ * we expect to have to evaluate the expressions for every row anyway), or if
+ * there are any volatile or set-returning expressions (since once we've put
+ * in a projection at all, it won't cost any more to postpone more stuff).
*
* Another issue that could potentially be considered here is that
* evaluating tlist expressions could result in data that's either wider
@@ -4944,30 +4896,21 @@ make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
* computed earlier.
*
* 'final_target' is the query's final target list (in PathTarget form)
- * 'have_postponed_srfs' is an output argument, see below
*
* The result is the PathTarget to be computed by the plan node immediately
* below the Sort step (and the Distinct step, if any). This will be
* exactly final_target if we decide a projection step wouldn't be helpful.
- *
- * In addition, *have_postponed_srfs is set to TRUE if we choose to postpone
- * any set-returning functions to after the Sort.
*/
static PathTarget *
make_sort_input_target(PlannerInfo *root,
- PathTarget *final_target,
- bool *have_postponed_srfs)
+ PathTarget *final_target)
{
Query *parse = root->parse;
PathTarget *input_target;
int ncols;
- bool *col_is_srf;
bool *postpone_col;
- bool have_srf;
bool have_volatile;
bool have_expensive;
- bool have_srf_sortcols;
- bool postpone_srfs;
List *postponable_cols;
List *postponable_vars;
int i;
@@ -4976,13 +4919,10 @@ make_sort_input_target(PlannerInfo *root,
/* Shouldn't get here unless query has ORDER BY */
Assert(parse->sortClause);
- *have_postponed_srfs = false; /* default result */
-
/* Inspect tlist and collect per-column information */
ncols = list_length(final_target->exprs);
- col_is_srf = (bool *) palloc0(ncols * sizeof(bool));
postpone_col = (bool *) palloc0(ncols * sizeof(bool));
- have_srf = have_volatile = have_expensive = have_srf_sortcols = false;
+ have_volatile = have_expensive = false;
i = 0;
foreach(lc, final_target->exprs)
@@ -5000,16 +4940,9 @@ make_sort_input_target(PlannerInfo *root,
if (get_pathtarget_sortgroupref(final_target, i) == 0)
{
/*
- * Check for SRF or volatile functions. Check the SRF case first
- * because we must know whether we have any postponed SRFs.
+ * Check for volatile functions.
*/
- if (expression_returns_set((Node *) expr))
- {
- /* We'll decide below whether these are postponable */
- col_is_srf[i] = true;
- have_srf = true;
- }
- else if (contain_volatile_functions((Node *) expr))
+ if (contain_volatile_functions((Node *) expr))
{
/* Unconditionally postpone */
postpone_col[i] = true;
@@ -5038,39 +4971,19 @@ make_sort_input_target(PlannerInfo *root,
}
}
}
- else
- {
- /* For sortgroupref cols, just check if any contain SRFs */
- if (!have_srf_sortcols &&
- expression_returns_set((Node *) expr))
- have_srf_sortcols = true;
- }
i++;
}
/*
- * We can postpone SRFs if we have some but none are in sortgroupref cols.
- */
- postpone_srfs = (have_srf && !have_srf_sortcols);
-
- /*
* If we don't need a post-sort projection, just return final_target.
*/
- if (!(postpone_srfs || have_volatile ||
+ if (!(have_volatile ||
(have_expensive &&
(parse->limitCount || root->tuple_fraction > 0))))
return final_target;
/*
- * Report whether the post-sort projection will contain set-returning
- * functions. This is important because it affects whether the Sort can
- * rely on the query's LIMIT (if any) to bound the number of rows it needs
- * to return.
- */
- *have_postponed_srfs = postpone_srfs;
-
- /*
* Construct the sort-input target, taking all non-postponable columns and
* then adding Vars, PlaceHolderVars, Aggrefs, and WindowFuncs found in
* the postponable ones.
@@ -5083,7 +4996,7 @@ make_sort_input_target(PlannerInfo *root,
{
Expr *expr = (Expr *) lfirst(lc);
- if (postpone_col[i] || (postpone_srfs && col_is_srf[i]))
+ if (postpone_col[i])
postponable_cols = lappend(postponable_cols, expr);
else
add_column_to_pathtarget(input_target, expr,
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 9c502bd..8f7a8bd 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -807,11 +807,13 @@ find_window_functions_walker(Node *node, WindowFuncLists *lists)
* Estimate the number of rows returned by a set-returning expression.
* The result is 1 if there are no set-returning functions.
*
- * We use the product of the rowcount estimates of all the functions in
- * the given tree (this corresponds to the behavior of ExecMakeFunctionResult
- * for nested set-returning functions).
+ * We use the product of the rowcount estimates of all the functions in the
+ * given tree (this corresponds to the behavior of ExecMakeFunctionResult for
+ * nested set-returning functions).
*
* Note: keep this in sync with expression_returns_set() in nodes/nodeFuncs.c.
+ *
+ * FIXME: This can possibly be simplified now that targetlist SRFs are gone.
*/
double
expression_returns_set_rows(Node *clause)
@@ -881,40 +883,6 @@ expression_returns_set_rows_walker(Node *node, double *count)
(void *) count);
}
-/*
- * tlist_returns_set_rows
- * Estimate the number of rows returned by a set-returning targetlist.
- * The result is 1 if there are no set-returning functions.
- *
- * Here, the result is the largest rowcount estimate of any of the tlist's
- * expressions, not the product as you would get from naively applying
- * expression_returns_set_rows() to the whole tlist. The behavior actually
- * implemented by ExecTargetList produces a number of rows equal to the least
- * common multiple of the expression rowcounts, so that the product would be
- * a worst-case estimate that is typically not realistic. Taking the max as
- * we do here is a best-case estimate that might not be realistic either,
- * but it's probably closer for typical usages. We don't try to compute the
- * actual LCM because we're working with very approximate estimates, so their
- * LCM would be unduly noisy.
- */
-double
-tlist_returns_set_rows(List *tlist)
-{
- double result = 1;
- ListCell *lc;
-
- foreach(lc, tlist)
- {
- TargetEntry *tle = (TargetEntry *) lfirst(lc);
- double colresult;
-
- colresult = expression_returns_set_rows((Node *) tle->expr);
- if (result < colresult)
- result = colresult;
- }
- return result;
-}
-
/*****************************************************************************
* Subplan clause manipulation
@@ -4899,7 +4867,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* Forget it if the function is not SQL-language or has other showstopper
- * properties. (The nargs check is just paranoia.)
+ * properties. (The nargs and retset checks are just paranoia.)
*/
if (funcform->prolang != SQLlanguageId ||
funcform->prosecdef ||
@@ -5287,7 +5255,7 @@ evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
*/
const_val = ExecEvalExprSwitchContext(exprstate,
GetPerTupleExprContext(estate),
- &const_is_null, NULL);
+ &const_is_null);
/* Get info needed about result datatype */
get_typlenbyval(result_type, &resultTypLen, &resultTypByVal);
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index 2c2efb1..0c59fe8 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -1596,7 +1596,7 @@ operator_predicate_proof(Expr *predicate, Node *clause, bool refute_it)
/* And execute it. */
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
/* Get back to outer memory context */
MemoryContextSwitchTo(oldcontext);
diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c
index 19ee4ce..c568c6c 100644
--- a/src/backend/utils/adt/domains.c
+++ b/src/backend/utils/adt/domains.c
@@ -164,7 +164,7 @@ domain_check_input(Datum value, bool isnull, DomainIOData *my_extra)
conResult = ExecEvalExprSwitchContext(con->check_expr,
econtext,
- &conIsNull, NULL);
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 7ed5bcb..65bf6ad 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -603,7 +603,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
str = NULL;
else
@@ -620,7 +620,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
/* here we can just forget NULL elements immediately */
if (!isnull)
{
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 7f11285..79a31a9 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -69,8 +69,8 @@
* now it's just a macro invoking the function pointed to by an ExprState
* node. Beware of double evaluation of the ExprState argument!
*/
-#define ExecEvalExpr(expr, econtext, isNull, isDone) \
- ((*(expr)->evalfunc) (expr, econtext, isNull, isDone))
+#define ExecEvalExpr(expr, econtext, isNull) \
+ ((*(expr)->evalfunc) (expr, econtext, isNull))
/* Hook for plugins to get control in ExecutorStart() */
@@ -235,18 +235,17 @@ extern Datum GetAttributeByNum(HeapTupleHeader tuple, AttrNumber attrno,
extern Datum GetAttributeByName(HeapTupleHeader tuple, const char *attname,
bool *isNull);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
-extern ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
+ bool *isNull);
+extern void ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList, ExprContext *econtext);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern void ExecInitFcache(Oid foid, Oid input_collation, FuncExprState *fcache,
- MemoryContext fcacheCxt, bool needDescForSets);
+ MemoryContext fcacheCxt);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
-extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo,
- ExprDoneCond *isDone);
+extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo);
/*
* prototypes from functions in execScan.c
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index e7fd7bd..043f969 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -243,7 +243,6 @@ typedef struct ProjectionInfo
List *pi_targetlist;
ExprContext *pi_exprContext;
TupleTableSlot *pi_slot;
- ExprDoneCond *pi_itemIsDone;
bool pi_directMap;
int pi_numSimpleVars;
int *pi_varSlotOffsets;
@@ -569,8 +568,7 @@ typedef struct ExprState ExprState;
typedef Datum (*ExprStateEvalFunc) (ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
struct ExprState
{
@@ -692,21 +690,13 @@ typedef struct FuncExprState
TupleTableSlot *funcResultSlot;
/*
- * In some cases we need to compute a tuple descriptor for the function's
- * output. If so, it's stored here.
- */
- TupleDesc funcResultDesc;
- bool funcReturnsTuple; /* valid when funcResultDesc isn't
- * NULL */
-
- /*
* setArgsValid is true when we are evaluating a set-returning function
* that uses value-per-call mode and we are in the middle of a call
* series; we want to pass the same argument values to the function again
* (and again, until it returns ExprEndResult). This indicates that
* fcinfo_data already contains valid argument data.
*/
bool setArgsValid;
/*
* Flag to remember whether we found a set-valued argument to the
@@ -1057,8 +1047,6 @@ typedef struct PlanState
TupleTableSlot *ps_ResultTupleSlot; /* slot for my result tuples */
ExprContext *ps_ExprContext; /* node's expression-evaluation context */
ProjectionInfo *ps_ProjInfo; /* info for doing tuple projection */
- bool ps_TupFromTlist;/* state flag for processing set-valued
- * functions in targetlist */
} PlanState;
/* ----------------
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 7fb5005..e2d44ce 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -54,7 +54,6 @@ extern bool contain_window_function(Node *clause);
extern WindowFuncLists *find_window_functions(Node *clause, Index maxWinRef);
extern double expression_returns_set_rows(Node *clause);
-extern double tlist_returns_set_rows(List *tlist);
extern bool contain_subplans(Node *clause);
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index f9b3b22..d8905c9 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -5644,8 +5644,7 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate,
*/
*result = ExecEvalExpr(expr->expr_simple_state,
econtext,
- isNull,
- NULL);
+ isNull);
/* Assorted cleanup */
expr->expr_simple_in_use = false;
@@ -6312,7 +6311,7 @@ exec_cast_value(PLpgSQL_execstate *estate,
cast_entry->cast_in_use = true;
value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
- isnull, NULL);
+ isnull);
cast_entry->cast_in_use = false;
--
2.9.3
On 2016-08-27 14:48:29 -0700, Andres Freund wrote:
My next steps are to work on cleaning up the code a bit more, and
increase the regression coverage.
Oh, there's one open item I actually don't really know how to handle
well: A decent way of enforcing the join order between the subquery and
the functionscan when there's no lateral dependencies. I've hacked up
the lateral machinery to just always add a pointless dependency, but
that seems fairly ugly. If somebody has a better idea, that'd be great.
Greetings,
Andres Freund
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 08/28/2016 12:48 AM, Andres Freund wrote:
Attached is a significantly updated patch series (see the mail one up
for details about what this is, I don't want to quote it in its
entirety).
There's still some corner cases (DISTINCT + SRF, UNION/INTERSECT with
SRF) to test / implement and a good bit of code cleanup to do. But
feature wise it's pretty much complete.
Looks good, aside from the few FIXMEs, TODOs and XXXs and DIRTYs.
I think we need to come up with a better word for "unsrfify". That's
quite a mouthful. Perhaps something as boring as
"convert_srfs_to_function_rtes".
Would it make sense for addRangeTableEntryForFunction() to take a List
of RangeFunctionElems as argument, now that we have such a struct? The
lists-of-same-length approach gets a bit tedious.
Typos:
s/fortfour/forfour
s/Each element of this list a/ Each element of this list is a/
- Heikki
On 2016-08-29 12:56:25 +0300, Heikki Linnakangas wrote:
On 08/28/2016 12:48 AM, Andres Freund wrote:
Attached is a significantly updated patch series (see the mail one up
for details about what this is, I don't want to quote it in its
entirety).
There's still some corner cases (DISTINCT + SRF, UNION/INTERSECT with
SRF) to test / implement and a good bit of code cleanup to do. But
feature wise it's pretty much complete.
Looks good
Thanks for the look!
aside from the few FIXMEs, TODOs and XXXs
Those I pretty much know to handle.
DIRTYs.
But I think this one is the "ordering" dependency information, and there
I don't yet have good idea.
I think we need to come up with a better word for "unsrfify". That's quite a
mouthful. Perhaps something as boring as "convert_srfs_to_function_rtes".
Yea, that was more of a working title. Maybe implement_targetlist_srfs()?
Would it make sense for addRangeTableEntryForFunction() to take a List of
RangeFunctionElems as argument, now that we have such a struct? The
lists-of-same-length approach gets a bit tedious.
Yea, I was thinking the same.
Typos:
s/fortfour/forfour
s/Each element of this list a/ Each element of this list is a/
Thanks.
Greetings,
Andres Freund
On Tue, Aug 23, 2016 at 3:10 AM, Andres Freund <andres@anarazel.de> wrote:
as noted in [1] I started hacking on removing the current implementation
of SRFs in the targetlist (tSRFs henceforth). IM discussion brought the
need for a description of the problem, need and approach to light.
Thanks for writing this up.
1) How to deal with the least-common-multiple behaviour of tSRFs. E.g.
=# SELECT generate_series(1, 3), generate_series(1,2);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 1 │
│ 1 │ 2 │
│ 2 │ 1 │
│ 3 │ 2 │
└─────────────────┴─────────────────┘
(6 rows)
but
=# SELECT generate_series(1, 3), generate_series(5,7);
returning
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 5 │
│ 2 │ 6 │
│ 3 │ 7 │
└─────────────────┴─────────────────┘
discussion in this thread came, according to my reading, to the
conclusion that that behaviour is just confusing and that the ROWS FROM
behaviour of
=# SELECT * FROM ROWS FROM(generate_series(1, 3), generate_series(1,2));
┌─────────────────┬─────────────────┐
│ generate_series │ generate_series │
├─────────────────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ (null) │
└─────────────────┴─────────────────┘
(3 rows)
makes more sense. We also discussed erroring out if two SRFs return
differing amount of rows, but that seems not to be preferred so far. And
we can easily add it if we want.
This all seems fine. I don't think erroring out is an improvement.
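The two behaviours quoted above can be modelled in a few lines of Python (a sketch with hypothetical names, not executor code): tsrf_rows mimics the historical least-common-multiple cycling of targetlist SRFs, rows_from the ROWS FROM lockstep semantics that pad exhausted functions with NULL.

```python
from itertools import zip_longest
from math import lcm

def tsrf_rows(*cols):
    """Historical tSRF behaviour: emit lcm(len(c1), len(c2), ...) rows,
    cycling each column independently through its values."""
    n = lcm(*(len(c) for c in cols))
    return [tuple(c[i % len(c)] for c in cols) for i in range(n)]

def rows_from(*cols):
    """ROWS FROM(...) behaviour: advance all functions in lockstep,
    padding the shorter ones with NULL (None here)."""
    return list(zip_longest(*cols))

# generate_series(1,3) with generate_series(1,2): 6 rows vs 3 rows
print(tsrf_rows([1, 2, 3], [1, 2]))
print(rows_from([1, 2, 3], [1, 2]))
```

With equal-length inputs (the generate_series(1,3) / generate_series(5,7) case) both models collapse to the same 3-row result, which is why the difference only shows up for mismatched row counts.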
2) A naive conversion to ROWS FROM, like in the example in the
introductory paragraph, can change the output, when implemented as a
join from ROWS FROM to the rest of the query, rather than the other
way round. E.g.
=# EXPLAIN SELECT * FROM few, ROWS FROM(generate_series(1,10));
┌──────────────────────────────────────────────────────────────────────────────┐
│ QUERY PLAN │
├──────────────────────────────────────────────────────────────────────────────┤
│ Nested Loop (cost=0.00..36.03 rows=2000 width=8) │
│ -> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=4) │
│ -> Materialize (cost=0.00..1.03 rows=2 width=4) │
│ -> Seq Scan on few (cost=0.00..1.02 rows=2 width=4) │
└──────────────────────────────────────────────────────────────────────────────┘
(4 rows)
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.
In general, we've been skeptical about giving any guarantees about
result ordering. Maybe this case is different and we should give some
guarantee here, but I don't think it's 100% obvious.
3) tSRFs are evaluated after GROUP BY, and window functions:
=# SELECT generate_series(1, count(*)) FROM (VALUES(1),(2),(10)) f;
┌─────────────────┐
│ generate_series │
├─────────────────┤
│ 1 │
│ 2 │
│ 3 │
└─────────────────┘
which means we have to push the "original" query into a subquery, with
the ROWS FROM laterally referencing the subquery:
SELECT generate_series FROM (SELECT count(*) FROM (VALUES(1),(2),(10)) f) s, ROWS FROM (generate_series(1,s.count));
Seems OK.
4) The evaluation order of tSRFs in combination with ORDER BY is a bit
confusing. Namely tSRFs are implemented after ORDER BY has been
evaluated, unless the ORDER BY references the SRF.
E.g.
=# SELECT few.id, generate_series FROM ROWS FROM(generate_series(1,3)),few ORDER BY few.id DESC;
might return
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 24 │ 3 │
│ 24 │ 2 │
│ 24 │ 1 │
..
instead of
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 24 │ 1 │
│ 24 │ 2 │
│ 24 │ 3 │
as before.

Which means we'll sometimes have to push down the ORDER BY into the
subquery (when it doesn't reference tSRFs, so they're evaluated first),
and sometimes evaluate it on the outside (if tSRFs are referenced).
OK.
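The pushed-down case can be sketched like this (a Python toy, not executor code): sorting happens inside the subquery, then the lateral SRF expands each sorted row, so the series stays ascending per id as in the historical output.

```python
def query_with_srf(rows, order_desc):
    """Model of ORDER BY pushed into the subquery: sort first, then
    laterally expand a fixed generate_series(1,3) per row."""
    ordered = sorted(rows, reverse=order_desc)
    return [(r, s) for r in ordered for s in (1, 2, 3)]


# ORDER BY id DESC, series still ascending within each id:
print(query_with_srf([1, 24], order_desc=True))
# [(24, 1), (24, 2), (24, 3), (1, 1), (1, 2), (1, 3)]
```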
5) tSRFs can have tSRFs as argument, e.g.:
=# SELECT generate_series(1, generate_series(1,3));
┌─────────────────┐
│ generate_series │
├─────────────────┤
│ 1 │
│ 1 │
│ 2 │
│ 1 │
│ 2 │
│ 3 │
└─────────────────┘
that can quite easily be implemented by having the "nested" tSRF
evaluate as a separate ROWS FROM expression.

Which even allows us to implement the previously forbidden
=# SELECT generate_series(generate_series(1,3), generate_series(2,4));
ERROR: 0A000: functions and operators can take at most one set argument

- not that I think that's of great value ;)
OK.
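Modeling each SRF as a generator shows how the nested case above decomposes into two lateral steps (a sketch, not the actual implementation): the inner SRF becomes its own ROWS FROM step, and the outer SRF is evaluated once per inner row.

```python
def generate_series(start, stop):
    # Inclusive integer series, like the SQL function of the same name.
    yield from range(start, stop + 1)


# SELECT generate_series(1, generate_series(1,3)):
# inner SRF drives the loop, outer SRF is lateral on each inner row.
result = [n for stop in generate_series(1, 3)
            for n in generate_series(1, stop)]
print(result)  # [1, 1, 2, 1, 2, 3]
```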
6) SETOF record type functions cannot directly be used in ROWS FROM() -
as ROWS FROM "expands" records returned by functions. When converting
something like
CREATE OR REPLACE FUNCTION setof_record_sql() RETURNS SETOF record LANGUAGE sql AS $$SELECT 1 AS a, 2 AS b UNION ALL SELECT 1, 2;$$;
SELECT setof_record_sql();
we don't have that available though.

The best way to handle that seems to be to introduce the ability for
ROWS FROM not to expand the record returned by a column. I'm currently
thinking that something like ROWS FROM(setof_record_sql() AS ()) would
do the trick. That'd also considerably simplify the handling of
functions returning known composite types - my current POC patch
generates a ROW(a,b,..) type expression for those.

I'm open to better syntax suggestions.
I definitely agree that having some syntax to avoid row-expansion in
this case (and maybe in other cases) would be a good thing; I suspect
that would get a good bit of use. I don't care much for that
particular choice of syntax, which seems fairly magical, but I'm not
sure what would be better.
7) ROWS FROM () / functions in the FROM list are currently significantly
slower than the equivalent in the target list (for SFRM_ValuePerCall
SRFs at least):

=# COPY (SELECT generate_series(1,10000000)) TO '/dev/null';
COPY 10000000
Time: 1311.469 ms
=# COPY (SELECT * FROM generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.0", size 140000000
LOCATION: FileClose, fd.c:1484
COPY 10000000
Time: 2173.282 ms
for SFRM_Materialize SRFs there's no meaningful difference:

CREATE FUNCTION plpgsql_generate_series(bigint, bigint) RETURNS SETOF bigint LANGUAGE plpgsql AS $$BEGIN RETURN QUERY SELECT generate_series($1, $2);END;$$;

=# COPY (SELECT plpgsql_generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.2", size 180000000
COPY 10000000
Time: 3058.437 ms

=# COPY (SELECT * FROM plpgsql_generate_series(1,10000000)) TO '/dev/null';
LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp702.1", size 180000000
COPY 10000000
Time: 2964.661 ms

that makes sense, because nodeFunctionscan.c, via
ExecMakeTableFunctionResult, forces materialization of ValuePerCall
SRFs.

ISTM that we should fix that by allowing ValuePerCall without
materialization, as long as EXEC_FLAG_BACKWARD isn't required.
That sounds good.
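The difference between the two modes can be sketched with generators (illustrative only): ValuePerCall streams rows one at a time and never buffers, while Materialize builds the whole result set (the tuplestore spill visible in the log above) before the first row is returned.

```python
def value_per_call(start, stop):
    """SFRM_ValuePerCall style: produce one row per call, no buffer."""
    n = start
    while n <= stop:
        yield n
        n += 1


def materialized(start, stop):
    """SFRM_Materialize style: build the whole result (a tuplestore,
    possibly spilling to disk) before returning the first row."""
    return list(range(start, stop + 1))


# Streaming lets the consumer stop early without allocating the full set.
first_three = []
for v in value_per_call(1, 10_000_000):
    first_three.append(v)
    if len(first_three) == 3:
        break
```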
I've implemented ([2]) a prototype of this. My basic approach is:
I) During parse-analysis, remember whether a query has any tSRFs
(Query->hasTargetSRF). That avoids doing a useless pass over the
query, if no tSRFs are present.
II) At the beginning of subquery_planner(), before doing any work
operating on subqueries and such, implement SRFs if ->hasTargetSRF().
(unsrfify() in the POC)
III) Unconditionally move the "current" query into a subquery. For that
do a mutator pass over the query, replacing Vars/Aggrefs/... in the
original targetlist with Var references to the new subquery.
(unsrfify_reference_subquery_mutator() in the POC)
IV) Do a pass over the outer query's targetlist, and implement any tSRFs
using a ROWS FROM() RTE (or multiple ones in case of nested tSRFs).
(unsrfify_implement_srfs_mutator() in the POC)

That seems to mostly work well.
I gather that III and IV are skipped if hasTargetSRF isn't set.
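Steps III and IV can be sketched as a toy rewrite over a simplified query representation (the dict shapes and names here are purely illustrative, not the actual planner data structures, and lateral references into the subquery are glossed over):

```python
def unsrfify(targets, from_clause):
    """Toy model of the POC's rewrite: move the original query into a
    subquery, and implement each tSRF as a separate ROWS FROM RTE."""
    inner_targets = [t for t in targets if t[0] != "srf"]
    srfs = [t for t in targets if t[0] == "srf"]
    # III) push the original targetlist/FROM into a subquery
    subquery = {"targets": inner_targets, "from": from_clause}
    # IV) one ROWS FROM RTE per tSRF
    rows_from = [("rows_from", name, args) for _, name, args in srfs]
    # outer targetlist only references the subquery / ROWS FROM outputs
    outer_targets = [("var", t[1]) for t in inner_targets] + \
                    [("var", name) for _, name, _ in srfs]
    return {"targets": outer_targets,
            "from": [("subquery", subquery)] + rows_from}


q = unsrfify([("col", "id"), ("srf", "generate_series", (1, 3))], ["few"])
# q["from"] now holds the subquery plus one ROWS FROM entry
```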
The behaviour changes this implies are:
a) Least-common-multiple behaviour, as in (1) above, is gone. I think
that's good.

b) We currently allow tSRFs in UPDATE ... SET expressions. I don't
actually know what that's supposed to mean. E.g.:
=# CREATE TABLE blarg AS SELECT 1::int a;
SELECT 1
=# UPDATE blarg SET a = generate_series(2,3);
UPDATE 1
=# SELECT * FROM blarg ;
┌───┐
│ a │
├───┤
│ 2 │
└───┘
I'm inclined to think that that's a bad idea, and should rather be
forbidden.

c) COALESCE/CASE have, so far, short-circuited tSRF expansion. E.g.
SELECT id, COALESCE(1, generate_series(1,2)) FROM (VALUES(1),(2)) few(id);
returns only two rows, despite the generate_series(). But by
implementing the generate_series as a ROWS FROM, it'd return four.

I think that's ok.
Those all sound OK.
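The COALESCE change in (c) can be modeled like this (a Python toy, with the SRF length fixed at 2 to match the example):

```python
def coalesce_shortcut(rows):
    # Historical behaviour: COALESCE(1, srf) short-circuits, the SRF
    # never runs, so each input row yields exactly one output row.
    return [(r, 1) for r in rows]


def coalesce_rows_from(rows):
    # After the rewrite the SRF becomes a ROWS FROM RTE that is always
    # evaluated; COALESCE is then applied once per joined row.
    return [(r, 1) for r in rows for _ in (1, 2)]


print(len(coalesce_shortcut([1, 2])))   # 2 rows
print(len(coalesce_rows_from([1, 2])))  # 4 rows
```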
d) Not a problem with the patch per se, but I'm doubtful that that's ok:
=# SELECT 1 ORDER BY generate_series(1, 10);
returns 10 rows ;) - maybe we should forbid that?
OK by me. I feel like this isn't the only case where the presence of
resjunk columns has user-visible effects, although I can't think of
another one right at the moment. It seems like something to avoid,
though.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Sun, Aug 28, 2016 at 3:18 AM, Andres Freund <andres@anarazel.de> wrote:
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.

In my present patch I've *ripped out* the support for materialization
in nodeFunctionscan.c entirely. That means that rescans referencing
volatile functions can change their behaviour (if a function is
rescanned, without having its parameters changed), and that native
backward scan support is gone. I don't think that's actually an issue.
Can you expand on why you think those things aren't an issue? Because
it seems like they might be.
0006-Remove-unused-code-related-to-targetlist-SRFs.patch
Now that there's no targetlist SRFs at execution time anymore, rip out
executor and planner code related to that. There's possibly more, but
that's what I could find in a couple passes of searching around.

This actually speeds up TPC-H queries by roughly 4% for me.
Nice.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Sep 2, 2016 at 3:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Aug 23, 2016 at 3:10 AM, Andres Freund <andres@anarazel.de> wrote:
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.

In general, we've been skeptical about giving any guarantees about
result ordering.
+1
I think it is a very bad idea to move away from the statement that
a query generates a set of rows, and that no order is guaranteed
unless the top level has an ORDER BY clause. How hard is it to add
ORDER BY 1, 2 to the above query? Let the optimizer notice when a
node returns data in the needed order and skip the sort if possible.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-09-02 09:05:35 -0500, Kevin Grittner wrote:
On Fri, Sep 2, 2016 at 3:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Aug 23, 2016 at 3:10 AM, Andres Freund <andres@anarazel.de> wrote:
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.

In general, we've been skeptical about giving any guarantees about
result ordering.
Well, it's historically how we behaved for SRFs. I'm pretty sure that
people will be confused if
SELECT generate_series(1, 10) FROM sometbl;
suddenly returns rows in an order that is reversed from what
generate_series() returns.
+1
I think it is a very bad idea to move away from the statement that
a query generates a set of rows, and that no order is guaranteed
unless the top level has an ORDER BY clause. How hard is it to add
ORDER BY 1, 2 to the above query? Let the optimizer notice when a
node returns data in the needed order and skip the sort if possible.
There's no such infrastructure for SRFs/ROWS FROM.
Andres
Hi,
Thanks for looking.
On 2016-09-02 14:01:32 +0530, Robert Haas wrote:
6) SETOF record type functions cannot directly be used in ROWS FROM() -
as ROWS FROM "expands" records returned by functions. When converting
something like
CREATE OR REPLACE FUNCTION setof_record_sql() RETURNS SETOF record LANGUAGE sql AS $$SELECT 1 AS a, 2 AS b UNION ALL SELECT 1, 2;$$;
SELECT setof_record_sql();
we don't have that available though.The best way to handle that seems to be to introduce the ability for
ROWS FROM not to expand the record returned by a column. I'm currently
thinking that something like ROWS FROM(setof_record_sql() AS ()) would
do the trick. That'd also considerably simplify the handling of
functions returning known composite types - my current POC patch
generates a ROW(a,b,..) type expression for those.I'm open to better syntax suggestions.
I definitely agree that having some syntax to avoid row-expansion in
this case (and maybe in other cases) would be a good thing; I suspect
that would get a good bit of use. I don't care much for that
particular choice of syntax, which seems fairly magical, but I'm not
sure what would be better.
I'm not a fan either, but until somebody comes up with something better
:/
That sounds good.
I've implemented ([2]) a prototype of this. My basic approach is:
I) During parse-analysis, remember whether a query has any tSRFs
(Query->hasTargetSRF). That avoids doing a useless pass over the
query, if no tSRFs are present.
II) At the beginning of subquery_planner(), before doing any work
operating on subqueries and such, implement SRFs if ->hasTargetSRF().
(unsrfify() in the POC)
III) Unconditionally move the "current" query into a subquery. For that
do a mutator pass over the query, replacing Vars/Aggrefs/... in the
original targetlist with Var references to the new subquery.
(unsrfify_reference_subquery_mutator() in the POC)
IV) Do a pass over the outer query's targetlist, and implement any tSRFs
using a ROWS FROM() RTE (or multiple ones in case of nested tSRFs).
(unsrfify_implement_srfs_mutator() in the POC)that seems to mostly work well.
I gather that III and IV are skipped if hasTargetSRF isn't set.
Precisely.
d) Not a problem with the patch per-se, but I'm doubful that that's ok:
=# SELECT 1 ORDER BY generate_series(1, 10);
returns 10 rows ;) - maybe we should forbid that?

OK by me. I feel like this isn't the only case where the presence of
resjunk columns has user-visible effects, although I can't think of
another one right at the moment. It seems like something to avoid,
though.
An early patch in the series now errors out if ORDER BY or GROUP BY adds
a retset resjunk element.
Regards,
Andres
On Fri, Sep 2, 2016 at 9:11 AM, Andres Freund <andres@anarazel.de> wrote:
On 2016-09-02 09:05:35 -0500, Kevin Grittner wrote:
On Fri, Sep 2, 2016 at 3:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Aug 23, 2016 at 3:10 AM, Andres Freund <andres@anarazel.de> wrote:
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.In general, we've been skeptical about giving any guarantees about
result ordering.

Well, it's historically how we behaved for SRFs.
And until we had synchronized scans a sequential scan always
returned rows in the order they were present in the heap.
Implementation details are not guarantees.
I'm pretty sure that people will be confused if
SELECT generate_series(1, 10) FROM sometbl;
suddenly returns rows in an order that is reversed from what
generate_series() returns.
If this changes, it is probably worth a mentioning in the release
notes.
I think it is a very bad idea to move away from the statement that
a query generates a set of rows, and that no order is guaranteed
unless the top level has an ORDER BY clause. How hard is it to add
ORDER BY 1, 2 to the above query? Let the optimizer notice when a
node returns data in the needed order and skip the sort if possible.

There's no such infrastructure for SRFs/ROWS FROM.
Well, that's something to fix (or not), but not a justification for
"except on Tuesdays when the moon is full" sorts of exceptions to
simple rules about what to expect. No ORDER BY means no order
guaranteed.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-09-02 07:11:10 -0700, Andres Freund wrote:
On 2016-09-02 09:05:35 -0500, Kevin Grittner wrote:
On Fri, Sep 2, 2016 at 3:31 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, Aug 23, 2016 at 3:10 AM, Andres Freund <andres@anarazel.de> wrote:
=# SELECT * FROM few, ROWS FROM(generate_series(1,3));
┌────┬─────────────────┐
│ id │ generate_series │
├────┼─────────────────┤
│ 1 │ 1 │
│ 2 │ 1 │
│ 1 │ 2 │
│ 2 │ 2 │
│ 1 │ 3 │
│ 2 │ 3 │
└────┴─────────────────┘
(6 rows)
surely isn't what was intended. So the join order needs to be enforced.

In general, we've been skeptical about giving any guarantees about
result ordering.

Well, it's historically how we behaved for SRFs. I'm pretty sure that
people will be confused if
SELECT generate_series(1, 10) FROM sometbl;
suddenly returns rows in an order that is reversed from what
generate_series() returns.
Oh, and we've previously re-added that based on
complaints. C.f. d543170f2fdd6d9845aaf91dc0f6be7a2bf0d9e7 (and others
IIRC).
Andres Freund <andres@anarazel.de> writes:
On 2016-09-02 09:05:35 -0500, Kevin Grittner wrote:
In general, we've been skeptical about giving any guarantees about
result ordering.
Well, it's historically how we behaved for SRFs. I'm pretty sure that
people will be confused if
SELECT generate_series(1, 10) FROM sometbl;
suddenly returns rows in an order that is reversed from what
generate_series() returns.
True, but how much "enforcement" do we need really? This will be a cross
product join, which means that it can only be done as a nestloop not as a
merge or hash (there being no join key to merge or hash on). ISTM all we
need is that the SRF be on the inside of the join, which is automatic
if it's LATERAL.
I think it is a very bad idea to move away from the statement that
a query generates a set of rows, and that no order is guaranteed
unless the top level has an ORDER BY clause. How hard is it to add
ORDER BY 1, 2 to the above query? Let the optimizer notice when a
node returns data in the needed order and skip the sort if possible.
There's no such infrastructure for SRFS/ROWS FROM.
And in particular nothing to ORDER BY in this example.
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
Oh, and we've previously re-added that based on
complaints. C.f. d543170f2fdd6d9845aaf91dc0f6be7a2bf0d9e7 (and others
IIRC).
That one wasn't about row order per se, but I agree that people *will*
bitch if we change the behavior, especially if we don't provide a way
to fix it. ORDER BY is not a useful suggestion when there is nothing
you could order by to get the old behavior.
regards, tom lane
On 2016-09-02 10:20:42 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-02 09:05:35 -0500, Kevin Grittner wrote:
In general, we've been skeptical about giving any guarantees about
result ordering.

Well, it's historically how we behaved for SRFs. I'm pretty sure that
people will be confused if
SELECT generate_series(1, 10) FROM sometbl;
suddenly returns rows in an order that is reversed from what
generate_series() returns.

True, but how much "enforcement" do we need really? This will be a cross
product join, which means that it can only be done as a nestloop not as a
merge or hash (there being no join key to merge or hash on). ISTM all we
need is that the SRF be on the inside of the join, which is automatic
if it's LATERAL.
Right. But there's nothing to force a lateral reference to be there
intrinsically. I've added a "fake" lateral reference to the ROWS FROM
RTE to the subquery, when there's none otherwise, but that's not
entirely pretty. I'm inclined to go with that though, unless somebody
has a better idea.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2016-09-02 10:20:42 -0400, Tom Lane wrote:
... ISTM all we
need is that the SRF be on the inside of the join, which is automatic
if it's LATERAL.
Right. But there's nothing to force a lateral reference to be there
intrinsically. I've added a "fake" lateral reference to the ROWS FROM
RTE to the subquery, when there's none otherwise, but that's not
entirely pretty.
Hm, do you get cases like this right:
select generate_series(1, t1.a) from t1, t2;
That would result in a lateral ref from the SRF RTE to t1, but you really
need to treat it as laterally dependent on the join of t1/t2 in order to
preserve the old semantics. That is, you need to be laterally dependent
on the whole FROM clause regardless of which variable references appear.
regards, tom lane
On Fri, Sep 2, 2016 at 9:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
Oh, and we've previously re-added that based on
complaints. C.f. d543170f2fdd6d9845aaf91dc0f6be7a2bf0d9e7 (and others
IIRC).

That one wasn't about row order per se, but I agree that people *will*
bitch if we change the behavior, especially if we don't provide a way
to fix it.
They might also bitch if you add any overhead to put rows in a
specific order when they subsequently sort the rows into some
different order. You might even destroy an order that would have
allowed a sort step to be skipped, so you would pay twice -- once
to put them into some "implied" order and then to sort them back
into the order they would have had without that extra effort.
ORDER BY is not a useful suggestion when there is nothing
you could order by to get the old behavior.
I'm apparently missing something, because I see a column with the
header "generate_series" in the result set.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Kevin Grittner <kgrittn@gmail.com> writes:
On Fri, Sep 2, 2016 at 9:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
ORDER BY is not a useful suggestion when there is nothing
you could order by to get the old behavior.
I'm apparently missing something, because I see a column with the
header "generate_series" in the result set.
You are apparently only thinking about generate_series and not any
other SRF. Other SRFs don't necessarily produce outputs that are
in a nice sortable order. Even for one that does, sorting by it
would destroy the existing behavior:
regression=# select *, generate_series(1,3) from int8_tbl;
q1 | q2 | generate_series
------------------+-------------------+-----------------
123 | 456 | 1
123 | 456 | 2
123 | 456 | 3
123 | 4567890123456789 | 1
123 | 4567890123456789 | 2
123 | 4567890123456789 | 3
4567890123456789 | 123 | 1
4567890123456789 | 123 | 2
4567890123456789 | 123 | 3
4567890123456789 | 4567890123456789 | 1
4567890123456789 | 4567890123456789 | 2
4567890123456789 | 4567890123456789 | 3
4567890123456789 | -4567890123456789 | 1
4567890123456789 | -4567890123456789 | 2
4567890123456789 | -4567890123456789 | 3
(15 rows)
Now you could argue that the ordering of the table rows
themselves is poorly defined, and you'd be right, but that
doesn't change the fact that the generate_series output
has a well-defined repeating sequence. People might be
relying on that property.
regards, tom lane
On Fri, Sep 2, 2016 at 9:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
regression=# select *, generate_series(1,3) from int8_tbl;
I'm sure that you realize that running a query of that form twice
against a table with more than one heap page could result in rows
in a different order, even if no changes had been made to the
database (including no vacuum activity, auto- or otherwise). If
someone reported that as a bug, what would we tell them?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Kevin Grittner <kgrittn@gmail.com> writes:
On Fri, Sep 2, 2016 at 9:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
regression=# select *, generate_series(1,3) from int8_tbl;
I'm sure that you realize that running a query of that form twice
against a table with more than one heap page could result in rows
in a different order, even if no changes had been made to the
database (including no vacuum activity, auto- or otherwise).
You missed my point: they might complain about the generate_series
output not being in the order they expect, independently of what
the table rows are.
Also, before getting too high and mighty with users who expect
"select * from table" to produce rows in a predictable order,
you should reflect on the number of places in our regression
tests that assume exactly that ...
regards, tom lane
On 2016-09-02 10:34:54 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-02 10:20:42 -0400, Tom Lane wrote:
... ISTM all we
need is that the SRF be on the inside of the join, which is automatic
if it's LATERAL.Right. But there's nothing to force a lateral reference to be there
intrinsically. I've added a "fake" lateral reference to the ROWS FROM
RTE to the subquery, when there's none otherwise, but that's not
entirely pretty.

Hm, do you get cases like this right:
select generate_series(1, t1.a) from t1, t2;
That would result in a lateral ref from the SRF RTE to t1, but you really
need to treat it as laterally dependent on the join of t1/t2 in order to
preserve the old semantics. That is, you need to be laterally dependent
on the whole FROM clause regardless of which variable references appear.
Yes - as the original query is moved into a subquery, the lateral
dependency I force-add simply is to the entire subquery atm (as a
wholerow var).
On 2016-09-02 09:41:28 -0500, Kevin Grittner wrote:
On Fri, Sep 2, 2016 at 9:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
Oh, and we've previously re-added that based on
complaints. C.f. d543170f2fdd6d9845aaf91dc0f6be7a2bf0d9e7 (and others
IIRC).

That one wasn't about row order per se, but I agree that people *will*
bitch if we change the behavior, especially if we don't provide a way
to fix it.

They might also bitch if you add any overhead to put rows in a
specific order when they subsequently sort the rows into some
different order.
Huh? It's just the order the SRFs are returning rows. If they
subsequently ORDER, there's no issue. And that doesn't have a
performance impact afaict.
You might even destroy an order that would have
allowed a sort step to be skipped, so you would pay twice -- once
to put them into some "implied" order and then to sort them back
into the order they would have had without that extra effort.
So you're arguing that you can't rely on order, but that users rely on
order?
Greetings,
Andres Freund
On Fri, Sep 2, 2016 at 10:31 AM, Andres Freund <andres@anarazel.de> wrote:
On 2016-09-02 09:41:28 -0500, Kevin Grittner wrote:
On Fri, Sep 2, 2016 at 9:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
Oh, and we've previously re-added that based on
complaints. C.f. d543170f2fdd6d9845aaf91dc0f6be7a2bf0d9e7 (and others
IIRC).

That one wasn't about row order per se, but I agree that people *will*
bitch if we change the behavior, especially if we don't provide a way
to fix it.

They might also bitch if you add any overhead to put rows in a
specific order when they subsequently sort the rows into some
different order.

Huh? It's just the order the SRFs are returning rows. If they
subsequently ORDER, there's no issue. And that doesn't have a
performance impact afaict.
If it has no significant performance impact to maintain the
historical order, then I have no problem with doing so. If you
burn resources putting them into historical order, that is going to
be completely wasted effort in some queries. THAT is what I would
object to. I'm certainly not arguing that we have any reason to go
out of our way to change the order.
You might even destroy an order that would have
allowed a sort step to be skipped, so you would pay twice -- once
to put them into some "implied" order and then to sort them back
into the order they would have had without that extra effort.

So you're arguing that you can't rely on order, but that users rely on
order?
No. I'm arguing that we track the order coming out of different
nodes during planning, and sometimes take advantage of it to avoid
a sort which would otherwise be required.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Sep 2, 2016 at 10:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Also, before getting too high and mighty with users who expect
"select * from table" to produce rows in a predictable order,
you should reflect on the number of places in our regression
tests that assume exactly that ...
An assumption that not infrequently breaks. AFAIK, we generally
adjust the tests when that happens, rather than considering it a
bug in the code. I never thought we did that because there was a
secret, undocumented guarantee of order, but to allow different
code paths to be tested than we would test if we always specified
an order.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Sep 2, 2016 at 10:10 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Kevin Grittner <kgrittn@gmail.com> writes:
On Fri, Sep 2, 2016 at 9:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
regression=# select *, generate_series(1,3) from int8_tbl;
I'm sure that you realize that running a query of that form twice
against a table with more than one heap page could result in rows
in a different order, even if no changes had been made to the
database (including no vacuum activity, auto- or otherwise).

You missed my point: they might complain about the generate_series
output not being in the order they expect, independently of what
the table rows are.
I didn't miss it, I just never thought that anyone would care about
a secondary sort key if the primary sort key was random. I have
trouble imagining a use-case for that.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-09-02 10:58:59 -0500, Kevin Grittner wrote:
If it has no significant performance impact to maintain the
historical order, then I have no problem with doing so.
It's not really a runtime issue, it's just a question of how to nicely
constrain the join order. There's no additional sorting or such.
No. I'm arguing that we track the order coming out of different
nodes during planning, and sometimes take advantage of it to avoid
a sort which would otherwise be required.
I don't think that's realistically possible with SRFs, given they're
often in some language which we have no insight on from the planner
point of view. We could possibly hack something up for SQL SRFs (that'd
be nice, but I'm doubtful it's worth it), but for everything else it
seems unrealistic. What we could do is to add efficient
ROWS FROM (..) WITH ORDINALITY ORDER BY ordinality;
support.
Andres
Andres Freund <andres@anarazel.de> writes:
... What we could do is to add efficient
ROWS FROM (..) WITH ORDINALITY ORDER BY ordinality;
support.
Hm?
regression=# explain select * from rows from (generate_series(1,10)) with ordinality order by ordinality;
QUERY PLAN
-------------------------------------------------------------------------
Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=12)
(1 row)
regards, tom lane
On 2016-09-02 14:04:24 +0530, Robert Haas wrote:
On Sun, Aug 28, 2016 at 3:18 AM, Andres Freund <andres@anarazel.de> wrote:
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.

In my present patch I've *ripped out* the support for materialization
in nodeFunctionscan.c entirely. That means that rescans referencing
volatile functions can change their behaviour (if a function is
rescanned, without having its parameters changed), and that native
backward scan support is gone. I don't think that's actually an issue.

Can you expand on why you think those things aren't an issue? Because
it seems like they might be.
Backward scans are, by the planner, easily implemented by adding a
materialize node. Which will, when ordinality or multiple ROWS FROM
expressions are present, even be more runtime & memory efficient. I
also don't think all that many people use FOR SCROLL cursors over
SRF-containing queries.
The part about rewinding is a bit more complicated. As of HEAD, a
rewound scan where some of the SRFs have to change due to parameter
inputs, but others don't, will only re-compute the ones with parameter
changes. I don't think it's more confusing to rescan the entire input,
rather than parts of it in that case. If the entire input is re-scanned, the
planner knows how to materialize the entire scan output.
I think it'd be pretty annoying to continue to always materialize
ValuePerCall SRFs just to support that type of re-scan behaviour. We
don't really, to my knowledge, flag well whether rescans are required
atm, so we can't even easily do it conditionally.
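For illustration, the kind of case where the planner would have to fall
back to a Materialize node (a sketch; any table-free SRF works here):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM generate_series(1, 5) g;
    FETCH FORWARD 3 FROM c;
    -- without native backward-scan support in the function scan, the
    -- planner has to put a Materialize node on top to satisfy this:
    FETCH BACKWARD 2 FROM c;
    COMMIT;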
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
Attached is a significantly updated patch series (see the mail one up
for details about what this is, I don't want to quote it in its
entirety).
I've finally cleared my plate enough to start reviewing this patchset.
0001-Add-some-more-targetlist-srf-tests.patch
Add some tests.
I think you should go ahead and push this tests-adding patch now, as it
adds documentation of the current behavior that is a good thing to have
independently of what the rest of the patchset does. Also, I'm worried
that some of the GROUP BY tests might have machine-dependent results
(if they are implemented by hashing) so it would be good to get in a few
buildfarm cycles and let that settle out before we start changing the
answers.
I do have some trivial nitpicks about 0001:
# rules cannot run concurrently with any test that creates a view
-test: rules psql_crosstab amutils
+test: rules psql_crosstab amutils tsrf
Although tsrf.sql doesn't currently need to create any views, it doesn't
seem like a great idea to assume that it never will. Maybe add this
after misc_functions in the previous parallel group, instead?
+-- it's weird to GROUP BYs that increase the number of results
"it's weird to have ..."
+-- nonsensically that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
"nonsense that seems to be allowed..."
+-- SRFs are now allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
s/now/not/, apparently
More to come later, but 0001 is pushable with these fixes.
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
0002-Shore-up-some-weird-corner-cases-for-targetlist-SRFs.patch
Forbid UPDATE ... SET foo = SRF() and ORDER BY / GROUP BY containing
SRFs that would change the number of returned rows. Without the
latter e.g. SELECT 1 ORDER BY generate_series(1,10); returns 10 rows.
I'm on board with disallowing SRFs in UPDATE, because it produces
underdetermined and unspecified results; but the other restriction
seems 100% arbitrary. There is no semantic difference between
SELECT a, b FROM ... ORDER BY srf();
and
SELECT a, b, srf() FROM ... ORDER BY 3;
except that in the first case the ordering column doesn't get returned to
the client. I do not see why that's so awful that we should make it fail
after twenty years of allowing it. And I certainly don't see why there
would be an implementation reason why we couldn't support it anymore
if we can still do the second one.
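To make that concrete (historical behavior; table "tab" with columns a, b
is hypothetical):

    -- both emit ten rows per input row of tab; the only difference is
    -- whether the SRF's output column reaches the client
    SELECT a, b FROM tab ORDER BY generate_series(1, 10);
    SELECT a, b, generate_series(1, 10) FROM tab ORDER BY 3;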
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.
Personally I'd put this one later, as it's a performance optimization not
part of the core patch IMO --- or is there something in the later ones
that directly depends on it? Anyway I'm setting it aside for now.
0004-Allow-ROWS-FROM-to-return-functions-as-single-record.patch
To allow transforming SELECT record_srf(); nodeFunctionscan.c needs to
learn to return the result as a record. I chose
ROWS FROM (record_srf() AS ()) as the syntax for that. It doesn't
necessarily have to be SQL exposed, but it does make testing easier.
The proposed commit message is wrong, as it claims aclexplode()
demonstrates the problem which it doesn't --- we get the column set
from the function's declared OUT parameters.
I can't say that I like the proposed syntax much. What about leaving out
any syntax changes, and simply saying that "if the function returns record
and hasn't got OUT parameters, then return its result as an unexpanded
record"? That might not take much more than removing the error check ;-)
A possible objection is that then you could not get the no-expansion
behavior for functions that return named composite types or have OUT
parameters that effectively give them known composite types. From a
semantic standpoint we could fix that by saying "just cast the result to
record", ie ROWS FROM (aclexplode('whatever')::record) would give the
no-expansion behavior. I'm not sure if there might be any implementation
glitches in the way of doing it like that. Also there seems to be some
syntactic issue with it ... actually, the current behavior there is just
weird:
regression=# select * from rows from (aclexplode('{=r/postgres}')::record);
ERROR: syntax error at or near "::"
LINE 1: ...lect * from rows from (aclexplode('{=r/postgres}')::record);
^
regression=# select * from rows from (cast(aclexplode('{=r/postgres}') as record));
grantor | grantee | privilege_type | is_grantable
---------+---------+----------------+--------------
10 | 0 | SELECT | f
(1 row)
I was not aware that there was *anyplace* in the grammar where :: and CAST
behaved differently, and I'm not very pleased to find this one.
I haven't looked at the code, as there probably isn't much point in
reviewing in any detail till we choose the syntax.
regards, tom lane
On 2016-09-12 10:19:14 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
0001-Add-some-more-targetlist-srf-tests.patch
Add some tests.

I think you should go ahead and push this tests-adding patch now, as it
adds documentation of the current behavior that is a good thing to have
independently of what the rest of the patchset does. Also, I'm worried
that some of the GROUP BY tests might have machine-dependent results
(if they are implemented by hashing) so it would be good to get in a few
buildfarm cycles and let that settle out before we start changing the
answers.
Generally a sound plan - I started to noticeably expand it though,
there's some important edge cases it didn't cover.
Although tsrf.sql doesn't currently need to create any views, it doesn't
seem like a great idea to assume that it never will. Maybe add this
after misc_functions in the previous parallel group, instead?
WFM
+-- it's weird to GROUP BYs that increase the number of results
"it's weird to have ..."
+-- nonsensically that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
"nonsense that seems to be allowed..."
+-- SRFs are now allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
s/now/not/, apparently
Err, yes. Will update.
Greetings,
Andres Freund
On 2016-09-12 11:29:37 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
0002-Shore-up-some-weird-corner-cases-for-targetlist-SRFs.patch
Forbid UPDATE ... SET foo = SRF() and ORDER BY / GROUP BY containing
SRFs that would change the number of returned rows. Without the
latter e.g. SELECT 1 ORDER BY generate_series(1,10); returns 10 rows.

I'm on board with disallowing SRFs in UPDATE, because it produces
underdetermined and unspecified results; but the other restriction
seems 100% arbitrary. There is no semantic difference between
SELECT a, b FROM ... ORDER BY srf();
and
SELECT a, b, srf() FROM ... ORDER BY 3;
except that in the first case the ordering column doesn't get returned to
the client. I do not see why that's so awful that we should make it fail
after twenty years of allowing it.
I do think it's awful that an ORDER BY / GROUP BY changes the number of
rows processed. This should never have been allowed. You just need a
little typo somewhere that makes the targetlist entry not match the
ORDER/GROUP BY anymore and your results will differ in weird ways -
rather hard to debug in the GROUP BY case.
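Concretely, a near-miss typo silently changes the row count under the
historical least-common-multiple rule (sketch):

    -- intended: the ORDER BY entry matches the tlist SRF, ten rows out
    SELECT generate_series(1, 10) AS g ORDER BY generate_series(1, 10);
    -- one mistyped bound and these are *different* SRFs, so the row
    -- count changes under the least-common-multiple rule - no warning
    SELECT generate_series(1, 10) AS g ORDER BY generate_series(1, 11);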
And I certainly don't see why there
would be an implementation reason why we couldn't support it anymore
if we can still do the second one.
There's nothing requiring this here, indeed.
Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 11:29:37 -0400, Tom Lane wrote:
I'm on board with disallowing SRFs in UPDATE, because it produces
underdetermined and unspecified results; but the other restriction
seems 100% arbitrary. There is no semantic difference between
SELECT a, b FROM ... ORDER BY srf();
and
SELECT a, b, srf() FROM ... ORDER BY 3;
except that in the first case the ordering column doesn't get returned to
the client. I do not see why that's so awful that we should make it fail
after twenty years of allowing it.
I do think it's awful that an ORDER BY / GROUP BY changes the number of
rows processed. This should never have been allowed.
Meh. That's just an opinion, and it's a bit late to be making such
changes. I think the general consensus of the previous discussion was
that we would preserve existing tSRF behavior as far as it was reasonably
practical to do so, with the exception that there's wide agreement that
the least-common-multiple rule for number of rows emitted is bad. I do
not think you're going to get anywhere near that level of agreement that
a SRF appearing only in ORDER BY is bad.
regards, tom lane
Hi,
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.

Personally I'd put this one later, as it's a performance optimization not
part of the core patch IMO --- or is there something in the later ones
that directly depends on it? Anyway I'm setting it aside for now.
Not strongly dependent. But the ROWS FROM stuff touches a lot of the
same code. And I wanted to implement this before ripping out the current
implementation, to allow for meaningful performance tests.
0004-Allow-ROWS-FROM-to-return-functions-as-single-record.patch
To allow transforming SELECT record_srf(); nodeFunctionscan.c needs to
learn to return the result as a record. I chose
ROWS FROM (record_srf() AS ()) as the syntax for that. It doesn't
necessarily have to be SQL exposed, but it does make testing easier.

The proposed commit message is wrong, as it claims aclexplode()
demonstrates the problem which it doesn't --- we get the column set
from the function's declared OUT parameters.
Oops. I'd probably tested with some self-defined function and was
looking for an example...
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.
What about leaving out
any syntax changes, and simply saying that "if the function returns record
and hasn't got OUT parameters, then return its result as an unexpanded
record"? That might not take much more than removing the error check ;-)
Having the ability to do this for non-record returning functions turned
out to be quite convenient. Otherwise we need to create ROW()
expressions for composite-returning functions, which is neither cheap
nor fun...
As you say, that might be doable with some form of casting, but,
ugh. I'm actually kind of surprised that even works. The function call
that nodeFunctionscan.c sees isn't a function call, much less a
set-returning one. Which means this hits the direct_function_call == false
path in ExecMakeTableFunctionResult(). If it didn't, we'd have hit
/* We don't allow sets in the arguments of the table function */
if (argDone != ExprSingleResult)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("set-valued function called in context that cannot accept a set")));
therein. Which you'd hit e.g. with
SELECT * FROM ROWS FROM (int4mul(generate_series(1, 2), 3));
Thus this actually relies on the SRF code path in execQual.c;
the thing we want to rip out...
A possible objection is that then you could not get the no-expansion
behavior for functions that return named composite types or have OUT
parameters that effectively give them known composite types. From a
semantic standpoint we could fix that by saying "just cast the result to
record", ie ROWS FROM (aclexplode('whatever')::record) would give the
no-expansion behavior. I'm not sure if there might be any implementation
glitches in the way of doing it like that.
See above. Personally I think just using explicit syntax is cleaner,
but I don't feel like arguing about this a whole lot.
- Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.
I just remembered that we allow zero-column composite types, which
makes this proposal formally ambiguous. So we really need a different
syntax. I'm not especially in love with the cast-to-record idea, but
it does dodge that problem.
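That is, since something like (type name invented for illustration):

    CREATE TYPE empty_record AS ();  -- a zero-column composite type: legal

is allowed, ROWS FROM (f() AS ()) cannot be distinguished from a
coldeflist that declares zero output columns.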
Stepping back a little bit ... way back at the start of this thread
you muttered about possibly implementing tSRFs as a special pipeline
node type, a la Result. That would have the same benefits in terms
of being able to take SRF support out of the main execQual code paths.
I, and I think some other people, felt that the LATERAL approach would
be a cleaner answer --- but now that we're seeing some of the messy
details required to make the LATERAL way work, I'm beginning to have
second thoughts. I wonder if we should do at least a POC implementation
of the other way to get a better fix on which way is really cleaner.
Also, one of the points that's come up repeatedly in these discussions
is the way that the parser's implementation of *-expansion sucks for
composite-returning functions. That is, if you write
SELECT (foo(...)).* FROM ...
you get
SELECT (foo(...)).col1, (foo(...)).col2, ... FROM ...
so that the function is executed N times not once. We had discussed
fixing that for setof-composite-returning functions by folding multiple
identical SRF calls into a single LATERAL entry, but that doesn't
directly fix the problem for non-SRF composite functions. Also the
whole idea of having the planner undo the parser's damage in this way
is kinda grotty, not least because we can't safely combine multiple
calls of volatile functions, so it only works for not-volatile ones.
That line of thought leads to the idea that if we could have the *parser*
do the transformation to LATERAL form, we could avoid breaking a
composite-returning function call into multiple copies in the first place.
I had said that I didn't think we wanted this transformation done in the
parser, but maybe this is a sufficient reason to do so.
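In effect the parser would produce what users already write by hand today
to get single evaluation (hand-written equivalent; foo and tab are
hypothetical):

    -- one call of foo() per row of tab, columns projected afterwards:
    SELECT f.* FROM tab, LATERAL foo(tab.x) AS f;
    -- versus the multiply-evaluated expansion of
    -- SELECT (foo(tab.x)).* FROM tab;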
If we think in terms of pipeline evaluation nodes rather than LATERAL,
we could implement the above by having the parser emit multiple levels
of SELECT some-expressions FROM (SELECT some-expressions FROM ...),
with SRFs being rigidly separated into their own evaluation levels.
I'm not certain that any of these ideas are worth the electrons they're
written on, but I do think we ought to consider alternatives and not
just push forward with committing a first-draft implementation.
regards, tom lane
Hi,
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.

I just remembered that we allow zero-column composite types, which
makes this proposal formally ambiguous.
Well, we errored out in the grammar for AS () so far... We might want to
fix that independently.
Stepping back a little bit ... way back at the start of this thread
you muttered about possibly implementing tSRFs as a special pipeline
node type, a la Result. That would have the same benefits in terms
of being able to take SRF support out of the main execQual code paths.
I, and I think some other people, felt that the LATERAL approach would
be a cleaner answer --- but now that we're seeing some of the messy
details required to make the LATERAL way work, I'm beginning to have
second thoughts. I wonder if we should do at least a POC implementation
of the other way to get a better fix on which way is really cleaner.
I'm not particularly in love with restarting with a different approach. I
think fixing the ROWS FROM expansion is the only really painful bit, and
that seems like it's independently beneficial to allow for suppression
of expansion there. I'm working on this to finally be able to get
some stuff from the "faster executor" thread into committable
shape... The other stuff like making SELECT * FROM func; not
materialize also seems independently useful; it's something people have
complained about repeatedly over the years.
I actually had started to work on a Result style approach, and I don't
think it turned out that nice. But I didn't complete it, so I might just
be wrong.
Also, one of the points that's come up repeatedly in these discussions
is the way that the parser's implementation of *-expansion sucks for
composite-returning functions. That is, if you write
SELECT (foo(...)).* FROM ...
you get
SELECT (foo(...)).col1, (foo(...)).col2, ... FROM ...
so that the function is executed N times not once. We had discussed
fixing that for setof-composite-returning functions by folding multiple
identical SRF calls into a single LATERAL entry, but that doesn't
directly fix the problem for non-SRF composite functions. Also the
whole idea of having the planner undo the parser's damage in this way
is kinda grotty, not least because we can't safely combine multiple
calls of volatile functions, so it only works for not-volatile ones.
That line of thought leads to the idea that if we could have the *parser*
do the transformation to LATERAL form, we could avoid breaking a
composite-returning function call into multiple copies in the first place.
I had said that I didn't think we wanted this transformation done in the
parser, but maybe this is a sufficient reason to do so.
I still don't like doing all this in the parser. It'd just trigger
complaints of users that we're changing their query structure, and we'd
have to solve a good bit of the same problems we have to solve here.
If we really want to reduce the expansion cost - and to me that's a
largely independent issue from this - it seems better to have the parser
emit some structure that's easily recognized at plan time, rather than
have the parser do all the work.
Andres
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.

I just remembered that we allow zero-column composite types, which
makes this proposal formally ambiguous. So we really need a different
syntax. I'm not especially in love with the cast-to-record idea, but
it does dodge that problem.
I kind of like ROWS FROM (... AS VALUE), that seems to confer the
meaning quite well. As VALUE isn't a reserved keyword, that'd afaik only
really work inside ROWS FROM() where AS is required.
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.
I just remembered that we allow zero-column composite types, which
makes this proposal formally ambiguous. So we really need a different
syntax. I'm not especially in love with the cast-to-record idea, but
it does dodge that problem.
I kind of like ROWS FROM (... AS VALUE), that seems to confer the
meaning quite well. As VALUE isn't a reserved keyword, that'd afaik only
really work inside ROWS FROM() where AS is required.
Hm, wouldn't ... AS RECORD convey the meaning better?
(Although once you look at it that way, it's just a cast spelled in
an idiosyncratic fashion.)
regards, tom lane
On 2016-09-12 13:48:05 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 12:10:01 -0400, Tom Lane wrote:
I can't say that I like the proposed syntax much.
Me neither. But I haven't really found a better approach. It seems
kinda consistent to have ROWS FROM (... AS ()) change the picked out
columns to 0, and just return the whole thing.

I just remembered that we allow zero-column composite types, which
makes this proposal formally ambiguous. So we really need a different
syntax. I'm not especially in love with the cast-to-record idea, but
it does dodge that problem.

I kind of like ROWS FROM (... AS VALUE), that seems to confer the
meaning quite well. As VALUE isn't a reserved keyword, that'd afaik only
really work inside ROWS FROM() where AS is required.

Hm, wouldn't ... AS RECORD convey the meaning better?
I was kind of envisioning AS VALUE to work for composite types without
removing their original type (possibly even for TYPEFUNC_SCALAR
ones). That, for one, makes the SRF to ROWS FROM conversion easier, and
for another seems generally useful. Composites keeping their type with
AS RECORD seems a bit confusing. There's also the issue that VALUE is
already a keyword while RECORD is not...
(Although once you look at it that way, it's just a cast spelled in
an idiosyncratic fashion.)
Well, not quite, by virtue of keeping the original type around. After a
record cast you likely couldn't directly access the columns anymore,
even if it were a known composite type, right?
Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:48:05 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I kind of like ROWS FROM (... AS VALUE), that seems to confer the
meaning quite well. As VALUE isn't a reserved keyword, that'd afaik only
really work inside ROWS FROM() where AS is required.
Hm, wouldn't ... AS RECORD convey the meaning better?
I was kind of envisioning AS VALUE to work for composite types without
removing their original type (possibly even for TYPEFUNC_SCALAR
ones).
Maybe. A problem with any of these proposals though is that there's no
place to put a column alias. Yeah, you can stick it on outside the ROWS
FROM, but it seems a bit non-orthogonal to have to do it that way when
you can do it inside the ROWS FROM when adding a coldeflist.
Maybe we could do it like
ROWS FROM (func(...) AS alias)
where the difference from a coldeflist is that there's no parenthesized
list of names/types. It's a bit weird that adding an alias makes for
a semantic not just naming difference, but it's no weirder than these
other ideas.
(Although once you look at it that way, it's just a cast spelled in
an idiosyncratic fashion.)
Well, not quite, by virtue of keeping the original type around. After a
record cast you likely couldn't directly access the columns anymore,
even if it were a known composite type, right?
Same is true for any of these syntax proposals, no? So far as the rest of
the query is concerned, the function output is going to be an anonymous
record type.
regards, tom lane
On 2016-09-12 14:05:33 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:48:05 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I kind of like ROWS FROM (... AS VALUE), that seems to confer the
meaning quite well. As VALUE isn't a reserved keyword, that'd afaik only
really work inside ROWS FROM() where AS is required.

Hm, wouldn't ... AS RECORD convey the meaning better?
I was kind of envisioning AS VALUE to work for composite types without
removing their original type (possibly even for TYPEFUNC_SCALAR
ones).

Maybe. A problem with any of these proposals though is that there's no
place to put a column alias. Yeah, you can stick it on outside the ROWS
FROM, but it seems a bit non-orthogonal to have to do it that way when
you can do it inside the ROWS FROM when adding a coldeflist.
I don't necessarily see that as a problem. The coldeflists inside ROWS
FROM() already don't allow assigning aliases for !record/composite
types, and they require specifying types.
(Although once you look at it that way, it's just a cast spelled in
an idiosyncratic fashion.)

Well, not quite, by virtue of keeping the original type around. After a
record cast you likely couldn't directly access the columns anymore,
even if it were a known composite type, right?

Same is true for any of these syntax proposals, no? So far as the rest of
the query is concerned, the function output is going to be an anonymous
record type.
Not for composite types, no. As implemented ROWS FROM (.. AS()) does:
CREATE OR REPLACE FUNCTION get_pg_class() RETURNS SETOF pg_class LANGUAGE sql AS $$SELECT * FROM pg_class;$$;
SELECT DISTINCT pg_typeof(f) FROM ROWS FROM (get_pg_class() AS ()) f;
┌───────────┐
│ pg_typeof │
├───────────┤
│ pg_class │
└───────────┘
(1 row)
SELECT (f).relname FROM ROWS FROM (get_pg_class() AS ()) f LIMIT 1;
┌────────────────┐
│ relname │
├────────────────┤
│ pg_toast_77994 │
└────────────────┘
(1 row)
which seems sensible to me.
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Stepping back a little bit ... way back at the start of this thread
you muttered about possibly implementing tSRFs as a special pipeline
node type, a la Result. That would have the same benefits in terms
of being able to take SRF support out of the main execQual code paths.
I, and I think some other people, felt that the LATERAL approach would
be a cleaner answer --- but now that we're seeing some of the messy
details required to make the LATERAL way work, I'm beginning to have
second thoughts. I wonder if we should do at least a POC implementation
of the other way to get a better fix on which way is really cleaner.
I'm not particularly in love with restarting with a different approach. I
think fixing the ROWS FROM expansion is the only really painful bit, and
that seems like it's independently beneficial to allow for suppression
of expansion there.
Um, I dunno. You've added half a thousand lines of not-highly-readable-
nor-extensively-commented code to the planner; that certainly reaches *my*
threshold of pain. I'm also growing rather concerned that the LATERAL
approach is going to lock us into some unremovable incompatibilities
no matter how much we might regret that later (and in view of how quickly
I got my wrist slapped in <001201d18524$f84c4580$e8e4d080$@pcorp.us>,
I am afraid there may be more pushback awaiting us than we think).
If we go with a Result-like tSRF evaluation node, then whether we change
semantics or not becomes mostly a matter of what that node does. It could
become basically a wrapper around the existing ExecTargetList() logic if
we needed to provide backwards-compatible behavior.
I actually had started to work on a Result style approach, and I don't
think it turned out that nice. But I didn't complete it, so I might just
be wrong.
It's also possible that it's easier now in the post-pathification code
base than it was before. After contemplating my navel for awhile, it
seems like the core of the planner code for a Result-like approach would
be something very close to make_group_input_target(), plus a thing like
pull_var_clause() that extracts SRFs rather than Vars. Those two
functions, including their lengthy comments, weigh in at ~250 lines.
Admittedly there'd be some boilerplate on top of that, if we invent a
new plan node type rather than extending Result, but not all that much.
If you like, I'll have a go at drafting a patch in that style. Do you
happen to still have the executor side of what you did, so I don't have
to reinvent that?
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 2016-09-12 17:36:07 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 13:26:20 -0400, Tom Lane wrote:
Stepping back a little bit ... way back at the start of this thread
you muttered about possibly implementing tSRFs as a special pipeline
node type, a la Result. That would have the same benefits in terms
of being able to take SRF support out of the main execQual code paths.
I, and I think some other people, felt that the LATERAL approach would
be a cleaner answer --- but now that we're seeing some of the messy
details required to make the LATERAL way work, I'm beginning to have
second thoughts. I wonder if we should do at least a POC implementation
of the other way to get a better fix on which way is really cleaner.
I'm not particularly in love with restarting with a different approach. I
think fixing the ROWS FROM expansion is the only really painful bit, and
that seems like it's independently beneficial to allow for suppression
of expansion there.
Um, I dunno. You've added half a thousand lines of not-highly-readable-
nor-extensively-commented code to the planner; that certainly reaches *my*
threshold of pain.
Well, I certainly plan (and have started) to make that code easier to
understand, and better commented. It also removes ~1400 LOC of
not-easy-to-understand code... A good chunk of that would also be removed with
a Result style approach, but far from all.
I'm also growing rather concerned that the LATERAL
approach is going to lock us into some unremovable incompatibilities
no matter how much we might regret that later (and in view of how quickly
I got my wrist slapped in <001201d18524$f84c4580$e8e4d080$@pcorp.us>,
I am afraid there may be more pushback awaiting us than we think).
I don't think it'd be all that hard to add something like the current
LCM behaviour into nodeFunctionscan.c if we really wanted. But I think
it'll be better to just say no here.
If we go with a Result-like tSRF evaluation node, then whether we change
semantics or not becomes mostly a matter of what that node does. It could
become basically a wrapper around the existing ExecTargetList() logic if
we needed to provide backwards-compatible behavior.
As you previously objected: If we keep ExecTargetList() style logic, we
need to keep most of execQual.c's handling of ExprMultipleResult et al,
and that's going to prevent the stuff I want to work on. Because
handling ExprMultipleResult in all these places is a serious issue
WRT making expression evaluation faster. If we find a good answer to
that, I'd be more open to pursuing this approach.
I actually had started to work on a Result style approach, and I don't
think it turned out that nice. But I didn't complete it, so I might just
be wrong.
It's also possible that it's easier now in the post-pathification code
base than it was before. After contemplating my navel for awhile, it
seems like the core of the planner code for a Result-like approach would
be something very close to make_group_input_target(), plus a thing like
pull_var_clause() that extracts SRFs rather than Vars. Those two
functions, including their lengthy comments, weigh in at ~250 lines.
Admittedly there'd be some boilerplate on top of that, if we invent a
new plan node type rather than extending Result, but not all that much.
That's pretty much what I did (or rather started to do), yes. I had
something that was called from grouping_planner() that added the new
node on top of the original set of paths, after grouping or after
distinct, depending on where SRFs were referenced. The biggest benefit
I saw with that is that there's no need to push things into a subquery,
and that the ordering is a lot more explicit.
If you like, I'll have a go at drafting a patch in that style. Do you
happen to still have the executor side of what you did, so I don't have
to reinvent that?
The executor side is actually what I found harder here. Either we end up
keeping most of ExecQual's handling, or we reinvent a good deal of
separate logic.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 17:36:07 -0400, Tom Lane wrote:
Um, I dunno. You've added half a thousand lines of not-highly-readable-
nor-extensively-commented code to the planner; that certainly reaches *my*
threshold of pain.
Well, I certainly plan (and have started) to make that code easier to
understand, and better commented. It also removes ~1400 LOC of
not-easy-to-understand code... A good chunk of that would also be removed with
a Result style approach, but far from all.
Hm, I've not studied 0006 yet, but surely that's executor code that would
go away with *any* approach that takes away the need for generic execQual
to support SRFs? I don't see that it counts while discussing which way
we take to reach that point.
I'm also growing rather concerned that the LATERAL
approach is going to lock us into some unremovable incompatibilities
no matter how much we might regret that later (and in view of how quickly
I got my wrist slapped in <001201d18524$f84c4580$e8e4d080$@pcorp.us>,
I am afraid there may be more pushback awaiting us than we think).
I don't think it'd be all that hard to add something like the current
LCM behaviour into nodeFunctionscan.c if we really wanted. But I think
it'll be better to just say no here.
"Just say no" soon translates to memes about "disasters like the removal
of implicit casting" (which in fact is not what 8.3 did, but I've grown
pretty damn tired of the amount of bitching that that cleanup did and
still does provoke). In any case, it feels like the LATERAL approach is
locking us into more and subtler incompatibilities than just that one.
If we go with a Result-like tSRF evaluation node, then whether we change
semantics or not becomes mostly a matter of what that node does. It could
become basically a wrapper around the existing ExecTargetList() logic if
we needed to provide backwards-compatible behavior.
As you previously objected: If we keep ExecTargetList() style logic, we
need to keep most of execQual.c's handling of ExprMultipleResult et al,
and that's going to prevent the stuff I want to work on.
You're inventing objections. It won't require that any more than the
LATERAL approach does; it's basically the same code as whatever
nodeFunctionscan is going to do, but packaged as a pipeline eval node
rather than a base scan node. Or to be clearer: what I'm suggesting it
would contain is ExecTargetList's logic about restarting individual SRFs.
That wouldn't propagate into execQual because we would only allow SRFs at
the top level of the node's tlist, just like nodeFunctionscan does.
ExecMakeTableFunctionResult doesn't require the generic execQual code
to support SRFs today, and it still wouldn't.
In larger terms: the whole point here is to fish SRF calls up to the top
level of the tlist of whatever node is executing them, where they can be
special-cased by that node. Their SRF-free argument expressions would be
evaluated by generic execQual. AFAICS this goes through in the same way
from the executor's viewpoint whether we use LATERAL as the query
restructuring method or a SRF-capable variant of Result. But it's now
looking to me like the latter would be a lot simpler from the point of
view of planner complexity, and in particular from the point of view of
proving correctness (equivalence of the query transformation).
regards, tom lane
Hi,
On 2016-09-12 18:35:03 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I don't think it'd be all that hard to add something like the current
LCM behaviour into nodeFunctionscan.c if we really wanted. But I think
it'll be better to just say no here.
"Just say no" soon translates to memes about "disasters like the removal
of implicit casting" (which in fact is not what 8.3 did, but I've grown
pretty damn tired of the amount of bitching that that cleanup did and
still does provoke). In any case, it feels like the LATERAL approach is
locking us into more and subtler incompatibilities than just that one.
I can't see those being equivalent impact-wise. Note that the person
bitching most loudly about the "implicit casting" thing (Merlin Moncure)
is voting to remove the LCM behaviour ;)
I think we'll continue to get more bitching about the confusing LCM
behaviour than complaints the backward compat break would generate.
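For readers not steeped in the old semantics, the "LCM behaviour" under discussion can be modeled with a toy Python sketch (illustrative only, not PostgreSQL code): with several SRFs in one targetlist, each SRF restarts whenever it is exhausted, so evaluation only stops when all of them complete at once, after LCM(n1, n2, ...) rows. The ROWS FROM () semantics instead advance all functions in lockstep, padding exhausted ones with NULL:

```python
from math import gcd

def lcm_rows(*srfs):
    # Historical targetlist-SRF semantics: each SRF restarts when it is
    # exhausted; evaluation stops when all SRFs finish simultaneously,
    # i.e. after LCM(len_1, ..., len_n) rows.
    total = 1
    for s in srfs:
        total = total * len(s) // gcd(total, len(s))
    return [tuple(s[i % len(s)] for s in srfs) for i in range(total)]

def lockstep_rows(*srfs):
    # ROWS FROM () semantics: advance all functions in lockstep and pad
    # exhausted ones with NULL (None), giving max(len_1, ..., len_n) rows.
    longest = max(len(s) for s in srfs)
    return [tuple(s[i] if i < len(s) else None for s in srfs)
            for i in range(longest)]

# SELECT generate_series(1,2), generate_series(1,4) under the old rules:
print(lcm_rows([1, 2], [1, 2, 3, 4]))  # [(1, 1), (2, 2), (1, 3), (2, 4)]
```

The first call reproduces the "parallel iteration, different number of rows" case from the regression tests added later in this thread; with lengths 2 and 3 the same rule would yield 6 rows, which is the behavior many find confusing.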
If we go with a Result-like tSRF evaluation node, then whether we change
semantics or not becomes mostly a matter of what that node does. It could
become basically a wrapper around the existing ExecTargetList() logic if
we needed to provide backwards-compatible behavior.
As you previously objected: If we keep ExecTargetList() style logic, we
need to keep most of execQual.c's handling of ExprMultipleResult et al,
and that's going to prevent the stuff I want to work on.
You're inventing objections. It won't require that any more than the
Heh, it's actually your own objection ;)
http://archives.postgresql.org/message-id/32673.1464023429%40sss.pgh.pa.us
It won't require that any more than the
LATERAL approach does; it's basically the same code as whatever
nodeFunctionscan is going to do, but packaged as a pipeline eval node
rather than a base scan node. Or to be clearer: what I'm suggesting it
would contain is ExecTargetList's logic about restarting individual SRFs.
That wouldn't propagate into execQual because we would only allow SRFs at
the top level of the node's tlist, just like nodeFunctionscan does.
ExecMakeTableFunctionResult doesn't require the generic execQual code
to support SRFs today, and it still wouldn't.
(it accidentally does (see your cast example), but that's beside
your point)
That might work. It gets somewhat nasty though, because you also need to
handle SRF arguments to SRFs. And those again can contain nearly
arbitrary expressions in between. With the ROWS FROM approach that can be
fairly easily handled via LATERAL. I guess what we could do here is to
use one pipeline node to evaluate all the argument SRFs, and then
another for the outer expression. Where the outer node would evaluate
the SRF arguments using normal expression evaluation, with the inner SRF
output replaced by a var.
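The two-pipeline-node idea can be sketched as a toy Python model (generator functions standing in for executor nodes; names are illustrative, not PostgreSQL's): the inner node runs the argument SRF and emits each value as an ordinary column, and the outer node evaluates the outer SRF with that column as a plain var. For generate_series(1, generate_series(1, 3)) this reproduces the 1; 1,2; 1,2,3 expansion from the regression tests:

```python
def generate_series(start, stop):
    # Toy stand-in for the SQL generate_series SRF.
    yield from range(start, stop + 1)

def inner_srf_node():
    # First pipeline node: evaluate the argument SRF and emit each value
    # as an ordinary column for the node above.
    yield from generate_series(1, 3)

def outer_srf_node(child):
    # Second pipeline node: the outer SRF's argument is now just a var
    # reference, so it can go through normal expression evaluation.
    for g in child:
        yield from generate_series(1, g)

rows = list(outer_srf_node(inner_srf_node()))  # [1, 1, 2, 1, 2, 3]
```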
I wonder how much duplication we'd end up between nodeFunctionscan.c and
nodeSRF (or whatever). We'd need the latter node to support ValuePerCall
in a non-materializing fashion as well. Could we combine them somehow?
In larger terms: the whole point here is to fish SRF calls up to the
top level of the tlist of whatever node is executing them, where they
can be special-cased by that node. Their SRF-free argument
expressions would be evaluated by generic execQual. AFAICS this goes
through in the same way from the executor's viewpoint whether we use
LATERAL as the query restructuring method or a SRF-capable variant of
Result. But it's now looking to me like the latter would be a lot
simpler from the point of view of planner complexity, and in
particular from the point of view of proving correctness (equivalence
of the query transformation).
It's nicer not to introduce subqueries, from two angles from my POV:
1) EXPLAINs will look more like the original query
2) The evaluation order of the non-SRF part of the query, and of the query
itself, will be clearer. That's the thing I'm least happy about with the
LATERAL approach.
I don't think the code for adding these intermediate SRF evaluating
nodes will be noticeably simpler than what's in my prototype. We'll
still have to do the whole conversion recursively, and we'll still need
the complexity of figuring out whether to put those SRF evaluations
before/after GROUP BY, ORDER BY, DISTINCT ON, and window functions.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 18:35:03 -0400, Tom Lane wrote:
You're inventing objections.
Heh, it's actually your own objection ;)
http://archives.postgresql.org/message-id/32673.1464023429%40sss.pgh.pa.us
I'm changing my opinion in the light of unfavorable evidence. Is that
wrong?
It won't require that any more than the
LATERAL approach does; it's basically the same code as whatever
nodeFunctionscan is going to do, but packaged as a pipeline eval node
rather than a base scan node.
That might work. It gets somewhat nasty though, because you also need to
handle SRF arguments to SRFs. And those again can contain nearly
arbitrary expressions in between. With the ROWS FROM approach that can be
fairly easily handled via LATERAL. I guess what we could do here is to
use one pipeline node to evaluate all the argument SRFs, and then
another for the outer expression. Where the outer node would evaluate
the SRF arguments using normal expression evaluation, with the inner SRF
output replaced by a var.
Right. Nested SRFs translate to multiple ROWS-FROM RTEs with lateral
references in the one approach, and nested Result-thingies in the other.
It's pretty much the same thing mutatis mutandis, but I think it will
likely be a lot easier to get there from here with the Result-based
approach --- for example, we don't have to worry about forcing lateral
join order, and the ordering constraints vis-a-vis GROUP BY etc won't take
any great effort either. Anyway I think it is worth trying.
I wonder how much duplication we'd end up between nodeFunctionscan.c and
nodeSRF (or whatever). We'd need the latter node to support ValuePerCall
in a non-materializing fashion as well. Could we combine them somehow?
Yeah, I was wondering that too. I doubt that we want to make one node
type do both things --- the fact that Result comes in two flavors with
different semantics (with or without an input node) isn't very nice IMO,
and this would be almost that identical case. But maybe they could share
some code at the level of ExecMakeTableFunctionResult. (I've not looked
at your executor changes yet, not sure how much of that still exists.)
I don't think the code for adding these intermediate SRF evaluating
nodes will be noticeably simpler than what's in my prototype. We'll
still have to do the whole conversion recursively, and we'll still need
the complexity of figuring out whether to put those SRF evaluations
before/after GROUP BY, ORDER BY, DISTINCT ON, and window functions.
I think it will slot into the code that's already there rather more
easily than what you've done, because we already *have* code that makes
decisions in that form. We just need to teach it to break down what
it now thinks of as a single projection step into N+1 steps when there
are N levels of nested SRF present. Anyway I'll draft a prototype and
then we can compare.
regards, tom lane
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
You're inventing objections.
Heh, it's actually your own objection ;)
http://archives.postgresql.org/message-id/32673.1464023429%40sss.pgh.pa.us
I'm changing my opinion in the light of unfavorable evidence. Is that
wrong?
Nah, not at all. I was just referring to "inventing".
I wonder how much duplication we'd end up between nodeFunctionscan.c and
nodeSRF (or whatever). We'd need the latter node to support ValuePerCall
in a non-materializing fashion as well. Could we combine them somehow?
Yeah, I was wondering that too. I doubt that we want to make one node
type do both things --- the fact that Result comes in two flavors with
different semantics (with or without an input node) isn't very nice IMO,
and this would be almost that identical case.
It might not, agreed. That IMO has to be taken into account when calculating
the code costs - although the executor stuff usually is pretty boring in
comparison.
But maybe they could share
some code at the level of ExecMakeTableFunctionResult. (I've not looked
at your executor changes yet, not sure how much of that still exists.)
I'd split ExecMakeTableFunctionResult up, to allow for a percall mode,
and that seems like it'd still be needed to avoid performance
regressions.
It'd be an awfully large amount of code those two nodes would
duplicate. Excepting different rescan logic and ORDINALITY support,
nearly all the nodeFunctionscan.c code would be needed in both nodes.
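The materialize-vs-ValuePerCall distinction being discussed can be sketched as a toy Python model (illustrative names, not PostgreSQL's API): in materialize mode the whole function result is collected into a buffer, PostgreSQL's tuplestore, before the first row comes back, while in value-per-call mode each fetch advances the function by exactly one row:

```python
calls = []

def srf():
    # Toy SRF that records each row it computes.
    for i in range(1, 4):
        calls.append(i)
        yield i

def materialize(fn):
    # Materialize mode: run the function to completion into a buffer
    # (the tuplestore), then hand rows back out of that buffer.
    return iter(list(fn()))

def value_per_call(fn):
    # ValuePerCall mode: each fetch advances the function by one row;
    # nothing is buffered.
    return fn()

it = value_per_call(srf)
next(it)
print(calls)  # [1] -- only one row has been computed so far
```

With materialize(srf), one next() call would leave calls == [1, 2, 3], which is exactly the forced-up-front work a percall mode avoids.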
Anyway I'll draft a prototype and then we can compare.
Ok, cool.
Andres
On 2016-09-12 09:14:47 -0700, Andres Freund wrote:
On 2016-09-12 10:19:14 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
0001-Add-some-more-targetlist-srf-tests.patch
Add some tests.
I think you should go ahead and push this tests-adding patch now, as it
adds documentation of the current behavior that is a good thing to have
independently of what the rest of the patchset does. Also, I'm worried
that some of the GROUP BY tests might have machine-dependent results
(if they are implemented by hashing) so it would be good to get in a few
buildfarm cycles and let that settle out before we start changing the
answers.
Generally a sound plan - I started to noticeably expand it though,
there are some important edge cases it didn't cover.
Attached is a noticeably expanded set of tests, with fixes for the stuff
you'd found. I plan to push that soon-ish. Independent of the approach
we'll be choosing, increased coverage seems quite useful.
Andres
On 2016-09-12 16:56:32 -0700, Andres Freund wrote:
On 2016-09-12 09:14:47 -0700, Andres Freund wrote:
On 2016-09-12 10:19:14 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
0001-Add-some-more-targetlist-srf-tests.patch
Add some tests.
I think you should go ahead and push this tests-adding patch now, as it
adds documentation of the current behavior that is a good thing to have
independently of what the rest of the patchset does. Also, I'm worried
that some of the GROUP BY tests might have machine-dependent results
(if they are implemented by hashing) so it would be good to get in a few
buildfarm cycles and let that settle out before we start changing the
answers.
Generally a sound plan - I started to noticeably expand it though,
there are some important edge cases it didn't cover.
Attached is a noticeably expanded set of tests, with fixes for the stuff
you'd found. I plan to push that soon-ish. Independent of the approach
we'll be choosing, increased coverage seems quite useful.
And for real.
Attachments:
0001-Add-more-tests-for-targetlist-SRFs.patch (text/x-patch; charset=us-ascii)
From 3bdaab7028c0ae7cf9bea666a6e555adbc68640e Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Wed, 3 Aug 2016 18:29:42 -0700
Subject: [PATCH] Add more tests for targetlist SRFs.
We're considering changing the implementation of targetlist SRFs
considerably, and a lot of the current behaviour isn't tested in our
regression tests. Thus it seems useful to increase coverage to avoid
accidental behaviour changes.
It's quite possible that some of the plans here will require adjustments
to avoid falling afoul of ordering differences (e.g. hashed group
bys). The buildfarm will tell us.
Reviewed-By: Tom Lane
Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
---
src/test/regress/expected/tsrf.out | 501 +++++++++++++++++++++++++++++++++++++
src/test/regress/parallel_schedule | 2 +-
src/test/regress/serial_schedule | 1 +
src/test/regress/sql/tsrf.sql | 124 +++++++++
4 files changed, 627 insertions(+), 1 deletion(-)
create mode 100644 src/test/regress/expected/tsrf.out
create mode 100644 src/test/regress/sql/tsrf.sql
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
new file mode 100644
index 0000000..983ce17
--- /dev/null
+++ b/src/test/regress/expected/tsrf.out
@@ -0,0 +1,501 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+-- simple srf
+SELECT generate_series(1, 3);
+ generate_series
+-----------------
+ 1
+ 2
+ 3
+(3 rows)
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 3
+ 2 | 4
+ 3 | 5
+(3 rows)
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+ generate_series | generate_series
+-----------------+-----------------
+ 1 | 1
+ 2 | 2
+ 1 | 3
+ 2 | 4
+(4 rows)
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+ generate_series
+-----------------
+ 1
+ 1
+ 2
+ 1
+ 2
+ 3
+(6 rows)
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+ERROR: functions and operators can take at most one set argument
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+-- SRF output order of sorting is maintained, if SRF is not referenced
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+ id | g
+----+---
+ 3 | 1
+ 3 | 2
+ 3 | 3
+ 2 | 1
+ 2 | 2
+ 2 | 3
+ 1 | 1
+ 1 | 2
+ 1 | 3
+(9 rows)
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+ id | g
+----+---
+ 1 | 3
+ 1 | 2
+ 1 | 1
+ 2 | 3
+ 2 | 2
+ 2 | 1
+ 3 | 3
+ 3 | 2
+ 3 | 1
+(9 rows)
+
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3) DESC;
+ id | g
+----+---
+ 1 | 3
+ 1 | 2
+ 1 | 1
+ 2 | 3
+ 2 | 2
+ 2 | 1
+ 3 | 3
+ 3 | 2
+ 3 | 1
+(9 rows)
+
+-- it's weird to have ORDER BYs that increase the number of results
+SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
+ id
+----
+ 1
+ 1
+ 1
+ 2
+ 2
+ 2
+ 3
+ 3
+ 3
+(9 rows)
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa;
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 1 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(3 rows)
+
+-- unless referenced in GROUP BY clause
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 2 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(2 rows)
+
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
+ dataa | count | min | max | unnest
+-------+-------+-----+-----+--------
+ a | 2 | 1 | 1 | 1
+ a | 1 | 1 | 1 | 3
+(2 rows)
+
+-- check HAVING works when GROUP BY does [not] reference SRF output
+SELECT dataa, generate_series(1,3), count(*) FROM few GROUP BY 1 HAVING count(*) > 1;
+ dataa | generate_series | count
+-------+-----------------+-------
+ a | 1 | 2
+ a | 2 | 2
+ a | 3 | 2
+(3 rows)
+
+SELECT dataa, generate_series(1,3), count(*) FROM few GROUP BY 1, 2 HAVING count(*) > 1;
+ dataa | generate_series | count
+-------+-----------------+-------
+ a | 1 | 2
+ a | 2 | 2
+ a | 3 | 2
+(3 rows)
+
+-- it's weird to have GROUP BYs that increase the number of results
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
+ dataa | count | min | max
+-------+-------+-----+-----
+ b | 1 | 3 | 3
+ a | 2 | 1 | 2
+(2 rows)
+
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+ dataa | count | min | max
+-------+-------+-----+-----
+ b | 2 | 3 | 3
+ a | 4 | 1 | 2
+ b | 1 | 3 | 3
+ a | 2 | 1 | 2
+(4 rows)
+
+-- SRFs are not allowed in aggregate arguments
+SELECT min(generate_series(1, 3)) FROM few;
+ERROR: set-valued function called in context that cannot accept a set
+-- SRFs are normally computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+ id | lag | count | generate_series
+----+-----+-------+-----------------
+ 1 | | 3 | 1
+ 1 | | 3 | 2
+ 1 | | 3 | 3
+ 2 | 1 | 3 | 1
+ 2 | 1 | 3 | 2
+ 2 | 1 | 3 | 3
+ 3 | 2 | 3 | 1
+ 3 | 2 | 3 | 2
+ 3 | 2 | 3 | 3
+(9 rows)
+
+-- unless referencing SRFs
+SELECT SUM(count(*)) OVER(PARTITION BY generate_series(1,3) ORDER BY generate_series(1,3)), generate_series(1,3) g FROM few GROUP BY g;
+ sum | g
+-----+---
+ 3 | 1
+ 3 | 2
+ 3 | 3
+(3 rows)
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+ dataa | count | min | max | generate_series
+-------+-------+-----+-----+-----------------
+ b | 1 | 3 | 3 | 1
+ a | 2 | 1 | 2 | 1
+ b | 1 | 3 | 3 | 2
+ a | 2 | 1 | 2 | 2
+ b | 1 | 3 | 3 | 3
+ a | 2 | 1 | 2 | 3
+(6 rows)
+
+-- grouping sets are a bit special, they produce NULLs in columns not actually NULL
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab);
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | bar | 2 | 1
+ a | foo | 1 | 1
+ a | foo | 2 | 1
+ a | | 1 | 2
+ a | | 2 | 2
+ b | bar | 1 | 1
+ b | bar | 2 | 1
+ b | | 1 | 1
+ b | | 2 | 1
+ | | 1 | 3
+ | | 2 | 3
+ | bar | 1 | 2
+ | bar | 2 | 2
+ | foo | 1 | 1
+ | foo | 2 | 1
+(16 rows)
+
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY dataa;
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | bar | 2 | 1
+ a | foo | 1 | 1
+ a | foo | 2 | 1
+ a | | 1 | 2
+ a | | 2 | 2
+ b | bar | 1 | 1
+ b | bar | 2 | 1
+ b | | 1 | 1
+ b | | 2 | 1
+ | | 1 | 3
+ | | 2 | 3
+ | bar | 1 | 2
+ | bar | 2 | 2
+ | foo | 1 | 1
+ | foo | 2 | 1
+(16 rows)
+
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY g;
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | foo | 1 | 1
+ a | | 1 | 2
+ b | bar | 1 | 1
+ b | | 1 | 1
+ | | 1 | 3
+ | bar | 1 | 2
+ | foo | 1 | 1
+ | foo | 2 | 1
+ a | bar | 2 | 1
+ b | | 2 | 1
+ a | foo | 2 | 1
+ | bar | 2 | 2
+ a | | 2 | 2
+ | | 2 | 3
+ b | bar | 2 | 1
+(16 rows)
+
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g);
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | bar | 2 | 1
+ a | bar | | 2
+ a | foo | 1 | 1
+ a | foo | 2 | 1
+ a | foo | | 2
+ a | | | 4
+ b | bar | 1 | 1
+ b | bar | 2 | 1
+ b | bar | | 2
+ b | | | 2
+ | | | 6
+ a | | 1 | 2
+ b | | 1 | 1
+ | | 1 | 3
+ a | | 2 | 2
+ b | | 2 | 1
+ | | 2 | 3
+ | bar | 1 | 2
+ | bar | 2 | 2
+ | bar | | 4
+ | foo | 1 | 1
+ | foo | 2 | 1
+ | foo | | 2
+(24 rows)
+
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g) ORDER BY dataa;
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | bar | 2 | 1
+ a | bar | | 2
+ a | foo | 1 | 1
+ a | foo | 2 | 1
+ a | foo | | 2
+ a | | | 4
+ a | | 1 | 2
+ a | | 2 | 2
+ b | bar | 2 | 1
+ b | | | 2
+ b | | 1 | 1
+ b | | 2 | 1
+ b | bar | 1 | 1
+ b | bar | | 2
+ | foo | | 2
+ | foo | 1 | 1
+ | | 2 | 3
+ | bar | 1 | 2
+ | bar | 2 | 2
+ | | | 6
+ | foo | 2 | 1
+ | bar | | 4
+ | | 1 | 3
+(24 rows)
+
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g) ORDER BY g;
+ dataa | b | g | count
+-------+-----+---+-------
+ a | bar | 1 | 1
+ a | foo | 1 | 1
+ b | bar | 1 | 1
+ a | | 1 | 2
+ b | | 1 | 1
+ | | 1 | 3
+ | bar | 1 | 2
+ | foo | 1 | 1
+ | foo | 2 | 1
+ | bar | 2 | 2
+ a | | 2 | 2
+ b | | 2 | 1
+ a | bar | 2 | 1
+ | | 2 | 3
+ a | foo | 2 | 1
+ b | bar | 2 | 1
+ a | foo | | 2
+ b | bar | | 2
+ b | | | 2
+ | | | 6
+ a | | | 4
+ | bar | | 4
+ | foo | | 2
+ a | bar | | 2
+(24 rows)
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+ data
+------
+ 1
+ 2
+ 3
+ 4
+ 5
+(5 rows)
+
+-- nonsense that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+-- SRFs are not allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+ERROR: set-valued function called in context that cannot accept a set
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+ERROR: set-valued function called in context that cannot accept a set
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+ERROR: set-valued function called in context that cannot accept a set
+-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
+-- referenced either in ORDER BY or in the DISTINCT ON list. The ORDER
+-- BY reference can be implicitly generated, if there's no other ORDER BY.
+-- implicit reference (via implicit ORDER) to all columns
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b);
+ a | b | g
+---+---+---
+ 1 | 1 | 1
+ 3 | 2 | 1
+ 5 | 3 | 1
+(3 rows)
+
+-- unreferenced in DISTINCT ON or ORDER BY
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC;
+ a | b | g
+---+---+---
+ 1 | 4 | 1
+ 1 | 4 | 2
+ 1 | 4 | 3
+ 3 | 2 | 1
+ 3 | 2 | 2
+ 3 | 2 | 3
+ 5 | 3 | 1
+ 5 | 3 | 2
+ 5 | 3 | 3
+(9 rows)
+
+-- referenced in ORDER BY
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC, g DESC;
+ a | b | g
+---+---+---
+ 1 | 4 | 3
+ 3 | 2 | 3
+ 5 | 3 | 3
+(3 rows)
+
+-- referenced in ORDER BY and DISTINCT ON
+SELECT DISTINCT ON (a, b, g) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC, g DESC;
+ a | b | g
+---+---+---
+ 1 | 4 | 3
+ 1 | 4 | 2
+ 1 | 4 | 1
+ 1 | 1 | 3
+ 1 | 1 | 2
+ 1 | 1 | 1
+ 3 | 2 | 3
+ 3 | 2 | 2
+ 3 | 2 | 1
+ 3 | 1 | 3
+ 3 | 1 | 2
+ 3 | 1 | 1
+ 5 | 3 | 3
+ 5 | 3 | 2
+ 5 | 3 | 1
+ 5 | 1 | 3
+ 5 | 1 | 2
+ 5 | 1 | 1
+(18 rows)
+
+-- only SRF mentioned in DISTINCT ON
+SELECT DISTINCT ON (g) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b);
+ a | b | g
+---+---+---
+ 3 | 2 | 1
+ 5 | 1 | 2
+ 3 | 1 | 3
+(3 rows)
+
+-- LIMIT / OFFSET is evaluated after SRF evaluation
+SELECT a, generate_series(1,2) FROM (VALUES(1),(2),(3)) r(a) LIMIT 2 OFFSET 2;
+ a | generate_series
+---+-----------------
+ 2 | 1
+ 2 | 2
+(2 rows)
+
+-- SRFs are not allowed in LIMIT.
+SELECT 1 LIMIT generate_series(1,3);
+ERROR: argument of LIMIT must not return a set
+LINE 1: SELECT 1 LIMIT generate_series(1,3);
+ ^
+-- tSRF in correlated subquery, referencing table outside
+SELECT (SELECT generate_series(1,3) LIMIT 1 OFFSET few.id) FROM few;
+ generate_series
+-----------------
+ 2
+ 3
+
+(3 rows)
+
+-- tSRF in correlated subquery, referencing SRF outside
+SELECT (SELECT generate_series(1,3) LIMIT 1 OFFSET g.i) FROM generate_series(0,3) g(i);
+ generate_series
+-----------------
+ 1
+ 2
+ 3
+
+(4 rows)
+
+-- Operators can return sets too
+CREATE OPERATOR |@| (PROCEDURE = unnest, RIGHTARG = ANYARRAY);
+SELECT |@|ARRAY[1,2,3];
+ ?column?
+----------
+ 1
+ 2
+ 3
+(3 rows)
+
+-- Clean up
+DROP TABLE few;
+DROP TABLE fewmore;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 1cb5dfc..8641769 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -89,7 +89,7 @@ test: brin gin gist spgist privileges init_privs security_label collate matview
# ----------
# Another group of parallel tests
# ----------
-test: alter_generic alter_operator misc psql async dbsize misc_functions
+test: alter_generic alter_operator misc psql async dbsize misc_functions tsrf
# rules cannot run concurrently with any test that creates a view
test: rules psql_crosstab amutils
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 8958d8c..835cf35 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -123,6 +123,7 @@ test: psql
test: async
test: dbsize
test: misc_functions
+test: tsrf
test: rules
test: psql_crosstab
test: select_parallel
diff --git a/src/test/regress/sql/tsrf.sql b/src/test/regress/sql/tsrf.sql
new file mode 100644
index 0000000..633dfd6
--- /dev/null
+++ b/src/test/regress/sql/tsrf.sql
@@ -0,0 +1,124 @@
+--
+-- tsrf - targetlist set returning function tests
+--
+
+-- simple srf
+SELECT generate_series(1, 3);
+
+-- parallel iteration
+SELECT generate_series(1, 3), generate_series(3,5);
+
+-- parallel iteration, different number of rows
+SELECT generate_series(1, 2), generate_series(1,4);
+
+-- srf, with SRF argument
+SELECT generate_series(1, generate_series(1, 3));
+
+-- srf, with two SRF arguments
+SELECT generate_series(generate_series(1,3), generate_series(2, 4));
+
+CREATE TABLE few(id int, dataa text, datab text);
+INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
+
+-- SRF output order of sorting is maintained, if SRF is not referenced
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id DESC;
+
+-- but SRFs can be referenced in sort
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, g DESC;
+SELECT few.id, generate_series(1,3) g FROM few ORDER BY id, generate_series(1,3) DESC;
+
+-- it's weird to have ORDER BYs that increase the number of results
+SELECT few.id FROM few ORDER BY id, generate_series(1,3) DESC;
+
+-- SRFs are computed after aggregation
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa;
+-- unless referenced in GROUP BY clause
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
+
+-- check HAVING works when GROUP BY does [not] reference SRF output
+SELECT dataa, generate_series(1,3), count(*) FROM few GROUP BY 1 HAVING count(*) > 1;
+SELECT dataa, generate_series(1,3), count(*) FROM few GROUP BY 1, 2 HAVING count(*) > 1;
+
+-- it's weird to have GROUP BYs that increase the number of results
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa;
+SELECT few.dataa, count(*), min(id), max(id) FROM few GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
+
+-- SRFs are not allowed in aggregate arguments
+SELECT min(generate_series(1, 3)) FROM few;
+
+-- SRFs are normally computed after window functions
+SELECT id,lag(id) OVER(), count(*) OVER(), generate_series(1,3) FROM few;
+-- unless referencing SRFs
+SELECT SUM(count(*)) OVER(PARTITION BY generate_series(1,3) ORDER BY generate_series(1,3)), generate_series(1,3) g FROM few GROUP BY g;
+
+-- sorting + grouping
+SELECT few.dataa, count(*), min(id), max(id), generate_series(1,3) FROM few GROUP BY few.dataa ORDER BY 5;
+
+-- grouping sets are a bit special, they produce NULLs in columns not actually NULL
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab);
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY dataa;
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab) ORDER BY g;
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g);
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g) ORDER BY dataa;
+SELECT dataa, datab b, generate_series(1,2) g, count(*) FROM few GROUP BY CUBE(dataa, datab, g) ORDER BY g;
+
+-- data modification
+CREATE TABLE fewmore AS SELECT generate_series(1,3) AS data;
+INSERT INTO fewmore VALUES(generate_series(4,5));
+SELECT * FROM fewmore;
+
+-- nonsense that seems to be allowed
+UPDATE fewmore SET data = generate_series(4,9);
+
+-- SRFs are not allowed in RETURNING
+INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
+-- nor aggregate arguments
+SELECT count(generate_series(1,3)) FROM few;
+-- nor proper VALUES
+VALUES(1, generate_series(1,2));
+
+-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
+-- referenced either in ORDER BY or in the DISTINCT ON list. The ORDER
+-- BY reference can be implicitly generated, if there's no other ORDER BY.
+
+-- implicit reference (via implicit ORDER) to all columns
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b);
+
+-- unreferenced in DISTINCT ON or ORDER BY
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC;
+
+-- referenced in ORDER BY
+SELECT DISTINCT ON (a) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC, g DESC;
+
+-- referenced in ORDER BY and DISTINCT ON
+SELECT DISTINCT ON (a, b, g) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b)
+ORDER BY a, b DESC, g DESC;
+
+-- only SRF mentioned in DISTINCT ON
+SELECT DISTINCT ON (g) a, b, generate_series(1,3) g
+FROM (VALUES (3, 2), (3,1), (1,1), (1,4), (5,3), (5,1)) AS t(a, b);
+
+-- LIMIT / OFFSET is evaluated after SRF evaluation
+SELECT a, generate_series(1,2) FROM (VALUES(1),(2),(3)) r(a) LIMIT 2 OFFSET 2;
+-- SRFs are not allowed in LIMIT.
+SELECT 1 LIMIT generate_series(1,3);
+
+-- tSRF in correlated subquery, referencing table outside
+SELECT (SELECT generate_series(1,3) LIMIT 1 OFFSET few.id) FROM few;
+-- tSRF in correlated subquery, referencing SRF outside
+SELECT (SELECT generate_series(1,3) LIMIT 1 OFFSET g.i) FROM generate_series(0,3) g(i);
+
+-- Operators can return sets too
+CREATE OPERATOR |@| (PROCEDURE = unnest, RIGHTARG = ANYARRAY);
+SELECT |@|ARRAY[1,2,3];
+
+-- Clean up
+DROP TABLE few;
+DROP TABLE fewmore;
--
2.9.3
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 16:56:32 -0700, Andres Freund wrote:
Attached is a noticeably expanded set of tests, with fixes for the stuff
you'd found. I plan to push that soon-ish. Independent of the approach
we'll be choosing, increased coverage seems quite useful.
And for real.
Looks good to me, please push.
regards, tom lane
On 2016-09-12 20:15:46 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 16:56:32 -0700, Andres Freund wrote:
Attached is a noticeably expanded set of tests, with fixes for the stuff
you'd found. I plan to push that soon-ish. Independent of the approach
we'll be choosing, increased coverage seems quite useful.
And for real.
Looks good to me, please push.
Done.
Andres Freund <andres@anarazel.de> writes:
Attached is a significantly updated patch series (see the mail one up
for details about what this is; I don't want to quote it in its
entirety).
I've reviewed the portions of 0005 that have to do with making the parser
mark queries with hasTargetSRF. The code as you had it was wrong because
it would set the flag as a consequence of SRFs in function RTEs, which
we don't want. It seemed to me that the best way to fix that was to rely
on the parser's p_expr_kind mechanism to tell which part of the query
we're in, whereupon we might as well make the parser act more like it does
for aggregates and window functions and give a suitable error at parse
time for misplaced SRFs. The attached isn't perfect, in that it doesn't
know about nesting restrictions (ie that SRFs must be at top level of a
function RTE), but we could improve that later if we wanted, and anyway
it's definitely a good bit nicer than before.
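To illustrate the improved parse-time error reporting, here is a hedged sketch of what a misplaced SRF now produces (the error texts come from the check_srf_call_placement switch in the attached patch; the table name is made up for the example):

```sql
-- SRF in WHERE: reported via the generic errkind path,
-- using ParseExprKindName to name the clause
SELECT * FROM sometab WHERE generate_series(1, 3) = 2;
-- ERROR:  set-returning functions are not allowed in WHERE

-- SRF in a JOIN condition: gets the custom message
SELECT * FROM sometab a JOIN sometab b ON generate_series(1, 3) > 0;
-- ERROR:  set-returning functions are not allowed in JOIN conditions
```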
This also incorporates the part of 0002 that I agree with, namely
disallowing SRFs in UPDATE, since check_srf_call_placement() naturally
would do that.
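For instance, the kind of UPDATE the tsrf test file flags as "nonsense that seems to be allowed" would now be rejected at parse time. Sketching the expected behavior per the EXPR_KIND_UPDATE_SOURCE case in the patch (table name illustrative):

```sql
UPDATE sometab SET id = generate_series(1, 3);
-- ERROR:  set-returning functions are not allowed in UPDATE
```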
I also renamed the flag to hasTargetSRFs, which is more parallel to
hasAggs and hasWindowFuncs, and made some effort to use it in place
of expression_returns_set() searches.
I'd like to go ahead and push this, since it's a necessary prerequisite
for either approach we might adopt for the rest of the patch series,
and the improved error reporting and avoidance of expensive
expression_returns_set searches make it a win IMO even if we were not
planning to do anything more with SRFs.
regards, tom lane
Attachments: add-Query.hasTargetSRFs-field.patch (text/x-diff)
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index e997b57..dbd6094 100644
*** a/src/backend/catalog/heap.c
--- b/src/backend/catalog/heap.c
*************** cookDefault(ParseState *pstate,
*** 2560,2573 ****
/*
* transformExpr() should have already rejected subqueries, aggregates,
! * and window functions, based on the EXPR_KIND_ for a default expression.
! *
! * It can't return a set either.
*/
- if (expression_returns_set(expr))
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("default expression must not return a set")));
/*
* Coerce the expression to the correct type and typmod, if given. This
--- 2560,2568 ----
/*
* transformExpr() should have already rejected subqueries, aggregates,
! * window functions, and SRFs, based on the EXPR_KIND_ for a default
! * expression.
*/
/*
* Coerce the expression to the correct type and typmod, if given. This
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 4f39dad..71714bc 100644
*** a/src/backend/nodes/copyfuncs.c
--- b/src/backend/nodes/copyfuncs.c
*************** _copyQuery(const Query *from)
*** 2731,2736 ****
--- 2731,2737 ----
COPY_SCALAR_FIELD(resultRelation);
COPY_SCALAR_FIELD(hasAggs);
COPY_SCALAR_FIELD(hasWindowFuncs);
+ COPY_SCALAR_FIELD(hasTargetSRFs);
COPY_SCALAR_FIELD(hasSubLinks);
COPY_SCALAR_FIELD(hasDistinctOn);
COPY_SCALAR_FIELD(hasRecursive);
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 4800165..29a090f 100644
*** a/src/backend/nodes/equalfuncs.c
--- b/src/backend/nodes/equalfuncs.c
*************** _equalQuery(const Query *a, const Query
*** 921,926 ****
--- 921,927 ----
COMPARE_SCALAR_FIELD(resultRelation);
COMPARE_SCALAR_FIELD(hasAggs);
COMPARE_SCALAR_FIELD(hasWindowFuncs);
+ COMPARE_SCALAR_FIELD(hasTargetSRFs);
COMPARE_SCALAR_FIELD(hasSubLinks);
COMPARE_SCALAR_FIELD(hasDistinctOn);
COMPARE_SCALAR_FIELD(hasRecursive);
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 90fecb1..7e092d7 100644
*** a/src/backend/nodes/outfuncs.c
--- b/src/backend/nodes/outfuncs.c
*************** _outQuery(StringInfo str, const Query *n
*** 2683,2688 ****
--- 2683,2689 ----
WRITE_INT_FIELD(resultRelation);
WRITE_BOOL_FIELD(hasAggs);
WRITE_BOOL_FIELD(hasWindowFuncs);
+ WRITE_BOOL_FIELD(hasTargetSRFs);
WRITE_BOOL_FIELD(hasSubLinks);
WRITE_BOOL_FIELD(hasDistinctOn);
WRITE_BOOL_FIELD(hasRecursive);
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 894a48f..917e6c8 100644
*** a/src/backend/nodes/readfuncs.c
--- b/src/backend/nodes/readfuncs.c
*************** _readQuery(void)
*** 238,243 ****
--- 238,244 ----
READ_INT_FIELD(resultRelation);
READ_BOOL_FIELD(hasAggs);
READ_BOOL_FIELD(hasWindowFuncs);
+ READ_BOOL_FIELD(hasTargetSRFs);
READ_BOOL_FIELD(hasSubLinks);
READ_BOOL_FIELD(hasDistinctOn);
READ_BOOL_FIELD(hasRecursive);
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 04264b4..99b6bc8 100644
*** a/src/backend/optimizer/path/allpaths.c
--- b/src/backend/optimizer/path/allpaths.c
*************** check_output_expressions(Query *subquery
*** 2422,2428 ****
continue;
/* Functions returning sets are unsafe (point 1) */
! if (expression_returns_set((Node *) tle->expr))
{
safetyInfo->unsafeColumns[tle->resno] = true;
continue;
--- 2422,2429 ----
continue;
/* Functions returning sets are unsafe (point 1) */
! if (subquery->hasTargetSRFs &&
! expression_returns_set((Node *) tle->expr))
{
safetyInfo->unsafeColumns[tle->resno] = true;
continue;
*************** remove_unused_subquery_outputs(Query *su
*** 2835,2841 ****
* If it contains a set-returning function, we can't remove it since
* that could change the number of rows returned by the subquery.
*/
! if (expression_returns_set(texpr))
continue;
/*
--- 2836,2843 ----
* If it contains a set-returning function, we can't remove it since
* that could change the number of rows returned by the subquery.
*/
! if (subquery->hasTargetSRFs &&
! expression_returns_set(texpr))
continue;
/*
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index e28a8dc..74e4245 100644
*** a/src/backend/optimizer/plan/analyzejoins.c
--- b/src/backend/optimizer/plan/analyzejoins.c
*************** rel_is_distinct_for(PlannerInfo *root, R
*** 650,655 ****
--- 650,660 ----
bool
query_supports_distinctness(Query *query)
{
+ /* we don't cope with SRFs, see comment below */
+ if (query->hasTargetSRFs)
+ return false;
+
+ /* check for features we can prove distinctness with */
if (query->distinctClause != NIL ||
query->groupClause != NIL ||
query->groupingSets != NIL ||
*************** query_is_distinct_for(Query *query, List
*** 695,701 ****
* specified columns, since those must be evaluated before de-duplication;
* but it doesn't presently seem worth the complication to check that.)
*/
! if (expression_returns_set((Node *) query->targetList))
return false;
/*
--- 700,706 ----
* specified columns, since those must be evaluated before de-duplication;
* but it doesn't presently seem worth the complication to check that.)
*/
! if (query->hasTargetSRFs)
return false;
/*
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 174210b..f657ffc 100644
*** a/src/backend/optimizer/plan/planner.c
--- b/src/backend/optimizer/plan/planner.c
*************** subquery_planner(PlannerGlobal *glob, Qu
*** 604,609 ****
--- 604,613 ----
preprocess_expression(root, (Node *) parse->targetList,
EXPRKIND_TARGET);
+ /* Constant-folding might have removed all set-returning functions */
+ if (parse->hasTargetSRFs)
+ parse->hasTargetSRFs = expression_returns_set((Node *) parse->targetList);
+
newWithCheckOptions = NIL;
foreach(l, parse->withCheckOptions)
{
*************** grouping_planner(PlannerInfo *root, bool
*** 1702,1717 ****
* Figure out whether there's a hard limit on the number of rows that
* query_planner's result subplan needs to return. Even if we know a
* hard limit overall, it doesn't apply if the query has any
! * grouping/aggregation operations. (XXX it also doesn't apply if the
! * tlist contains any SRFs; but checking for that here seems more
! * costly than it's worth, since root->limit_tuples is only used for
! * cost estimates, and only in a small number of cases.)
*/
if (parse->groupClause ||
parse->groupingSets ||
parse->distinctClause ||
parse->hasAggs ||
parse->hasWindowFuncs ||
root->hasHavingQual)
root->limit_tuples = -1.0;
else
--- 1706,1719 ----
* Figure out whether there's a hard limit on the number of rows that
* query_planner's result subplan needs to return. Even if we know a
* hard limit overall, it doesn't apply if the query has any
! * grouping/aggregation operations, or SRFs in the tlist.
*/
if (parse->groupClause ||
parse->groupingSets ||
parse->distinctClause ||
parse->hasAggs ||
parse->hasWindowFuncs ||
+ parse->hasTargetSRFs ||
root->hasHavingQual)
root->limit_tuples = -1.0;
else
*************** grouping_planner(PlannerInfo *root, bool
*** 1928,1934 ****
* weird usage that it doesn't seem worth greatly complicating matters to
* account for it.
*/
! tlist_rows = tlist_returns_set_rows(tlist);
if (tlist_rows > 1)
{
foreach(lc, current_rel->pathlist)
--- 1930,1940 ----
* weird usage that it doesn't seem worth greatly complicating matters to
* account for it.
*/
! if (parse->hasTargetSRFs)
! tlist_rows = tlist_returns_set_rows(tlist);
! else
! tlist_rows = 1;
!
if (tlist_rows > 1)
{
foreach(lc, current_rel->pathlist)
*************** make_sort_input_target(PlannerInfo *root
*** 4995,5001 ****
* Check for SRF or volatile functions. Check the SRF case first
* because we must know whether we have any postponed SRFs.
*/
! if (expression_returns_set((Node *) expr))
{
/* We'll decide below whether these are postponable */
col_is_srf[i] = true;
--- 5001,5008 ----
* Check for SRF or volatile functions. Check the SRF case first
* because we must know whether we have any postponed SRFs.
*/
! if (parse->hasTargetSRFs &&
! expression_returns_set((Node *) expr))
{
/* We'll decide below whether these are postponable */
col_is_srf[i] = true;
*************** make_sort_input_target(PlannerInfo *root
*** 5034,5039 ****
--- 5041,5047 ----
{
/* For sortgroupref cols, just check if any contain SRFs */
if (!have_srf_sortcols &&
+ parse->hasTargetSRFs &&
expression_returns_set((Node *) expr))
have_srf_sortcols = true;
}
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 6edefb1..b5d3e94 100644
*** a/src/backend/optimizer/plan/subselect.c
--- b/src/backend/optimizer/plan/subselect.c
*************** simplify_EXISTS_query(PlannerInfo *root,
*** 1562,1568 ****
{
/*
* We don't try to simplify at all if the query uses set operations,
! * aggregates, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
* UPDATE/SHARE; none of these seem likely in normal usage and their
* possible effects are complex. (Note: we could ignore an "OFFSET 0"
* clause, but that traditionally is used as an optimization fence, so we
--- 1562,1568 ----
{
/*
* We don't try to simplify at all if the query uses set operations,
! * aggregates, SRFs, grouping sets, modifying CTEs, HAVING, OFFSET, or FOR
* UPDATE/SHARE; none of these seem likely in normal usage and their
* possible effects are complex. (Note: we could ignore an "OFFSET 0"
* clause, but that traditionally is used as an optimization fence, so we
*************** simplify_EXISTS_query(PlannerInfo *root,
*** 1573,1578 ****
--- 1573,1579 ----
query->hasAggs ||
query->groupingSets ||
query->hasWindowFuncs ||
+ query->hasTargetSRFs ||
query->hasModifyingCTE ||
query->havingQual ||
query->limitOffset ||
*************** simplify_EXISTS_query(PlannerInfo *root,
*** 1614,1626 ****
}
/*
- * Mustn't throw away the targetlist if it contains set-returning
- * functions; those could affect whether zero rows are returned!
- */
- if (expression_returns_set((Node *) query->targetList))
- return false;
-
- /*
* Otherwise, we can throw away the targetlist, as well as any GROUP,
* WINDOW, DISTINCT, and ORDER BY clauses; none of those clauses will
* change a nonzero-rows result to zero rows or vice versa. (Furthermore,
--- 1615,1620 ----
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index a334f15..878db9b 100644
*** a/src/backend/optimizer/prep/prepjointree.c
--- b/src/backend/optimizer/prep/prepjointree.c
*************** pull_up_simple_subquery(PlannerInfo *roo
*** 1188,1195 ****
parse->hasSubLinks |= subquery->hasSubLinks;
/*
! * subquery won't be pulled up if it hasAggs or hasWindowFuncs, so no work
! * needed on those flags
*/
/*
--- 1188,1195 ----
parse->hasSubLinks |= subquery->hasSubLinks;
/*
! * subquery won't be pulled up if it hasAggs, hasWindowFuncs, or
! * hasTargetSRFs, so no work needed on those flags
*/
/*
*************** is_simple_subquery(Query *subquery, Rang
*** 1419,1426 ****
return false;
/*
! * Can't pull up a subquery involving grouping, aggregation, sorting,
! * limiting, or WITH. (XXX WITH could possibly be allowed later)
*
* We also don't pull up a subquery that has explicit FOR UPDATE/SHARE
* clauses, because pullup would cause the locking to occur semantically
--- 1419,1426 ----
return false;
/*
! * Can't pull up a subquery involving grouping, aggregation, SRFs,
! * sorting, limiting, or WITH. (XXX WITH could possibly be allowed later)
*
* We also don't pull up a subquery that has explicit FOR UPDATE/SHARE
* clauses, because pullup would cause the locking to occur semantically
*************** is_simple_subquery(Query *subquery, Rang
*** 1430,1435 ****
--- 1430,1436 ----
*/
if (subquery->hasAggs ||
subquery->hasWindowFuncs ||
+ subquery->hasTargetSRFs ||
subquery->groupClause ||
subquery->groupingSets ||
subquery->havingQual ||
*************** is_simple_subquery(Query *subquery, Rang
*** 1543,1557 ****
}
/*
- * Don't pull up a subquery that has any set-returning functions in its
- * targetlist. Otherwise we might well wind up inserting set-returning
- * functions into places where they mustn't go, such as quals of higher
- * queries. This also ensures deletion of an empty jointree is valid.
- */
- if (expression_returns_set((Node *) subquery->targetList))
- return false;
-
- /*
* Don't pull up a subquery that has any volatile functions in its
* targetlist. Otherwise we might introduce multiple evaluations of these
* functions, if they get copied to multiple places in the upper query,
--- 1544,1549 ----
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index e1baf71..663ffe0 100644
*** a/src/backend/optimizer/util/clauses.c
--- b/src/backend/optimizer/util/clauses.c
*************** inline_function(Oid funcid, Oid result_t
*** 4449,4454 ****
--- 4449,4455 ----
querytree->utilityStmt ||
querytree->hasAggs ||
querytree->hasWindowFuncs ||
+ querytree->hasTargetSRFs ||
querytree->hasSubLinks ||
querytree->cteList ||
querytree->rtable ||
*************** inline_function(Oid funcid, Oid result_t
*** 4489,4505 ****
Assert(!modifyTargetList);
/*
! * Additional validity checks on the expression. It mustn't return a set,
! * and it mustn't be more volatile than the surrounding function (this is
! * to avoid breaking hacks that involve pretending a function is immutable
! * when it really ain't). If the surrounding function is declared strict,
! * then the expression must contain only strict constructs and must use
! * all of the function parameters (this is overkill, but an exact analysis
! * is hard).
*/
- if (expression_returns_set(newexpr))
- goto fail;
-
if (funcform->provolatile == PROVOLATILE_IMMUTABLE &&
contain_mutable_functions(newexpr))
goto fail;
--- 4490,4502 ----
Assert(!modifyTargetList);
/*
! * Additional validity checks on the expression. It mustn't be more
! * volatile than the surrounding function (this is to avoid breaking hacks
! * that involve pretending a function is immutable when it really ain't).
! * If the surrounding function is declared strict, then the expression
! * must contain only strict constructs and must use all of the function
! * parameters (this is overkill, but an exact analysis is hard).
*/
if (funcform->provolatile == PROVOLATILE_IMMUTABLE &&
contain_mutable_functions(newexpr))
goto fail;
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index eac86cc..870fae3 100644
*** a/src/backend/parser/analyze.c
--- b/src/backend/parser/analyze.c
*************** transformDeleteStmt(ParseState *pstate,
*** 417,422 ****
--- 417,423 ----
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
+ qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
if (pstate->p_hasAggs)
parseCheckAggregates(pstate, qry);
*************** transformInsertStmt(ParseState *pstate,
*** 819,824 ****
--- 820,826 ----
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
+ qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
assign_query_collations(pstate, qry);
*************** transformSelectStmt(ParseState *pstate,
*** 1231,1236 ****
--- 1233,1239 ----
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
+ qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
*************** transformSetOperationStmt(ParseState *ps
*** 1691,1696 ****
--- 1694,1700 ----
qry->hasSubLinks = pstate->p_hasSubLinks;
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
+ qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
if (pstate->p_hasAggs || qry->groupClause || qry->groupingSets || qry->havingQual)
parseCheckAggregates(pstate, qry);
*************** transformUpdateStmt(ParseState *pstate,
*** 2170,2175 ****
--- 2174,2180 ----
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
+ qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
assign_query_collations(pstate, qry);
*************** CheckSelectLocking(Query *qry, LockClaus
*** 2565,2571 ****
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with window functions",
LCS_asString(strength))));
! if (expression_returns_set((Node *) qry->targetList))
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
--- 2570,2576 ----
translator: %s is a SQL row locking clause such as FOR UPDATE */
errmsg("%s is not allowed with window functions",
LCS_asString(strength))));
! if (qry->hasTargetSRFs)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
/*------
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index 61af484..56c9a42 100644
*** a/src/backend/parser/parse_func.c
--- b/src/backend/parser/parse_func.c
***************
*** 25,30 ****
--- 25,31 ----
#include "parser/parse_agg.h"
#include "parser/parse_clause.h"
#include "parser/parse_coerce.h"
+ #include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_relation.h"
#include "parser/parse_target.h"
*************** ParseFuncOrColumn(ParseState *pstate, Li
*** 625,630 ****
--- 626,635 ----
exprLocation((Node *) llast(fargs)))));
}
+ /* if it returns a set, check that's OK */
+ if (retset)
+ check_srf_call_placement(pstate, location);
+
/* build the appropriate output structure */
if (fdresult == FUNCDETAIL_NORMAL)
{
*************** LookupAggNameTypeNames(List *aggname, Li
*** 2040,2042 ****
--- 2045,2190 ----
return oid;
}
+
+
+ /*
+ * check_srf_call_placement
+ * Verify that a set-returning function is called in a valid place,
+ * and throw a nice error if not.
+ *
+ * A side-effect is to set pstate->p_hasTargetSRFs true if appropriate.
+ */
+ void
+ check_srf_call_placement(ParseState *pstate, int location)
+ {
+ const char *err;
+ bool errkind;
+
+ /*
+ * Check to see if the set-returning function is in an invalid place
+ * within the query. Basically, we don't allow SRFs anywhere except in
+ * the targetlist (which includes GROUP BY/ORDER BY expressions), VALUES,
+ * and functions in FROM.
+ *
+ * For brevity we support two schemes for reporting an error here: set
+ * "err" to a custom message, or set "errkind" true if the error context
+ * is sufficiently identified by what ParseExprKindName will return, *and*
+ * what it will return is just a SQL keyword. (Otherwise, use a custom
+ * message to avoid creating translation problems.)
+ */
+ err = NULL;
+ errkind = false;
+ switch (pstate->p_expr_kind)
+ {
+ case EXPR_KIND_NONE:
+ Assert(false); /* can't happen */
+ break;
+ case EXPR_KIND_OTHER:
+ /* Accept SRF here; caller must throw error if wanted */
+ break;
+ case EXPR_KIND_JOIN_ON:
+ case EXPR_KIND_JOIN_USING:
+ err = _("set-returning functions are not allowed in JOIN conditions");
+ break;
+ case EXPR_KIND_FROM_SUBSELECT:
+ /* can't get here, but just in case, throw an error */
+ errkind = true;
+ break;
+ case EXPR_KIND_FROM_FUNCTION:
+ /* okay ... but we can't check nesting here */
+ break;
+ case EXPR_KIND_WHERE:
+ errkind = true;
+ break;
+ case EXPR_KIND_POLICY:
+ err = _("set-returning functions are not allowed in policy expressions");
+ break;
+ case EXPR_KIND_HAVING:
+ errkind = true;
+ break;
+ case EXPR_KIND_FILTER:
+ errkind = true;
+ break;
+ case EXPR_KIND_WINDOW_PARTITION:
+ case EXPR_KIND_WINDOW_ORDER:
+ /* okay, these are effectively GROUP BY/ORDER BY */
+ pstate->p_hasTargetSRFs = true;
+ break;
+ case EXPR_KIND_WINDOW_FRAME_RANGE:
+ case EXPR_KIND_WINDOW_FRAME_ROWS:
+ err = _("set-returning functions are not allowed in window definitions");
+ break;
+ case EXPR_KIND_SELECT_TARGET:
+ case EXPR_KIND_INSERT_TARGET:
+ /* okay */
+ pstate->p_hasTargetSRFs = true;
+ break;
+ case EXPR_KIND_UPDATE_SOURCE:
+ case EXPR_KIND_UPDATE_TARGET:
+ /* disallowed because it would be ambiguous what to do */
+ errkind = true;
+ break;
+ case EXPR_KIND_GROUP_BY:
+ case EXPR_KIND_ORDER_BY:
+ /* okay */
+ pstate->p_hasTargetSRFs = true;
+ break;
+ case EXPR_KIND_DISTINCT_ON:
+ /* okay */
+ pstate->p_hasTargetSRFs = true;
+ break;
+ case EXPR_KIND_LIMIT:
+ case EXPR_KIND_OFFSET:
+ errkind = true;
+ break;
+ case EXPR_KIND_RETURNING:
+ errkind = true;
+ break;
+ case EXPR_KIND_VALUES:
+ /* okay */
+ break;
+ case EXPR_KIND_CHECK_CONSTRAINT:
+ case EXPR_KIND_DOMAIN_CHECK:
+ err = _("set-returning functions are not allowed in check constraints");
+ break;
+ case EXPR_KIND_COLUMN_DEFAULT:
+ case EXPR_KIND_FUNCTION_DEFAULT:
+ err = _("set-returning functions are not allowed in DEFAULT expressions");
+ break;
+ case EXPR_KIND_INDEX_EXPRESSION:
+ err = _("set-returning functions are not allowed in index expressions");
+ break;
+ case EXPR_KIND_INDEX_PREDICATE:
+ err = _("set-returning functions are not allowed in index predicates");
+ break;
+ case EXPR_KIND_ALTER_COL_TRANSFORM:
+ err = _("set-returning functions are not allowed in transform expressions");
+ break;
+ case EXPR_KIND_EXECUTE_PARAMETER:
+ err = _("set-returning functions are not allowed in EXECUTE parameters");
+ break;
+ case EXPR_KIND_TRIGGER_WHEN:
+ err = _("set-returning functions are not allowed in trigger WHEN conditions");
+ break;
+
+ /*
+ * There is intentionally no default: case here, so that the
+ * compiler will warn if we add a new ParseExprKind without
+ * extending this switch. If we do see an unrecognized value at
+ * runtime, the behavior will be the same as for EXPR_KIND_OTHER,
+ * which is sane anyway.
+ */
+ }
+ if (err)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg_internal("%s", err),
+ parser_errposition(pstate, location)));
+ if (errkind)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ /* translator: %s is name of a SQL construct, eg GROUP BY */
+ errmsg("set-returning functions are not allowed in %s",
+ ParseExprKindName(pstate->p_expr_kind)),
+ parser_errposition(pstate, location)));
+ }
diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c
index e913d05..aecda6d 100644
*** a/src/backend/parser/parse_oper.c
--- b/src/backend/parser/parse_oper.c
*************** make_op(ParseState *pstate, List *opname
*** 839,844 ****
--- 839,848 ----
result->args = args;
result->location = location;
+ /* if it returns a set, check that's OK */
+ if (result->opretset)
+ check_srf_call_placement(pstate, location);
+
ReleaseSysCache(tup);
return (Expr *) result;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 7a2950e..eaffc49 100644
*** a/src/backend/parser/parse_utilcmd.c
--- b/src/backend/parser/parse_utilcmd.c
*************** transformIndexStmt(Oid relid, IndexStmt
*** 2106,2122 ****
/*
* transformExpr() should have already rejected subqueries,
! * aggregates, and window functions, based on the EXPR_KIND_ for
! * an index expression.
*
- * Also reject expressions returning sets; this is for consistency
- * with what transformWhereClause() checks for the predicate.
* DefineIndex() will make more checks.
*/
- if (expression_returns_set(ielem->expr))
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("index expression cannot return a set")));
}
}
--- 2106,2116 ----
/*
* transformExpr() should have already rejected subqueries,
! * aggregates, window functions, and SRFs, based on the EXPR_KIND_
! * for an index expression.
*
* DefineIndex() will make more checks.
*/
}
}
*************** transformAlterTableStmt(Oid relid, Alter
*** 2594,2605 ****
def->cooked_default =
transformExpr(pstate, def->raw_default,
EXPR_KIND_ALTER_COL_TRANSFORM);
-
- /* it can't return a set */
- if (expression_returns_set(def->cooked_default))
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("transform expression must not return a set")));
}
newcmds = lappend(newcmds, cmd);
--- 2588,2593 ----
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index a22a11e..b828e3c 100644
*** a/src/backend/rewrite/rewriteHandler.c
--- b/src/backend/rewrite/rewriteHandler.c
*************** view_query_is_auto_updatable(Query *view
*** 2221,2227 ****
if (viewquery->hasWindowFuncs)
return gettext_noop("Views that return window functions are not automatically updatable.");
! if (expression_returns_set((Node *) viewquery->targetList))
return gettext_noop("Views that return set-returning functions are not automatically updatable.");
/*
--- 2221,2227 ----
if (viewquery->hasWindowFuncs)
return gettext_noop("Views that return window functions are not automatically updatable.");
! if (viewquery->hasTargetSRFs)
return gettext_noop("Views that return set-returning functions are not automatically updatable.");
/*
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 8d3dcf4..6de2cab 100644
*** a/src/include/nodes/parsenodes.h
--- b/src/include/nodes/parsenodes.h
*************** typedef struct Query
*** 116,121 ****
--- 116,122 ----
bool hasAggs; /* has aggregates in tlist or havingQual */
bool hasWindowFuncs; /* has window functions in tlist */
+ bool hasTargetSRFs; /* has set-returning functions in tlist */
bool hasSubLinks; /* has subquery SubLink */
bool hasDistinctOn; /* distinctClause is from DISTINCT ON */
bool hasRecursive; /* WITH RECURSIVE was specified */
diff --git a/src/include/parser/parse_func.h b/src/include/parser/parse_func.h
index 0cefdf1..ed16d36 100644
*** a/src/include/parser/parse_func.h
--- b/src/include/parser/parse_func.h
*************** extern Oid LookupFuncNameTypeNames(List
*** 67,70 ****
--- 67,72 ----
extern Oid LookupAggNameTypeNames(List *aggname, List *argtypes,
bool noError);
+ extern void check_srf_call_placement(ParseState *pstate, int location);
+
#endif /* PARSE_FUNC_H */
diff --git a/src/include/parser/parse_node.h b/src/include/parser/parse_node.h
index e3e359c..6633586 100644
*** a/src/include/parser/parse_node.h
--- b/src/include/parser/parse_node.h
***************
*** 27,33 ****
* by extension code that might need to call transformExpr(). The core code
* will not enforce any context-driven restrictions on EXPR_KIND_OTHER
* expressions, so the caller would have to check for sub-selects, aggregates,
! * and window functions if those need to be disallowed.
*/
typedef enum ParseExprKind
{
--- 27,33 ----
* by extension code that might need to call transformExpr(). The core code
* will not enforce any context-driven restrictions on EXPR_KIND_OTHER
* expressions, so the caller would have to check for sub-selects, aggregates,
! * window functions, SRFs, etc if those need to be disallowed.
*/
typedef enum ParseExprKind
{
*************** struct ParseState
*** 150,155 ****
--- 150,156 ----
Node *p_value_substitute; /* what to replace VALUE with, if any */
bool p_hasAggs;
bool p_hasWindowFuncs;
+ bool p_hasTargetSRFs;
bool p_hasSubLinks;
bool p_hasModifyingCTE;
bool p_is_insert;
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index 6141b7a..470cf93 100644
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
*************** exec_simple_check_plan(PLpgSQL_execstate
*** 6799,6804 ****
--- 6799,6805 ----
*/
if (query->hasAggs ||
query->hasWindowFuncs ||
+ query->hasTargetSRFs ||
query->hasSubLinks ||
query->hasForUpdate ||
query->cteList ||
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index 805e8db..622f755 100644
*** a/src/test/regress/expected/tsrf.out
--- b/src/test/regress/expected/tsrf.out
*************** SELECT * FROM fewmore;
*** 359,373 ****
5
(5 rows)
! -- nonsense that seems to be allowed
UPDATE fewmore SET data = generate_series(4,9);
-- SRFs are not allowed in RETURNING
INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
! ERROR: set-valued function called in context that cannot accept a set
-- nor aggregate arguments
SELECT count(generate_series(1,3)) FROM few;
ERROR: set-valued function called in context that cannot accept a set
! -- nor proper VALUES
VALUES(1, generate_series(1,2));
ERROR: set-valued function called in context that cannot accept a set
-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
--- 359,378 ----
5
(5 rows)
! -- SRFs are not allowed in UPDATE (they once were, but it was nonsense)
UPDATE fewmore SET data = generate_series(4,9);
+ ERROR: set-returning functions are not allowed in UPDATE
+ LINE 1: UPDATE fewmore SET data = generate_series(4,9);
+ ^
-- SRFs are not allowed in RETURNING
INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
! ERROR: set-returning functions are not allowed in RETURNING
! LINE 1: INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3)...
! ^
-- nor aggregate arguments
SELECT count(generate_series(1,3)) FROM few;
ERROR: set-valued function called in context that cannot accept a set
! -- nor standalone VALUES (but surely this is a bug?)
VALUES(1, generate_series(1,2));
ERROR: set-valued function called in context that cannot accept a set
-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
*************** SELECT a, generate_series(1,2) FROM (VAL
*** 457,463 ****
-- SRFs are not allowed in LIMIT.
SELECT 1 LIMIT generate_series(1,3);
! ERROR: argument of LIMIT must not return a set
LINE 1: SELECT 1 LIMIT generate_series(1,3);
^
-- tSRF in correlated subquery, referencing table outside
--- 462,468 ----
-- SRFs are not allowed in LIMIT.
SELECT 1 LIMIT generate_series(1,3);
! ERROR: set-returning functions are not allowed in LIMIT
LINE 1: SELECT 1 LIMIT generate_series(1,3);
^
-- tSRF in correlated subquery, referencing table outside
diff --git a/src/test/regress/sql/tsrf.sql b/src/test/regress/sql/tsrf.sql
index 5247795..c28dd01 100644
*** a/src/test/regress/sql/tsrf.sql
--- b/src/test/regress/sql/tsrf.sql
*************** CREATE TABLE fewmore AS SELECT generate_
*** 68,81 ****
INSERT INTO fewmore VALUES(generate_series(4,5));
SELECT * FROM fewmore;
! -- nonsense that seems to be allowed
UPDATE fewmore SET data = generate_series(4,9);
-- SRFs are not allowed in RETURNING
INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
-- nor aggregate arguments
SELECT count(generate_series(1,3)) FROM few;
! -- nor proper VALUES
VALUES(1, generate_series(1,2));
-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
--- 68,81 ----
INSERT INTO fewmore VALUES(generate_series(4,5));
SELECT * FROM fewmore;
! -- SRFs are not allowed in UPDATE (they once were, but it was nonsense)
UPDATE fewmore SET data = generate_series(4,9);
-- SRFs are not allowed in RETURNING
INSERT INTO fewmore VALUES(1) RETURNING generate_series(1,3);
-- nor aggregate arguments
SELECT count(generate_series(1,3)) FROM few;
! -- nor standalone VALUES (but surely this is a bug?)
VALUES(1, generate_series(1,2));
-- DISTINCT ON is evaluated before tSRF evaluation if SRF is not
On September 13, 2016 9:07:35 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
Attached is a significantly updated patch series (see the mail one up
for details about what this is, I don't want to quote it in its
entirety).
I've reviewed the portions of 0005 that have to do with making the parser
mark queries with hasTargetSRF. The code as you had it was wrong because
it would set the flag as a consequence of SRFs in function RTEs, which
we don't want.
I'd taken it more as indicating the possibility that there's an SRF than as a guarantee so far. There might be cases where the planner removes the SRF during folding or such. Makes sense to make it more accurate.
It seemed to me that the best way to fix that was to rely
on the parser's p_expr_kind mechanism to tell which part of the query
we're in, whereupon we might as well make the parser act more like it does
for aggregates and window functions and give a suitable error at parse
time for misplaced SRFs.
That's a nice improvement. The execution-time errors are ugly.
I also renamed the flag to hasTargetSRFs, which is more parallel to
hasAggs and hasWindowFuncs, and made some effort to use it in place
of expression_returns_set() searches.
I'd like to go ahead and push this, since it's a necessary prerequisite
for either approach we might adopt for the rest of the patch series,
and the improved error reporting and avoidance of expensive
expression_returns_set searches make it a win IMO even if we were not
planning to do anything more with SRFs.
Can't look at the code just now, on my way to the airport for pgopen, but the idea sounds good to me.
Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Andres Freund <andres@anarazel.de> writes:
On September 13, 2016 9:07:35 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I'd like to go ahead and push this, since it's a necessary prerequisite
for either approach we might adopt for the rest of the patch series,
and the improved error reporting and avoidance of expensive
expression_returns_set searches make it a win IMO even if we were not
planning to do anything more with SRFs.
Can't look at the code just now, on my way to the airport for pgopen, but the idea sounds good to me.
OK, I went ahead and pushed it. We can tweak later if needed.
regards, tom lane
On 2016-09-13 12:07:35 -0400, Tom Lane wrote:
diff --git a/src/backend/optimizer/plan/analyzejoins.c b/src/backend/optimizer/plan/analyzejoins.c
index e28a8dc..74e4245 100644
--- a/src/backend/optimizer/plan/analyzejoins.c
+++ b/src/backend/optimizer/plan/analyzejoins.c
@@ -650,6 +650,11 @@ rel_is_distinct_for(PlannerInfo *root, R
 bool
 query_supports_distinctness(Query *query)
 {
+	/* we don't cope with SRFs, see comment below */
+	if (query->hasTargetSRFs)
+		return false;
+
+	/* check for features we can prove distinctness with */
 	if (query->distinctClause != NIL ||
 		query->groupClause != NIL ||
 		query->groupingSets != NIL ||
@@ -695,7 +700,7 @@ query_is_distinct_for(Query *query, List
 	 * specified columns, since those must be evaluated before de-duplication;
 	 * but it doesn't presently seem worth the complication to check that.)
 	 */
-	if (expression_returns_set((Node *) query->targetList))
+	if (query->hasTargetSRFs)
 		return false;
Maybe make this hasTargetSRFs && expression_returns_set()? The SRF could
have been optimized away. (Oh, I see you recheck below. Forget that then).
@@ -1419,8 +1419,8 @@ is_simple_subquery(Query *subquery, Rang
 		return false;

 	/*
-	 * Can't pull up a subquery involving grouping, aggregation, sorting,
-	 * limiting, or WITH.  (XXX WITH could possibly be allowed later)
+	 * Can't pull up a subquery involving grouping, aggregation, SRFs,
+	 * sorting, limiting, or WITH.  (XXX WITH could possibly be allowed later)
 	 *
 	 * We also don't pull up a subquery that has explicit FOR UPDATE/SHARE
 	 * clauses, because pullup would cause the locking to occur semantically
@@ -1430,6 +1430,7 @@ is_simple_subquery(Query *subquery, Rang
 	 */
 	if (subquery->hasAggs ||
 		subquery->hasWindowFuncs ||
+		subquery->hasTargetSRFs ||
 		subquery->groupClause ||
 		subquery->groupingSets ||
 		subquery->havingQual ||
@@ -1543,15 +1544,6 @@ is_simple_subquery(Query *subquery, Rang
 	}

 	/*
- * Don't pull up a subquery that has any set-returning functions in its
- * targetlist. Otherwise we might well wind up inserting set-returning
- * functions into places where they mustn't go, such as quals of higher
- * queries. This also ensures deletion of an empty jointree is valid.
- */
- if (expression_returns_set((Node *) subquery->targetList))
- return false;
I don't quite understand parts of the comment you removed here. What
does "This also ensures deletion of an empty jointree is valid." mean?
Looks good, except that you didn't adopt the hunk adjusting
src/backend/executor/README, which still seems to read:
We disallow set-returning functions in the targetlist of SELECT FOR UPDATE,
so as to ensure that at most one tuple can be returned for any particular
set of scan tuples. Otherwise we'd get duplicates due to the original
query returning the same set of scan tuples multiple times. (Note: there
is no explicit prohibition on SRFs in UPDATE, but the net effect will be
that only the first result row of an SRF counts, because all subsequent
rows will result in attempts to re-update an already updated target row.
This is historical behavior and seems not worth changing.)
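The "only the first result row of an SRF counts" behavior the README describes can be modeled in miniature. This is a toy Python sketch of the idea, not PostgreSQL internals; the executor's actual mechanism is skipping re-updates of a target row already updated by the current command:

```python
# Toy model (not PostgreSQL internals) of the README's claim: with an SRF in
# an UPDATE's targetlist, each scan tuple yields several result rows, but the
# executor skips attempts to re-update a row already updated by this command,
# so only the first SRF output per target row sticks.

def run_update_with_srf(table, srf_values):
    """table: dict of row id -> value; srf_values: the values the tlist SRF
    emits per scan tuple.  Returns the table after the UPDATE."""
    already_updated = set()
    for row_id in list(table):
        for v in srf_values:               # one result row per SRF output
            if row_id in already_updated:  # re-update attempt: skipped
                continue
            table[row_id] = v
            already_updated.add(row_id)
    return table

t = run_update_with_srf({1: 0, 2: 0}, srf_values=[10, 20, 30])
print(t)  # {1: 10, 2: 10} -- only the first SRF row counts per target row
```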
Regards,
Andres
Andres Freund <andres@anarazel.de> writes:
On 2016-09-13 12:07:35 -0400, Tom Lane wrote:
/*
- * Don't pull up a subquery that has any set-returning functions in its
- * targetlist. Otherwise we might well wind up inserting set-returning
- * functions into places where they mustn't go, such as quals of higher
- * queries. This also ensures deletion of an empty jointree is valid.
- */
- if (expression_returns_set((Node *) subquery->targetList))
- return false;
I don't quite understand parts of the comment you removed here. What
does "This also ensures deletion of an empty jointree is valid." mean?
TBH, I don't remember what that was about anymore. Whatever it was might
not apply now, anyway. If there was something to it, maybe we'll
rediscover it while we're fooling with tSRFs, and then we can insert a
less cryptic comment.
Looks good, except that you didn't adopt the hunk adjusting
src/backend/executor/README, which still seems to read:
Ah, I missed that there was anything to change docs-wise. Will fix.
Thanks for looking it over!
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
Anyway I'll draft a prototype and then we can compare.
Ok, cool.
Here's a draft patch that is just meant to investigate what the planner
changes might look like if we do it in the pipelined-result way.
Accordingly, I didn't touch the executor, but just had it emit regular
Result nodes for SRF-execution steps. However, the SRFs are all
guaranteed to appear at top level of their respective tlists, so that
those Results could be replaced with something that works like
nodeFunctionscan.
A difficulty with this restriction is that if you have a query like
"select f1, generate_series(1,2) / 10 from tab" then you end up with both
a SRF-executing Result and a separate scalar-projection Result above it,
because the division-by-ten has to happen in a separate plan level.
The planner's notions about the cost of Result make it think that this is
quite expensive --- mainly because the upper Result will be iterated once
per SRF output row, so that you get hit with cpu_tuple_cost per output row.
And that in turn leads it to change plans in one or two cases in the
regression tests. Maybe that's fine. I'm worried though that it's right,
i.e. that this will be unduly expensive. So I'm kind of tempted to define the
SRF-executing node as acting more like, say, Agg or WindowFunc, in that
it has a list of SRFs to execute and then it has the ability to project a
scalar tlist on top of those results. That would likely save some cycles
at execution, and it would also eliminate a few of the planner warts seen
below, like the rule about not pushing a new scalar tlist down onto a
SRF-executing Result. I'd have to rewrite split_pathtarget_at_srfs(),
because it'd be implementing quite different rules about how to refactor
targetlists, but that's not a big problem.
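The targetlist-splitting idea above can be sketched in miniature. This is a toy Python model with hypothetical names (split_at_srfs, "ref" nodes), not the planner's actual split_pathtarget_at_srfs or its data structures; it just shows how a scalar operation over an SRF gets refactored into a lower SRF-executing level plus an upper scalar-projection level:

```python
# Toy model of splitting one targetlist at SRF boundaries: every ("srf", ...)
# node must end up at top level of the lower targetlist, with remaining scalar
# work applied to ("ref", i) column references in the upper targetlist.

def contains_srf(expr):
    """Recursively check whether an expression tree contains an SRF call."""
    if isinstance(expr, tuple):
        return expr[0] == "srf" or any(contains_srf(a) for a in expr[1:])
    return False

def split_at_srfs(tlist):
    """Split a targetlist into (lower, upper) levels as described above."""
    lower, upper = [], []
    for expr in tlist:
        if isinstance(expr, tuple) and expr[0] != "srf" and contains_srf(expr):
            # scalar op over an SRF: emit the SRF below, the op above
            op, srf, arg = expr            # assume shape (op, srf_expr, arg)
            lower.append(srf)
            upper.append((op, ("ref", len(lower) - 1), arg))
        else:
            lower.append(expr)
            upper.append(("ref", len(lower) - 1))
    return lower, upper

# "SELECT f1, generate_series(1,2) / 10 FROM tab"
tlist = ["f1", ("/", ("srf", "generate_series", 1, 2), 10)]
lower, upper = split_at_srfs(tlist)
print(lower)  # ['f1', ('srf', 'generate_series', 1, 2)]
print(upper)  # [('ref', 0), ('/', ('ref', 1), 10)]
```

The lower list is what the SRF-executing Result would compute; the upper list is the separate scalar-projection Result stacked on top of it.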
On the whole I'm pretty pleased with this approach, at least from the
point of view of the planner. The net addition of planner code is
smaller than what you had, and though I'm no doubt biased, I think this
version is much cleaner. Also, though this patch doesn't address exactly
how we might do it, it's fairly clear that it'd be possible to allow
FDWs and CustomScans to implement SRF execution, eg pushing a SRF down to
a foreign server, in a reasonably straightforward extension of the
existing upper-pathification hooks. If we go with the lateral function
RTE approach, that's going to be somewhere between hard and impossible.
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
regards, tom lane
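For reference, the rowcount/cost scaling that the patch below relocates out of grouping_planner amounts to the following arithmetic. This is a toy Python restatement, not PostgreSQL code; the constant is the default value of the cpu_tuple_cost GUC:

```python
# Toy arithmetic for the old tlist-SRF rowcount scaling: nested SRFs multiply
# their row estimates (tlist_rows), and each extra emitted tuple is charged
# half of cpu_tuple_cost on top of the path's existing total cost.

CPU_TUPLE_COST = 0.01  # PostgreSQL's default cpu_tuple_cost setting

def scale_path_for_srfs(rows, total_cost, srf_row_estimates):
    """Return (rows, total_cost) after accounting for targetlist SRFs whose
    individual row estimates are given; their product is tlist_rows."""
    tlist_rows = 1.0
    for n in srf_row_estimates:
        tlist_rows *= n
    if tlist_rows > 1:
        # charge for the added tuples before scaling the row estimate
        total_cost += rows * (tlist_rows - 1) * CPU_TUPLE_COST / 2
        rows *= tlist_rows
    return rows, total_cost

# e.g. a 1000-row path whose tlist nests two SRFs estimated at 1000 rows each
rows, cost = scale_path_for_srfs(1000, 50.0, [1000, 1000])
```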
Attachment: put-srfs-in-separate-result-nodes-1.patch (text/x-diff; charset=us-ascii)
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 7e092d7..9052273 100644
*** a/src/backend/nodes/outfuncs.c
--- b/src/backend/nodes/outfuncs.c
*************** _outProjectionPath(StringInfo str, const
*** 1817,1822 ****
--- 1817,1823 ----
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
+ WRITE_BOOL_FIELD(srfpp);
}
static void
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 47158f6..7c59c3d 100644
*** a/src/backend/optimizer/plan/createplan.c
--- b/src/backend/optimizer/plan/createplan.c
*************** create_projection_plan(PlannerInfo *root
*** 1421,1428 ****
Plan *subplan;
List *tlist;
! /* Since we intend to project, we don't need to constrain child tlist */
! subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
--- 1421,1441 ----
Plan *subplan;
List *tlist;
! /*
! * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
! * don't bother with it, just make a Result with no input. This avoids an
! * extra Result plan node when doing "SELECT srf()". Depending on what we
! * decide about the desired plan structure for SRF-expanding nodes, this
! * optimization might have to go away, and in any case it'll probably look
! * a good bit different.
! */
! if (IsA(best_path->subpath, ResultPath) &&
! ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
! ((ResultPath *) best_path->subpath)->quals == NIL)
! subplan = NULL;
! else
! /* Since we intend to project, we don't need to constrain child tlist */
! subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
*************** create_projection_plan(PlannerInfo *root
*** 1441,1448 ****
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
! if (is_projection_capable_path(best_path->subpath) ||
! tlist_same_exprs(tlist, subplan->targetlist))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
--- 1454,1462 ----
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
! if (!best_path->srfpp &&
! (is_projection_capable_path(best_path->subpath) ||
! tlist_same_exprs(tlist, subplan->targetlist)))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
*************** is_projection_capable_path(Path *path)
*** 6185,6190 ****
--- 6199,6215 ----
* projection to its dummy path.
*/
return IS_DUMMY_PATH(path);
+ case T_Result:
+
+ /*
+ * If the path is doing SRF evaluation, claim it can't project, so
+ * we don't jam a new tlist into it and thereby break the property
+ * that the SRFs appear at top level.
+ */
+ if (IsA(path, ProjectionPath) &&
+ ((ProjectionPath *) path)->srfpp)
+ return false;
+ break;
default:
break;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f657ffc..8fff294 100644
*** a/src/backend/optimizer/plan/planner.c
--- b/src/backend/optimizer/plan/planner.c
*************** static List *make_pathkeys_for_window(Pl
*** 153,158 ****
--- 153,160 ----
static PathTarget *make_sort_input_target(PlannerInfo *root,
PathTarget *final_target,
bool *have_postponed_srfs);
+ static void adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs);
/*****************************************************************************
*************** grouping_planner(PlannerInfo *root, bool
*** 1440,1447 ****
int64 count_est = 0;
double limit_tuples = -1.0;
bool have_postponed_srfs = false;
- double tlist_rows;
PathTarget *final_target;
RelOptInfo *current_rel;
RelOptInfo *final_rel;
ListCell *lc;
--- 1442,1450 ----
int64 count_est = 0;
double limit_tuples = -1.0;
bool have_postponed_srfs = false;
PathTarget *final_target;
+ List *final_targets;
+ List *final_targets_contain_srfs;
RelOptInfo *current_rel;
RelOptInfo *final_rel;
ListCell *lc;
*************** grouping_planner(PlannerInfo *root, bool
*** 1504,1509 ****
--- 1507,1516 ----
/* Also extract the PathTarget form of the setop result tlist */
final_target = current_rel->cheapest_total_path->pathtarget;
+ /* The setop result tlist couldn't contain any SRFs */
+ Assert(!parse->hasTargetSRFs);
+ final_targets = final_targets_contain_srfs = NIL;
+
/*
* Can't handle FOR [KEY] UPDATE/SHARE here (parser should have
* checked already, but let's make sure).
*************** grouping_planner(PlannerInfo *root, bool
*** 1529,1536 ****
--- 1536,1549 ----
{
/* No set operations, do regular planning */
PathTarget *sort_input_target;
+ List *sort_input_targets;
+ List *sort_input_targets_contain_srfs;
PathTarget *grouping_target;
+ List *grouping_targets;
+ List *grouping_targets_contain_srfs;
PathTarget *scanjoin_target;
+ List *scanjoin_targets;
+ List *scanjoin_targets_contain_srfs;
bool have_grouping;
AggClauseCosts agg_costs;
WindowFuncLists *wflists = NULL;
*************** grouping_planner(PlannerInfo *root, bool
*** 1781,1788 ****
scanjoin_target = grouping_target;
/*
! * Forcibly apply scan/join target to all the Paths for the scan/join
! * rel.
*
* In principle we should re-run set_cheapest() here to identify the
* cheapest path, but it seems unlikely that adding the same tlist
--- 1794,1843 ----
scanjoin_target = grouping_target;
/*
! * If there are any SRFs in the targetlist, we must separate each of
! * these PathTargets into SRF-computing and SRF-free targets. Replace
! * each of the named targets with a SRF-free version, and remember the
! * list of additional projection steps we need to add afterwards.
! */
! if (parse->hasTargetSRFs)
! {
! /* final_target doesn't recompute any SRFs in sort_input_target */
! split_pathtarget_at_srfs(root, final_target, sort_input_target,
! &final_targets,
! &final_targets_contain_srfs);
! final_target = (PathTarget *) linitial(final_targets);
! Assert(!linitial_int(final_targets_contain_srfs));
! /* likewise for sort_input_target vs. grouping_target */
! split_pathtarget_at_srfs(root, sort_input_target, grouping_target,
! &sort_input_targets,
! &sort_input_targets_contain_srfs);
! sort_input_target = (PathTarget *) linitial(sort_input_targets);
! Assert(!linitial_int(sort_input_targets_contain_srfs));
! /* likewise for grouping_target vs. scanjoin_target */
! split_pathtarget_at_srfs(root, grouping_target, scanjoin_target,
! &grouping_targets,
! &grouping_targets_contain_srfs);
! grouping_target = (PathTarget *) linitial(grouping_targets);
! Assert(!linitial_int(grouping_targets_contain_srfs));
! /* scanjoin_target will not have any SRFs precomputed for it */
! split_pathtarget_at_srfs(root, scanjoin_target, NULL,
! &scanjoin_targets,
! &scanjoin_targets_contain_srfs);
! scanjoin_target = (PathTarget *) linitial(scanjoin_targets);
! Assert(!linitial_int(scanjoin_targets_contain_srfs));
! }
! else
! {
! /* initialize lists, just to keep compiler quiet */
! final_targets = final_targets_contain_srfs = NIL;
! sort_input_targets = sort_input_targets_contain_srfs = NIL;
! grouping_targets = grouping_targets_contain_srfs = NIL;
! scanjoin_targets = scanjoin_targets_contain_srfs = NIL;
! }
!
! /*
! * Forcibly apply SRF-free scan/join target to all the Paths for the
! * scan/join rel.
*
* In principle we should re-run set_cheapest() here to identify the
* cheapest path, but it seems unlikely that adding the same tlist
*************** grouping_planner(PlannerInfo *root, bool
*** 1853,1858 ****
--- 1908,1919 ----
current_rel->partial_pathlist = NIL;
}
+ /* Now fix things up if scan/join target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ scanjoin_targets,
+ scanjoin_targets_contain_srfs);
+
/*
* Save the various upper-rel PathTargets we just computed into
* root->upper_targets[]. The core code doesn't use this, but it
*************** grouping_planner(PlannerInfo *root, bool
*** 1877,1882 ****
--- 1938,1948 ----
&agg_costs,
rollup_lists,
rollup_groupclauses);
+ /* Fix things up if grouping_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ grouping_targets,
+ grouping_targets_contain_srfs);
}
/*
*************** grouping_planner(PlannerInfo *root, bool
*** 1892,1897 ****
--- 1958,1968 ----
tlist,
wflists,
activeWindows);
+ /* Fix things up if sort_input_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ sort_input_targets,
+ sort_input_targets_contain_srfs);
}
/*
*************** grouping_planner(PlannerInfo *root, bool
*** 1920,1959 ****
final_target,
have_postponed_srfs ? -1.0 :
limit_tuples);
! }
!
! /*
! * If there are set-returning functions in the tlist, scale up the output
! * rowcounts of all surviving Paths to account for that. Note that if any
! * SRFs appear in sorting or grouping columns, we'll have underestimated
! * the numbers of rows passing through earlier steps; but that's such a
! * weird usage that it doesn't seem worth greatly complicating matters to
! * account for it.
! */
! if (parse->hasTargetSRFs)
! tlist_rows = tlist_returns_set_rows(tlist);
! else
! tlist_rows = 1;
!
! if (tlist_rows > 1)
! {
! foreach(lc, current_rel->pathlist)
! {
! Path *path = (Path *) lfirst(lc);
!
! /*
! * We assume that execution costs of the tlist as such were
! * already accounted for. However, it still seems appropriate to
! * charge something more for the executor's general costs of
! * processing the added tuples. The cost is probably less than
! * cpu_tuple_cost, though, so we arbitrarily use half of that.
! */
! path->total_cost += path->rows * (tlist_rows - 1) *
! cpu_tuple_cost / 2;
!
! path->rows *= tlist_rows;
! }
! /* No need to run set_cheapest; we're keeping all paths anyway. */
}
/*
--- 1991,2001 ----
final_target,
have_postponed_srfs ? -1.0 :
limit_tuples);
! /* Fix things up if final_target contains SRFs */
! if (parse->hasTargetSRFs)
! adjust_paths_for_srfs(root, current_rel,
! final_targets,
! final_targets_contain_srfs);
}
/*
*************** get_cheapest_fractional_path(RelOptInfo
*** 5151,5156 ****
--- 5193,5301 ----
}
/*
+ * adjust_paths_for_srfs
+ * Fix up the Paths of the given upperrel to handle tSRFs properly.
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * modifies each Path of an upperrel that (might) compute any SRFs in its
+ * output tlist to insert appropriate projection steps.
+ *
+ * The given targets and targets_contain_srfs lists are from
+ * split_pathtarget_at_srfs(). We assume the existing Paths emit the first
+ * target in targets.
+ */
+ static void
+ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs)
+ {
+ ListCell *lc;
+
+ Assert(list_length(targets) == list_length(targets_contain_srfs));
+ Assert(!linitial_int(targets_contain_srfs));
+
+ /* If no SRFs appear at this plan level, nothing to do */
+ if (list_length(targets) == 1)
+ return;
+
+ /*
+ * Stack SRF-evaluation nodes atop each path for the rel.
+ *
+ * In principle we should re-run set_cheapest() here to identify the
+ * cheapest path, but it seems unlikely that adding the same tlist eval
+ * costs to all the paths would change that, so we don't bother. Instead,
+ * just assume that the cheapest-startup and cheapest-total paths remain
+ * so. (There should be no parameterized paths anymore, so we needn't
+ * worry about updating cheapest_parameterized_paths.)
+ */
+ foreach(lc, rel->pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ newpath = (Path *) apply_projection_to_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ lfirst(lc) = newpath;
+ if (subpath == rel->cheapest_startup_path)
+ rel->cheapest_startup_path = newpath;
+ if (subpath == rel->cheapest_total_path)
+ rel->cheapest_total_path = newpath;
+ }
+
+ /* Likewise for partial paths, if any */
+ foreach(lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ {
+ /* avoid apply_projection_to_path, in case of multiple refs */
+ newpath = (Path *) create_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ }
+ lfirst(lc) = newpath;
+ }
+ }
+
+ /*
* expression_planner
* Perform planner's transformations on a standalone expression.
*
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 663ffe0..0aa4339 100644
*** a/src/backend/optimizer/util/clauses.c
--- b/src/backend/optimizer/util/clauses.c
*************** static bool contain_agg_clause_walker(No
*** 99,105 ****
static bool get_agg_clause_costs_walker(Node *node,
get_agg_clause_costs_context *context);
static bool find_window_functions_walker(Node *node, WindowFuncLists *lists);
- static bool expression_returns_set_rows_walker(Node *node, double *count);
static bool contain_subplans_walker(Node *node, void *context);
static bool contain_mutable_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_walker(Node *node, void *context);
--- 99,104 ----
*************** find_window_functions_walker(Node *node,
*** 780,893 ****
/*
* expression_returns_set_rows
* Estimate the number of rows returned by a set-returning expression.
! * The result is 1 if there are no set-returning functions.
*
! * We use the product of the rowcount estimates of all the functions in
! * the given tree (this corresponds to the behavior of ExecMakeFunctionResult
! * for nested set-returning functions).
*
* Note: keep this in sync with expression_returns_set() in nodes/nodeFuncs.c.
*/
double
expression_returns_set_rows(Node *clause)
{
! double result = 1;
!
! (void) expression_returns_set_rows_walker(clause, &result);
! return clamp_row_est(result);
! }
!
! static bool
! expression_returns_set_rows_walker(Node *node, double *count)
! {
! if (node == NULL)
! return false;
! if (IsA(node, FuncExpr))
{
! FuncExpr *expr = (FuncExpr *) node;
if (expr->funcretset)
! *count *= get_func_rows(expr->funcid);
}
! if (IsA(node, OpExpr))
{
! OpExpr *expr = (OpExpr *) node;
if (expr->opretset)
{
set_opfuncid(expr);
! *count *= get_func_rows(expr->opfuncid);
}
}
!
! /* Avoid recursion for some cases that can't return a set */
! if (IsA(node, Aggref))
! return false;
! if (IsA(node, WindowFunc))
! return false;
! if (IsA(node, DistinctExpr))
! return false;
! if (IsA(node, NullIfExpr))
! return false;
! if (IsA(node, ScalarArrayOpExpr))
! return false;
! if (IsA(node, BoolExpr))
! return false;
! if (IsA(node, SubLink))
! return false;
! if (IsA(node, SubPlan))
! return false;
! if (IsA(node, AlternativeSubPlan))
! return false;
! if (IsA(node, ArrayExpr))
! return false;
! if (IsA(node, RowExpr))
! return false;
! if (IsA(node, RowCompareExpr))
! return false;
! if (IsA(node, CoalesceExpr))
! return false;
! if (IsA(node, MinMaxExpr))
! return false;
! if (IsA(node, XmlExpr))
! return false;
!
! return expression_tree_walker(node, expression_returns_set_rows_walker,
! (void *) count);
! }
!
! /*
! * tlist_returns_set_rows
! * Estimate the number of rows returned by a set-returning targetlist.
! * The result is 1 if there are no set-returning functions.
! *
! * Here, the result is the largest rowcount estimate of any of the tlist's
! * expressions, not the product as you would get from naively applying
! * expression_returns_set_rows() to the whole tlist. The behavior actually
! * implemented by ExecTargetList produces a number of rows equal to the least
! * common multiple of the expression rowcounts, so that the product would be
! * a worst-case estimate that is typically not realistic. Taking the max as
! * we do here is a best-case estimate that might not be realistic either,
! * but it's probably closer for typical usages. We don't try to compute the
! * actual LCM because we're working with very approximate estimates, so their
! * LCM would be unduly noisy.
! */
! double
! tlist_returns_set_rows(List *tlist)
! {
! double result = 1;
! ListCell *lc;
!
! foreach(lc, tlist)
! {
! TargetEntry *tle = (TargetEntry *) lfirst(lc);
! double colresult;
!
! colresult = expression_returns_set_rows((Node *) tle->expr);
! if (result < colresult)
! result = colresult;
! }
! return result;
}
--- 779,815 ----
/*
* expression_returns_set_rows
* Estimate the number of rows returned by a set-returning expression.
! * The result is 1 if it's not a set-returning expression.
*
! * We should only examine the top-level function or operator; it used to be
! * appropriate to recurse, but not anymore. (Even if there are more SRFs in
! * the function's inputs, their multipliers are accounted for separately.)
*
* Note: keep this in sync with expression_returns_set() in nodes/nodeFuncs.c.
*/
double
expression_returns_set_rows(Node *clause)
{
! if (clause == NULL)
! return 1.0;
! if (IsA(clause, FuncExpr))
{
! FuncExpr *expr = (FuncExpr *) clause;
if (expr->funcretset)
! return clamp_row_est(get_func_rows(expr->funcid));
}
! if (IsA(clause, OpExpr))
{
! OpExpr *expr = (OpExpr *) clause;
if (expr->opretset)
{
set_opfuncid(expr);
! return clamp_row_est(get_func_rows(expr->opfuncid));
}
}
! return 1.0;
}
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index abb7507..5a7891f 100644
*** a/src/backend/optimizer/util/pathnode.c
--- b/src/backend/optimizer/util/pathnode.c
*************** create_projection_path(PlannerInfo *root
*** 2227,2232 ****
--- 2227,2235 ----
(cpu_tuple_cost + target->cost.per_tuple) * subpath->rows;
}
+ /* Assume no SRFs around */
+ pathnode->srfpp = false;
+
return pathnode;
}
*************** apply_projection_to_path(PlannerInfo *ro
*** 2320,2325 ****
--- 2323,2400 ----
}
/*
+ * create_srf_projection_path
+ * Creates a pathnode that represents performing a SRF projection.
+ *
+ * For the moment, we just use ProjectionPath for this, and generate a
+ * Result plan node. That's likely to change.
+ *
+ * 'rel' is the parent relation associated with the result
+ * 'subpath' is the path representing the source of data
+ * 'target' is the PathTarget to be computed
+ */
+ ProjectionPath *
+ create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target)
+ {
+ ProjectionPath *pathnode = makeNode(ProjectionPath);
+ double tlist_rows;
+ ListCell *lc;
+
+ pathnode->path.pathtype = T_Result;
+ pathnode->path.parent = rel;
+ pathnode->path.pathtarget = target;
+ /* For now, assume we are above any joins, so no parameterization */
+ pathnode->path.param_info = NULL;
+ pathnode->path.parallel_aware = false;
+ pathnode->path.parallel_safe = rel->consider_parallel &&
+ subpath->parallel_safe &&
+ is_parallel_safe(root, (Node *) target->exprs);
+ pathnode->path.parallel_workers = subpath->parallel_workers;
+ /* Projection does not change the sort order */
+ pathnode->path.pathkeys = subpath->pathkeys;
+
+ pathnode->subpath = subpath;
+
+ /* Always need the Result node */
+ pathnode->dummypp = false;
+ pathnode->srfpp = true;
+
+ /*
+ * Estimate number of rows produced by SRFs for each row of input; if
+ * there's more than one in this node, use the maximum.
+ */
+ tlist_rows = 1;
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+ double itemrows;
+
+ itemrows = expression_returns_set_rows(node);
+ if (tlist_rows < itemrows)
+ tlist_rows = itemrows;
+ }
+
+ /*
+ * In addition to the cost of evaluating the tlist, charge cpu_tuple_cost
+ * per input row, and half of cpu_tuple_cost for each added output row.
+ * This is slightly bizarre maybe, but it's what 9.6 did; we may revisit
+ * this estimate later.
+ */
+ pathnode->path.rows = subpath->rows * tlist_rows;
+ pathnode->path.startup_cost = subpath->startup_cost +
+ target->cost.startup;
+ pathnode->path.total_cost = subpath->total_cost +
+ target->cost.startup +
+ (cpu_tuple_cost + target->cost.per_tuple) * subpath->rows +
+ (pathnode->path.rows - subpath->rows) * cpu_tuple_cost / 2;
+
+ return pathnode;
+ }
+
+ /*
* create_sort_path
* Creates a pathnode that represents performing an explicit sort.
*
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index 68096b3..ede7bb9 100644
*** a/src/backend/optimizer/util/tlist.c
--- b/src/backend/optimizer/util/tlist.c
***************
*** 16,24 ****
--- 16,35 ----
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
+ #include "optimizer/cost.h"
#include "optimizer/tlist.h"
+ typedef struct
+ {
+ List *nextlevel_tlist;
+ bool nextlevel_contains_srfs;
+ } split_pathtarget_context;
+
+ static bool split_pathtarget_walker(Node *node,
+ split_pathtarget_context *context);
+
+
/*****************************************************************************
* Target list creation and searching utilities
*****************************************************************************/
*************** apply_pathtarget_labeling_to_tlist(List
*** 759,761 ****
--- 770,960 ----
i++;
}
}
+
+ /*
+ * split_pathtarget_at_srfs
+ * Split given PathTarget into multiple levels to position SRFs safely
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * creates appropriate PathTarget(s) for each level.
+ *
+ * As an example, consider the tlist expression
+ * x + srf1(srf2(y + z))
+ * This expression should appear as-is in the top PathTarget, but below that
+ * we must have a PathTarget containing
+ * x, srf1(srf2(y + z))
+ * and below that, another PathTarget containing
+ * x, srf2(y + z)
+ * and below that, another PathTarget containing
+ * x, y, z
+ * When these tlists are processed by setrefs.c, subexpressions that match
+ * output expressions of the next lower tlist will be replaced by Vars,
+ * so that what the executor gets are tlists looking like
+ * Var1 + Var2
+ * Var1, srf1(Var2)
+ * Var1, srf2(Var2 + Var3)
+ * x, y, z
+ * which satisfy the desired property.
+ *
+ * In some cases, a SRF has already been evaluated in some previous plan level
+ * and we shouldn't expand it again (that is, what we see in the target is
+ * already meant as a reference to a lower subexpression). So, don't expand
+ * any tlist expressions that appear in input_target, if that's not NULL.
+ * In principle we might need to consider matching subexpressions to
+ * input_target, but for now it's not necessary because only ORDER BY and
+ * GROUP BY expressions are at issue and those will look the same at both
+ * plan levels.
+ *
+ * The outputs of this function are two parallel lists, one a list of
+ * PathTargets and the other an integer list of bool flags indicating
+ * whether the corresponding PathTarget contains any top-level SRFs.
+ * The lists are given in the order they'd need to be evaluated in, with
+ * the "lowest" PathTarget first. So the last list entry is always the
+ * originally given PathTarget, and any entries before it indicate evaluation
+ * levels that must be inserted below it. The first list entry must not
+ * contain any SRFs, since it will typically be attached to a plan node
+ * that cannot evaluate SRFs.
+ *
+ * Note: using a list for the flags may seem like overkill, since there
+ * are only a few possible patterns for which levels contain SRFs.
+ * But this representation decouples callers from that knowledge.
+ */
+ void
+ split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs)
+ {
+ /* Initialize output lists to empty; we prepend to them within loop */
+ *targets = *targets_contain_srfs = NIL;
+
+ /* Loop to consider each level of PathTarget we need */
+ for (;;)
+ {
+ bool target_contains_srfs = false;
+ split_pathtarget_context context;
+ ListCell *lc;
+
+ context.nextlevel_tlist = NIL;
+ context.nextlevel_contains_srfs = false;
+
+ /*
+ * Scan the PathTarget looking for SRFs. Top-level SRFs are handled
+ * in this loop, ones lower down are found by split_pathtarget_walker.
+ */
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+
+ /*
+ * A tlist item that is just a reference to an expression already
+ * computed in input_target need not be evaluated here, so just
+ * make sure it's included in the next PathTarget.
+ */
+ if (input_target && list_member(input_target->exprs, node))
+ {
+ context.nextlevel_tlist = lappend(context.nextlevel_tlist, node);
+ continue;
+ }
+
+ /* Else, we need to compute this expression. */
+ if (IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset)
+ {
+ /* Top-level SRF: it can be evaluated here */
+ target_contains_srfs = true;
+ /* Recursively examine SRF's inputs */
+ split_pathtarget_walker((Node *) ((FuncExpr *) node)->args,
+ &context);
+ }
+ else if (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset)
+ {
+ /* Same as above, but for set-returning operator */
+ target_contains_srfs = true;
+ split_pathtarget_walker((Node *) ((OpExpr *) node)->args,
+ &context);
+ }
+ else
+ {
+ /* Not a top-level SRF, so recursively examine expression */
+ split_pathtarget_walker(node, &context);
+ }
+ }
+
+ /*
+ * Prepend current target and associated flag to output lists.
+ */
+ *targets = lcons(target, *targets);
+ *targets_contain_srfs = lcons_int(target_contains_srfs,
+ *targets_contain_srfs);
+
+ /*
+ * Done if we found no SRFs anywhere in this target; the tentative
+ * tlist we built for the next level can be discarded.
+ */
+ if (!target_contains_srfs && !context.nextlevel_contains_srfs)
+ break;
+
+ /*
+ * Else build the next PathTarget down, and loop back to process it.
+ * Copy the subexpressions to make sure PathTargets don't share
+ * substructure (might be unnecessary, but be safe); and drop any
+ * duplicate entries in the sub-targetlist.
+ */
+ target = create_empty_pathtarget();
+ add_new_columns_to_pathtarget(target,
+ (List *) copyObject(context.nextlevel_tlist));
+ set_pathtarget_cost_width(root, target);
+ }
+ }
+
+ /* Recursively examine expressions for split_pathtarget_at_srfs */
+ static bool
+ split_pathtarget_walker(Node *node, split_pathtarget_context *context)
+ {
+ if (node == NULL)
+ return false;
+ if (IsA(node, Var) ||
+ IsA(node, PlaceHolderVar) ||
+ IsA(node, Aggref) ||
+ IsA(node, GroupingFunc) ||
+ IsA(node, WindowFunc))
+ {
+ /*
+ * Pass these items down to the child plan level for evaluation.
+ *
+ * We assume that these constructs cannot contain any SRFs (if one
+ * does, there will be an executor failure from a misplaced SRF).
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+
+ /* Having done that, we need not examine their sub-structure */
+ return false;
+ }
+ else if ((IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset) ||
+ (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset))
+ {
+ /*
+ * Pass SRFs down to the child plan level for evaluation, and mark
+ * that it contains SRFs. (We are not at top level of our own tlist,
+ * else this would have been picked up by split_pathtarget_at_srfs.)
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+ context->nextlevel_contains_srfs = true;
+
+ /* Inputs to the SRF need not be considered here, so we're done */
+ return false;
+ }
+
+ /*
+ * Otherwise, the node is evaluatable within the current PathTarget, so
+ * recurse to examine its inputs.
+ */
+ return expression_tree_walker(node, split_pathtarget_walker,
+ (void *) context);
+ }
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 2709cc7..0cb42b7 100644
*** a/src/include/nodes/relation.h
--- b/src/include/nodes/relation.h
*************** typedef struct ProjectionPath
*** 1293,1298 ****
--- 1293,1299 ----
Path path;
Path *subpath; /* path representing input source */
bool dummypp; /* true if no separate Result is needed */
+ bool srfpp; /* true if SRFs are being evaluated here */
} ProjectionPath;
/*
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 9abef37..1d0fa30 100644
*** a/src/include/optimizer/clauses.h
--- b/src/include/optimizer/clauses.h
*************** extern bool contain_window_function(Node
*** 54,60 ****
extern WindowFuncLists *find_window_functions(Node *clause, Index maxWinRef);
extern double expression_returns_set_rows(Node *clause);
- extern double tlist_returns_set_rows(List *tlist);
extern bool contain_subplans(Node *clause);
--- 54,59 ----
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 71d9154..c452927 100644
*** a/src/include/optimizer/pathnode.h
--- b/src/include/optimizer/pathnode.h
*************** extern Path *apply_projection_to_path(Pl
*** 144,149 ****
--- 144,153 ----
RelOptInfo *rel,
Path *path,
PathTarget *target);
+ extern ProjectionPath *create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target);
extern SortPath *create_sort_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index 0d745a0..edd1e80 100644
*** a/src/include/optimizer/tlist.h
--- b/src/include/optimizer/tlist.h
*************** extern void add_column_to_pathtarget(Pat
*** 61,66 ****
--- 61,69 ----
extern void add_new_column_to_pathtarget(PathTarget *target, Expr *expr);
extern void add_new_columns_to_pathtarget(PathTarget *target, List *exprs);
extern void apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target);
+ extern void split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs);
/* Convenience macro to get a PathTarget with valid cost/width fields */
#define create_pathtarget(root, tlist) \
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index 45208a6..e3804e9 100644
*** a/src/test/regress/expected/aggregates.out
--- b/src/test/regress/expected/aggregates.out
*************** explain (costs off)
*** 823,829 ****
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
-> Result
! (7 rows)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
max | g
--- 823,830 ----
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
-> Result
! -> Result
! (8 rows)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
max | g
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 9c3eecf..a7ded3a 100644
*** a/src/test/regress/expected/limit.out
--- b/src/test/regress/expected/limit.out
*************** select currval('testseq');
*** 208,220 ****
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
! QUERY PLAN
! ----------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
! -> Index Scan using tenk1_unique2 on public.tenk1
Output: unique1, unique2, generate_series(1, 10)
! (4 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
--- 208,222 ----
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
! QUERY PLAN
! -------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
! -> Result
Output: unique1, unique2, generate_series(1, 10)
! -> Index Scan using tenk1_unique2 on public.tenk1
! Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
! (6 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4..9634fa1 100644
*** a/src/test/regress/expected/rangefuncs.out
--- b/src/test/regress/expected/rangefuncs.out
*************** SELECT *,
*** 1995,2006 ****
END)
FROM
(VALUES (1,''), (2,'0000000049404'), (3,'FROM 10000000876')) v(id, str);
! id | str | lower
! ----+------------------+------------------
! 1 | |
! 2 | 0000000049404 | 49404
! 3 | FROM 10000000876 | from 10000000876
! (3 rows)
-- check whole-row-Var handling in nested lateral functions (bug #11703)
create function extractq2(t int8_tbl) returns int8 as $$
--- 1995,2004 ----
END)
FROM
(VALUES (1,''), (2,'0000000049404'), (3,'FROM 10000000876')) v(id, str);
! id | str | lower
! ----+---------------+-------
! 2 | 0000000049404 | 49404
! (1 row)
-- check whole-row-Var handling in nested lateral functions (bug #11703)
create function extractq2(t int8_tbl) returns int8 as $$
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 0fc93d9..e76cb6b 100644
*** a/src/test/regress/expected/subselect.out
--- b/src/test/regress/expected/subselect.out
*************** select * from int4_tbl where
*** 807,830 ****
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
! QUERY PLAN
! ----------------------------------------------------------------
! Hash Semi Join
Output: o.f1
! Hash Cond: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
! -> Hash
Output: "ANY_subquery".f1, "ANY_subquery".g
-> Subquery Scan on "ANY_subquery"
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
! -> HashAggregate
! Output: i.f1, (generate_series(1, 2) / 10)
! Group Key: i.f1
! -> Seq Scan on public.int4_tbl i
! Output: i.f1
! (15 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
--- 807,834 ----
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
! QUERY PLAN
! -------------------------------------------------------------------
! Nested Loop Semi Join
Output: o.f1
! Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
! -> Materialize
Output: "ANY_subquery".f1, "ANY_subquery".g
-> Subquery Scan on "ANY_subquery"
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
! -> Result
! Output: i.f1, ((generate_series(1, 2)) / 10)
! -> Result
! Output: i.f1, generate_series(1, 2)
! -> HashAggregate
! Output: i.f1
! Group Key: i.f1
! -> Seq Scan on public.int4_tbl i
! Output: i.f1
! (19 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index e9bea41..4e87186 100644
*** a/src/test/regress/expected/tsrf.out
--- b/src/test/regress/expected/tsrf.out
*************** SELECT generate_series(1, generate_serie
*** 43,49 ****
-- srf, with two SRF arguments
SELECT generate_series(generate_series(1,3), generate_series(2, 4));
! ERROR: functions and operators can take at most one set argument
CREATE TABLE few(id int, dataa text, datab text);
INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
-- SRF output order of sorting is maintained, if SRF is not referenced
--- 43,58 ----
-- srf, with two SRF arguments
SELECT generate_series(generate_series(1,3), generate_series(2, 4));
! generate_series
! -----------------
! 1
! 2
! 2
! 3
! 3
! 4
! (6 rows)
!
CREATE TABLE few(id int, dataa text, datab text);
INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
-- SRF output order of sorting is maintained, if SRF is not referenced
Andres Freund <andres@anarazel.de> writes:
0003-Avoid-materializing-SRFs-in-the-FROM-list.patch
To avoid performance regressions from moving SRFM_ValuePerCall SRFs to
ROWS FROM, nodeFunctionscan.c needs to support not materializing
output.
I looked over this patch a bit.
In my present patch I've *ripped out* the support for materialization
in nodeFunctionscan.c entirely. That means that rescans referencing
volatile functions can change their behaviour (if a function is
rescanned, without having its parameters changed), and that native
backward scan support is gone. I don't think that's actually an issue.
I think you are wrong on this not being an issue: it is critical that
rescan deliver the same results as before, else for example having a
function RTE on the inside of a nestloop will give nonsensical/broken
results. I think what we'll have to do is allow the optimization of
skipping the tuplestore only when the function is declared nonvolatile.
(If it is, and it nonetheless gives different results on rescan, it's not
our fault if joins give haywire answers.) I'm okay with not supporting
backward scan, but wrong answers during rescan is a different animal
entirely.
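To make the materialization point concrete, here is a toy Python model (plain Python, not nodeFunctionscan itself; `make_function_scan` and `volatile_srf` are made-up names) of why the tuplestore keeps rescans consistent:

```python
# With materialization, the first scan fills a "tuplestore" and every
# rescan replays it, so even a volatile function yields identical rows
# on the inner side of a nestloop.  Without it, each rescan re-executes
# the function and may return different rows.
def make_function_scan(func, materialize):
    store = []

    def scan():
        if materialize:
            if not store:
                store.extend(func())  # first scan fills the tuplestore
            return list(store)        # rescans replay it
        return list(func())           # no tuplestore: re-execute each time

    return scan

def volatile_srf():
    # deterministic stand-in for a volatile SRF: each execution
    # produces a different set of rows
    volatile_srf.calls += 1
    return [volatile_srf.calls * 10 + i for i in range(2)]
volatile_srf.calls = 0

scan = make_function_scan(volatile_srf, materialize=True)
assert scan() == scan() == [10, 11]   # rescan gives the same answer

volatile_srf.calls = 0
scan = make_function_scan(volatile_srf, materialize=False)
assert scan() != scan()               # [10, 11] then [20, 21]: haywire joins
```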
Moreover, I think we'd all agreed that this effort needs to avoid any
not-absolutely-necessary semantics changes. This one is not only not
necessary, but it would result in subtle hard-to-detect breakage.
It's conceivable that we could allow the executor to be broken this way
and have the planner fix it by inserting a Material node when joining.
But I think it would be messy and would probably not perform as well as
an internal tuplestore --- for one thing, because the planner can't know
whether the function would return a tuplestore, making the external
materialization redundant.
Another idea is that we could extend the set of ExecInitNode flags
(EXEC_FLAG_REWIND etc) to tell child nodes whether they need to implement
rescan correctly in this sense; if they are not RHS children of nestloops,
and maybe one or two other cases, they don't. That would give another
route by which nodeFunctionscan could decide that it can skip
materialization in common cases.
regards, tom lane
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 2016-09-15 15:23:58 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
In my present patch I've *ripped out* the support for materialization
in nodeFunctionscan.c entirely. That means that rescans referencing
volatile functions can change their behaviour (if a function is
rescanned, without having its parameters changed), and that native
backward scan support is gone. I don't think that's actually an issue.

I think you are wrong on this not being an issue: it is critical that
rescan deliver the same results as before, else for example having a
function RTE on the inside of a nestloop will give nonsensical/broken
results.
I find that quite unconvincing. We quite freely re-evaluate functions in
the targetlist, even if they're volatile and/or SRFs.
If we implement tSRFs as pipeline nodes, we can "simply" default to the
never materializing behaviour there I guess.
Moreover, I think we'd all agreed that this effort needs to avoid any
not-absolutely-necessary semantics changes.
I don't agree with that. Adding pointless complications for niche
edge cases of niche features isn't worth it.
Another idea is that we could extend the set of ExecInitNode flags
(EXEC_FLAG_REWIND etc) to tell child nodes whether they need to implement
rescan correctly in this sense; if they are not RHS children of nestloops,
and maybe one or two other cases, they don't. That would give another
route by which nodeFunctionscan could decide that it can skip
materialization in common cases.
That's something I've wondered about too. Materializing if rescans are
required is quite acceptable, and probably rather rare in
practice. Seems not unlikely that that information would be valuable for
other node types too.
Regards,
Andres
Hi,
On 2016-09-14 19:28:25 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
Anyway I'll draft a prototype and then we can compare.
Ok, cool.
Here's a draft patch that is just meant to investigate what the planner
changes might look like if we do it in the pipelined-result way.
Nice.
A difficulty with this restriction is that if you have a query like
"select f1, generate_series(1,2) / 10 from tab" then you end up with both
a SRF-executing Result and a separate scalar-projection Result above it,
because the division-by-ten has to happen in a separate plan level.
Makes sense. I guess we could teach the SRF pipeline node to execute a
series of such steps. Hm. That makes me think of something:
Hm. One thing I wonder about with this approach, is how we're going to
handle something absurd like:
SELECT generate_series(1, generate_series(1, 2)), generate_series(1, generate_series(2,4));
I guess what we have to do here is
Step: generate_series(1,2), 1, 2, 4
Step: generate_series(1, Var(generate_series(1,2))), 1, 2, 4
Step: Var(generate_series(1, Var(generate_series(1,2)))), 1, generate_series(2, 4)
Step: Var(generate_series(1, Var(generate_series(1,2)))), generate_series(1, Var(generate_series(2, 4)))
But that'd still not have the same lockstepping behaviour, right? I'm
at a conference, and half-ill, so I might just be standing on my own brain
here.
The planner's notions about the cost of Result make it think that this is
quite expensive --- mainly because the upper Result will be iterated once
per SRF output row, so that you get hit with cpu_tuple_cost per output row.
And that in turn leads it to change plans in one or two cases in the
regression tests. Maybe that's fine. I'm worried, though, that it's right
that this will be unduly expensive. So I'm kind of tempted to define the
SRF-executing node as acting more like, say, Agg or WindowFunc, in that
it has a list of SRFs to execute and then it has the ability to project a
scalar tlist on top of those results.
Hah, was thinking the same above ;)
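For concreteness, a quick arithmetic sketch of that costing rule (the per-row charges mirror the formula in create_srf_projection_path in the draft patch; the row counts and target costs here are invented, not taken from any real plan):

```python
# Cost model from the draft patch's SRF-executing Result node: charge
# cpu_tuple_cost per input row, plus half of cpu_tuple_cost for each
# row the SRF adds on top of the input.
cpu_tuple_cost = 0.01  # PostgreSQL's default setting

def srf_result_cost(subpath_rows, subpath_total_cost, tlist_rows,
                    target_startup=0.0, target_per_tuple=0.0):
    out_rows = subpath_rows * tlist_rows
    total_cost = (subpath_total_cost
                  + target_startup
                  + (cpu_tuple_cost + target_per_tuple) * subpath_rows
                  + (out_rows - subpath_rows) * cpu_tuple_cost / 2)
    return out_rows, total_cost

# 100 input rows, an SRF emitting 3 rows apiece:
rows, cost = srf_result_cost(100, 10.0, 3)
assert rows == 300
# 10.0 input cost + 1.0 per-input-row charge + 1.0 for 200 added rows
assert abs(cost - 12.0) < 1e-9
```

This makes visible why the upper Result is where the pain is: the cpu_tuple_cost charge scales with the SRF's output rows, not the input rows.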
On the whole I'm pretty pleased with this approach, at least from the
point of view of the planner. The net addition of planner code is
smaller than what you had,
Not by much. But I do agree that there's some advantages here.
and though I'm no doubt biased, I think this
version is much cleaner.
Certainly seems a bit easier to extend and adjust behaviour. Not having
to deal with enforcing join order, and having fewer issues with
determining what to push where, is certainly advantageous. After all,
that was why I initially was thinking of this approach.
Also, though this patch doesn't address exactly
how we might do it, it's fairly clear that it'd be possible to allow
FDWs and CustomScans to implement SRF execution, eg pushing a SRF down to
a foreign server, in a reasonably straightforward extension of the
existing upper-pathification hooks. If we go with the lateral function
RTE approach, that's going to be somewhere between hard and impossible.
Hm. Not sure if there's that much point in doing that, but I agree that
the LATERAL approach adds more restrictions.
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
If you have something that's halfway recognizable, could you perhaps
post it?
Regards,
Andres
Andres Freund <andres@anarazel.de> writes:
Hm. One thing I wonder about with this approach, is how we're going to
handle something absurd like:
SELECT generate_series(1, generate_series(1, 2)), generate_series(1, generate_series(2,4));
The patch that I posted would run both the generate_series(1, 2) and
generate_series(2,4) calls in the same SRF node, forcing them to run in
lockstep, after which their results would be fed to the SRF node doing
the top-level SRFs. We could probably change it to run them in separate
nodes, but I don't see any principled way to decide which one goes first
(and in some variants of this example, it would matter). I think the
LATERAL approach would face exactly the same issues: how many LATERAL
nodes do you use, and what's their join order?
I think we could get away with defining it like this (ie, SRFs at the same
SRF nesting level run in lockstep) as long as it's documented. Whatever
the current behavior is for such cases would be pretty bizarre too.
regards, tom lane
On 2016-09-15 16:48:59 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
Hm. One thing I wonder about with this approach, is how we're going to
handle something absurd like:
SELECT generate_series(1, generate_series(1, 2)), generate_series(1, generate_series(2,4));

The patch that I posted would run both the generate_series(1, 2) and
generate_series(2,4) calls in the same SRF node, forcing them to run in
lockstep, after which their results would be fed to the SRF node doing
the top-level SRFs. We could probably change it to run them in separate
nodes, but I don't see any principled way to decide which one goes first
(and in some variants of this example, it would matter).
I think that's fine. I personally still think we're *much* better off
getting rid of the non-lockstep variants. You're still on the fence
about retaining the LCM behaviour (for the same nesting level at least)?
I think the LATERAL approach would face exactly the same issues: how
many LATERAL nodes do you use, and what's their join order?
I think this specific issue could be handled in a somewhat easier-to-grasp
variant. My PoC basically generated one RTE for each "query
level". There'd have been one RTE for generate_series(1,2), one for
gs(2,4), and one for gs(1, var(gs(1,2))), gs(1, var(gs(2,4))). Lateral
machinery would force the join order to have the argument SRFs first, and
then the two combined SRFs with lateral arguments after that.
I think we could get away with defining it like this (ie, SRFs at the same
SRF nesting level run in lockstep) as long as it's documented. Whatever
the current behavior is for such cases would be pretty bizarre too.
Indeed.
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2016-09-15 16:48:59 -0400, Tom Lane wrote:
The patch that I posted would run both the generate_series(1, 2) and
generate_series(2,4) calls in the same SRF node, forcing them to run in
lockstep, after which their results would be fed to the SRF node doing
the top-level SRFs. We could probably change it to run them in separate
nodes, but I don't see any principled way to decide which one goes first
(and in some variants of this example, it would matter).
I think that's fine. I personally still think we're *much* better off
getting rid of the non-lockstep variants. You're still on the fence
about retaining the LCM behaviour (for the same nesting level at least)?
I'm happy to get rid of the LCM behavior, I just want to have some wiggle
room to be able to get it back if somebody really needs it.
regards, tom lane
On Wed, Aug 24, 2016 at 3:55 AM, Andres Freund <andres@anarazel.de> wrote:
Comments?
This thread has had no activity for some time now, and it is linked to this CF entry:
https://commitfest.postgresql.org/10/759/
I am marking it as returned with feedback.
--
Michael
On Fri, Sep 16, 2016 at 6:12 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I'm happy to get rid of the LCM behavior, I just want to have some wiggle
room to be able to get it back if somebody really needs it.
Er, actually no, that's this thread for this CF entry:
https://commitfest.postgresql.org/10/759/
Still there has not been much activity.
--
Michael
Hi Tom,
On 2016-09-14 19:28:25 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
Anyway I'll draft a prototype and then we can compare.
Ok, cool.
Here's a draft patch that is just meant to investigate what the planner
changes might look like if we do it in the pipelined-result way.
Accordingly, I didn't touch the executor, but just had it emit regular
Result nodes for SRF-execution steps. However, the SRFs are all
guaranteed to appear at top level of their respective tlists, so that
those Results could be replaced with something that works like
nodeFunctionscan.
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
Are you planning to work on the execution side of things? I otherwise
can take a stab...
Greetings,
Andres Freund
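As an editorial illustration of the splitting idea in Tom's draft - rewriting a targetlist so that every SRF sits at the top level of exactly one projection step - here is a rough Python sketch over a toy expression representation (hypothetical, not the patch's actual split_pathtarget_at_srfs logic):

```python
# Toy expressions: ("const", value) or ("call", name, is_srf, [args]).
# Each SRF is pushed into the projection level matching its nesting
# depth and replaced by a ("var", level, index) reference, so SRFs are
# always top-level within their own level's targetlist.

def srf_depth(expr):
    if expr[0] != "call":
        return 0
    _, _, is_srf, args = expr
    return max((srf_depth(a) for a in args), default=0) + (1 if is_srf else 0)

def split_at_srfs(expr, levels):
    if expr[0] != "call":
        return expr
    _, name, is_srf, args = expr
    depth = srf_depth(expr)  # compute before rewriting the arguments
    new_args = [split_at_srfs(a, levels) for a in args]
    node = ("call", name, is_srf, new_args)
    if not is_srf:
        return node
    while len(levels) < depth:
        levels.append([])
    levels[depth - 1].append(node)
    return ("var", depth - 1, len(levels[depth - 1]) - 1)

# generate_series(1, generate_series(1, 2)) splits into two levels:
gs = lambda *args: ("call", "generate_series", True, list(args))
levels = []
top = split_at_srfs(gs(("const", 1), gs(("const", 1), ("const", 2))), levels)
# levels[0] holds the inner SRF; levels[1] holds the outer SRF, whose
# second argument is now a ("var", 0, 0) reference to the level below.
```

The point of the sketch is only the invariant: after splitting, no level's targetlist contains an SRF anywhere except at its top level.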
Andres Freund <andres@anarazel.de> writes:
On 2016-09-14 19:28:25 -0400, Tom Lane wrote:
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
Are you planning to work on the execution side of things? I otherwise
can take a stab...
My plan is to start on this when I go back into commitfest mode,
but right now I'm trying to produce a draft patch for RLS changes.
regards, tom lane
On Mon, Aug 22, 2016 at 4:20 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-08-17 17:41:28 -0700, Andres Freund wrote:
Tom, do you think this is roughly going in the right direction?
I've not had time to look at this patch, I'm afraid. If you still
want me to, I can make time in a day or so.
Tom, it's been about 3.5 months since you wrote this. I think it
would be really valuable if you could get to this RSN because the
large patch set posted on the "Changed SRF in targetlist handling"
thread is backed up behind this -- and I think that's really valuable
work which I don't want to see slip out of this release. At the same
time, both that and this are quite invasive, and I don't want it all
to get committed the day before feature freeze, because that will mess
up the schedule.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
Tom, it's been about 3.5 months since you wrote this. I think it
would be really valuable if you could get to this RSN because the
large patch set posted on the "Changed SRF in targetlist handling"
thread is backed up behind this -- and I think that's really valuable
work which I don't want to see slip out of this release.
Yeah, I was busy with other stuff during the recent commitfest.
I'll try to get back to this. There's still only 24 hours in a day,
though. (And no, [1] is not enough to help.)
[1] https://www.theguardian.com/science/2016/dec/07/earths-day-lengthens-by-two-milliseconds-a-century-astronomers-find
regards, tom lane
On 2016-10-31 09:06:39 -0700, Andres Freund wrote:
On 2016-09-14 19:28:25 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
Here's a draft patch that is just meant to investigate what the planner
changes might look like if we do it in the pipelined-result way.
Accordingly, I didn't touch the executor, but just had it emit regular
Result nodes for SRF-execution steps. However, the SRFs are all
guaranteed to appear at top level of their respective tlists, so that
those Results could be replaced with something that works like
nodeFunctionscan.
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
Are you planning to work on the execution side of things? I otherwise
can take a stab...
Doing so now.
Andres
On 2017-01-15 19:29:52 -0800, Andres Freund wrote:
On 2016-10-31 09:06:39 -0700, Andres Freund wrote:
On 2016-09-14 19:28:25 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2016-09-12 19:35:22 -0400, Tom Lane wrote:
Here's a draft patch that is just meant to investigate what the planner
changes might look like if we do it in the pipelined-result way.
Accordingly, I didn't touch the executor, but just had it emit regular
Result nodes for SRF-execution steps. However, the SRFs are all
guaranteed to appear at top level of their respective tlists, so that
those Results could be replaced with something that works like
nodeFunctionscan.
So I think we should continue investigating this way of doing things.
I'll try to take a look at the executor end of it tomorrow. However
I'm leaving Friday for a week's vacation, and may not have anything to
show before that.
Are you planning to work on the execution side of things? I otherwise
can take a stab...
Doing so now.
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
- We could alternatively add all this into the Result node - it's not
actually a lot of new code, and most of that is boilerplate stuff
about adding a new node. I'm ok with both.
- I continued with the division of Labor that Tom had set up, so we're
creating one Srf node for each "nested" set of SRFs. We'd discussed
nearby to change that for one node/path for all nested SRFs, partially
because of costing. But I don't like the idea that much anymore. The
implementation seems cleaner (and probably faster) this way, and I
don't think nested targetlist SRFs are something worth optimizing
for. Anybody wants to argue differently?
- I chose to error out if a retset function appears in ExecEvalFunc/Oper
and make both unconditionally set evalfunc to
ExecMakeFunctionResultNoSets. ExecMakeFunctionResult() is now
externally visible. That seems like the least noisy way to change
things over, but I'm open for different proposals.
Comments?
Regards,
Andres
Andres Freund wrote:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
- We could alternatively add all this into the Result node - it's not
actually a lot of new code, and most of that is boilerplate stuff
about adding a new node. I'm ok with both.
Hmm. I wonder if your stuff could be used as support code for
XMLTABLE[1]. Currently it has a bit of additional code of its own,
though admittedly it's very little code executor-side. Would you mind
sharing a patch, or more details on how it works?
[1]: /messages/by-id/CAFj8pRA_KEukOBXvS4V-imoEEsXu0pD0AsHV0-MwRFDRWte8Lg@mail.gmail.com
- I continued with the division of Labor that Tom had set up, so we're
creating one Srf node for each "nested" set of SRFs. We'd discussed
nearby to change that for one node/path for all nested SRFs, partially
because of costing. But I don't like the idea that much anymore. The
implementation seems cleaner (and probably faster) this way, and I
don't think nested targetlist SRFs are something worth optimizing
for. Anybody wants to argue differently?
Nested targetlist SRFs make my head spin. I suppose they may have some
use, but where would you really want this:
alvherre=# select generate_series(1, generate_series(2, 4));
generate_series
─────────────────
1
2
1
2
3
1
2
3
4
(9 rows)
instead of the much more sensible
alvherre=# select i, j from generate_series(2, 4) i, generate_series(1, i) j;
i │ j
───┼───
2 │ 1
2 │ 2
3 │ 1
3 │ 2
3 │ 3
4 │ 1
4 │ 2
4 │ 3
4 │ 4
(9 rows)
? If supporting the former makes it harder to support/optimize more
reasonable cases, it seems fair game to leave them behind.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
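For reference, the equivalence Álvaro demonstrates above - the nested SRF expanding its inner function first, then running the outer one once per inner row, matching the two-function FROM formulation - can be mimicked in Python (generate_series here is a plain stand-in, not PostgreSQL code):

```python
def generate_series(a, b):
    return list(range(a, b + 1))

# select generate_series(1, generate_series(2, 4)):
# the inner SRF produces 2, 3, 4; the outer SRF runs once per inner value.
nested = [v for upper in generate_series(2, 4)
            for v in generate_series(1, upper)]

# select i, j from generate_series(2, 4) i, generate_series(1, i) j:
lateral = [(i, j) for i in generate_series(2, 4)
                  for j in generate_series(1, i)]

assert nested == [j for _, j in lateral]  # same 9 rows, same order
```

Both produce the nine rows shown in the thread; the FROM formulation simply keeps the driving value `i` visible alongside each result.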
On 2017-01-16 12:17:46 -0300, Alvaro Herrera wrote:
Andres Freund wrote:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
- We could alternatively add all this into the Result node - it's not
actually a lot of new code, and most of that is boilerplate stuff
about adding a new node. I'm ok with both.
Hmm. I wonder if your stuff could be used as support code for
XMLTABLE[1].
I don't immediately see what functionality overlaps, could you expand on
that?
Currently it has a bit of additional code of its own,
though admittedly it's very little code executor-side. Would you mind
sharing a patch, or more details on how it works?
Can do both; cleaning up the patch now. What we're talking about here is
a way to implement targetlist SRF that is based on:
1) a patch by Tom that creates additional Result (or now Srf) executor
nodes containing SRF evaluation. This guarantees that only Result/Srf
nodes have to deal with targetlist SRF evaluation.
2) new code to evaluate SRFs in the new Result/Srf node, that doesn't
rely on ExecEvalExpr et al. to have a IsDone argument. Instead
there's special code to handle that in the new node. That's possible
because it's now guaranteed that all SRFs are "toplevel" in the
relevant targetlist(s).
3) Removal of nearly all tSRF-related code in execQual.c and other
executor/ files, including the node->ps.ps_TupFromTlist checks
everywhere.
Makes sense?
- I continued with the division of Labor that Tom had set up, so we're
creating one Srf node for each "nested" set of SRFs. We'd discussed
nearby to change that for one node/path for all nested SRFs, partially
because of costing. But I don't like the idea that much anymore. The
implementation seems cleaner (and probably faster) this way, and I
don't think nested targetlist SRFs are something worth optimizing
for. Anybody wants to argue differently?
Nested targetlist SRFs make my head spin. I suppose they may have some
use, but where would you really want this:
I think there's some cases where it can be useful. Targetlist SRFs as a
whole really are much more about backward compatibility than anything :)
? If supporting the former makes it harder to support/optimize more
reasonable cases, it seems fair game to leave them behind.
I don't want to desupport them, just don't want to restructure (one node
doing several levels of SRFs, instead of one per level) just to make it
easier to give good estimates.
Greetings,
Andres Freund
Andres Freund wrote:
On 2017-01-16 12:17:46 -0300, Alvaro Herrera wrote:
Andres Freund wrote:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
- We could alternatively add all this into the Result node - it's not
actually a lot of new code, and most of that is boilerplate stuff
about adding a new node. I'm ok with both.
Hmm. I wonder if your stuff could be used as support code for
XMLTABLE[1].
I don't immediately see what functionality overlaps, could you expand on
that?
Well, I haven't read any previous patches in this area, but the xmltable
patch adds a new way of handling set-returning expressions, so it
appears vaguely related. These aren't properly functions in the current
sense of the word, though. There is some parallel to what
ExecMakeFunctionResult does, which I suppose is related.
Currently it has a bit of additional code of its own,
though admittedly it's very little code executor-side. Would you mind
sharing a patch, or more details on how it works?
Can do both; cleaning up the patch now. What we're talking about here is
a way to implement targetlist SRF that is based on:
1) a patch by Tom that creates additional Result (or now Srf) executor
nodes containing SRF evaluation. This guarantees that only Result/Srf
nodes have to deal with targetlist SRF evaluation.
2) new code to evaluate SRFs in the new Result/Srf node, that doesn't
rely on ExecEvalExpr et al. to have a IsDone argument. Instead
there's special code to handle that in the new node. That's possible
because it's now guaranteed that all SRFs are "toplevel" in the
relevant targetlist(s).
3) Removal of nearly all tSRF-related code in execQual.c and other
executor/ files, including the node->ps.ps_TupFromTlist checks
everywhere.
Makes sense?
Hmm, okay. (The ps_TupFromTlist thing has long seemed an ugly
construction.) I think the current term for this kind of thing is
TableFunction -- are you really naming this "Srf" literally? It seems
strange, but maybe it's just me.
Nested targetlist SRFs make my head spin. I suppose they may have some
use, but where would you really want this:
I think there's some cases where it can be useful. Targetlist SRFs as a
whole really are much more about backward compatibility than anything :)
Sure.
? If supporting the former makes it harder to support/optimize more
reasonable cases, it seems fair game to leave them behind.
I don't want to desupport them, just don't want to restructure (one node
doing several levels of SRFs, instead of one per level) just to make it
easier to give good estimates.
No objections.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Andres Freund <andres@anarazel.de> writes:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
- I continued with the division of Labor that Tom had set up, so we're
creating one Srf node for each "nested" set of SRFs. We'd discussed
nearby to change that for one node/path for all nested SRFs, partially
because of costing. But I don't like the idea that much anymore. The
implementation seems cleaner (and probably faster) this way, and I
don't think nested targetlist SRFs are something worth optimizing
for. Anybody wants to argue differently?
Not me.
Comments?
Hard to comment on your other points without a patch to look at.
regards, tom lane
Hi,
On 2017-01-16 14:13:18 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
Named it SetResult - imo looks ok. I think I do prefer the separate
node type over re-using Result. The planner integration looks cleaner
to me due to not needing the srfpp special cases and such.
Comments?
Hard to comment on your other points without a patch to look at.
Attached the current version. There's a *lot* of pending cleanup needed
(especially in execQual.c) removing outdated code/comments etc, but this
seems good enough for a first review. I'd want that cleanup done in a
separate patch anyway.
Attached are two patches. The first is just a rebased version (just some
hunk offset changed) of your planner patch, on top of that is my
executor patch. My patch moves some minor detail in yours around, and I
do think they should eventually be merged; but leaving it split for a
round displays the changes more cleanly.
Additional questions:
- do we care about SRFs that don't actually return a set? If so we need
to change the error checking code in ExecEvalFunc/Oper and move it to
the actual invocation.
- the FuncExpr/OpExpr check in ExecMakeFunctionResult is fairly ugly imo
- but I don't quite see a much better solution.
Greetings,
Andres
Attachments:
0001-Put-SRF-into-a-separate-node-v1.patch (text/x-patch; charset=us-ascii)
From 2c16e67f46f418239ab90a51611f168508bac66e Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 15 Jan 2017 19:23:22 -0800
Subject: [PATCH 1/2] Put SRF into a separate node v1.
Author: Tom Lane
Discussion: https://postgr.es/m/557.1473895705@sss.pgh.pa.us
---
src/backend/nodes/outfuncs.c | 1 +
src/backend/optimizer/plan/createplan.c | 33 ++++-
src/backend/optimizer/plan/planner.c | 219 +++++++++++++++++++++++++------
src/backend/optimizer/util/clauses.c | 104 ++-------------
src/backend/optimizer/util/pathnode.c | 75 +++++++++++
src/backend/optimizer/util/tlist.c | 199 ++++++++++++++++++++++++++++
src/include/nodes/relation.h | 1 +
src/include/optimizer/clauses.h | 1 -
src/include/optimizer/pathnode.h | 4 +
src/include/optimizer/tlist.h | 3 +
src/test/regress/expected/aggregates.out | 3 +-
src/test/regress/expected/limit.out | 10 +-
src/test/regress/expected/rangefuncs.out | 10 +-
src/test/regress/expected/subselect.out | 26 ++--
src/test/regress/expected/tsrf.out | 11 +-
15 files changed, 544 insertions(+), 156 deletions(-)
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index cf0a6059e9..73fdc9706d 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1805,6 +1805,7 @@ _outProjectionPath(StringInfo str, const ProjectionPath *node)
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
+ WRITE_BOOL_FIELD(srfpp);
}
static void
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index c7bcd9b84c..875de739a8 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1421,8 +1421,21 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
Plan *subplan;
List *tlist;
- /* Since we intend to project, we don't need to constrain child tlist */
- subplan = create_plan_recurse(root, best_path->subpath, 0);
+ /*
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a Result with no input. This avoids an
+ * extra Result plan node when doing "SELECT srf()". Depending on what we
+ * decide about the desired plan structure for SRF-expanding nodes, this
+ * optimization might have to go away, and in any case it'll probably look
+ * a good bit different.
+ */
+ if (IsA(best_path->subpath, ResultPath) &&
+ ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
+ ((ResultPath *) best_path->subpath)->quals == NIL)
+ subplan = NULL;
+ else
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
@@ -1441,8 +1454,9 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
- if (is_projection_capable_path(best_path->subpath) ||
- tlist_same_exprs(tlist, subplan->targetlist))
+ if (!best_path->srfpp &&
+ (is_projection_capable_path(best_path->subpath) ||
+ tlist_same_exprs(tlist, subplan->targetlist)))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
@@ -6192,6 +6206,17 @@ is_projection_capable_path(Path *path)
* projection to its dummy path.
*/
return IS_DUMMY_PATH(path);
+ case T_Result:
+
+ /*
+ * If the path is doing SRF evaluation, claim it can't project, so
+ * we don't jam a new tlist into it and thereby break the property
+ * that the SRFs appear at top level.
+ */
+ if (IsA(path, ProjectionPath) &&
+ ((ProjectionPath *) path)->srfpp)
+ return false;
+ break;
default:
break;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f936710171..70870bbbe0 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -153,6 +153,8 @@ static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
static PathTarget *make_sort_input_target(PlannerInfo *root,
PathTarget *final_target,
bool *have_postponed_srfs);
+static void adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs);
/*****************************************************************************
@@ -1434,8 +1436,9 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
int64 count_est = 0;
double limit_tuples = -1.0;
bool have_postponed_srfs = false;
- double tlist_rows;
PathTarget *final_target;
+ List *final_targets;
+ List *final_targets_contain_srfs;
RelOptInfo *current_rel;
RelOptInfo *final_rel;
ListCell *lc;
@@ -1498,6 +1501,10 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
/* Also extract the PathTarget form of the setop result tlist */
final_target = current_rel->cheapest_total_path->pathtarget;
+ /* The setop result tlist couldn't contain any SRFs */
+ Assert(!parse->hasTargetSRFs);
+ final_targets = final_targets_contain_srfs = NIL;
+
/*
* Can't handle FOR [KEY] UPDATE/SHARE here (parser should have
* checked already, but let's make sure).
@@ -1523,8 +1530,14 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
{
/* No set operations, do regular planning */
PathTarget *sort_input_target;
+ List *sort_input_targets;
+ List *sort_input_targets_contain_srfs;
PathTarget *grouping_target;
+ List *grouping_targets;
+ List *grouping_targets_contain_srfs;
PathTarget *scanjoin_target;
+ List *scanjoin_targets;
+ List *scanjoin_targets_contain_srfs;
bool have_grouping;
AggClauseCosts agg_costs;
WindowFuncLists *wflists = NULL;
@@ -1775,8 +1788,50 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
scanjoin_target = grouping_target;
/*
- * Forcibly apply scan/join target to all the Paths for the scan/join
- * rel.
+ * If there are any SRFs in the targetlist, we must separate each of
+ * these PathTargets into SRF-computing and SRF-free targets. Replace
+ * each of the named targets with a SRF-free version, and remember the
+ * list of additional projection steps we need to add afterwards.
+ */
+ if (parse->hasTargetSRFs)
+ {
+ /* final_target doesn't recompute any SRFs in sort_input_target */
+ split_pathtarget_at_srfs(root, final_target, sort_input_target,
+ &final_targets,
+ &final_targets_contain_srfs);
+ final_target = (PathTarget *) linitial(final_targets);
+ Assert(!linitial_int(final_targets_contain_srfs));
+ /* likewise for sort_input_target vs. grouping_target */
+ split_pathtarget_at_srfs(root, sort_input_target, grouping_target,
+ &sort_input_targets,
+ &sort_input_targets_contain_srfs);
+ sort_input_target = (PathTarget *) linitial(sort_input_targets);
+ Assert(!linitial_int(sort_input_targets_contain_srfs));
+ /* likewise for grouping_target vs. scanjoin_target */
+ split_pathtarget_at_srfs(root, grouping_target, scanjoin_target,
+ &grouping_targets,
+ &grouping_targets_contain_srfs);
+ grouping_target = (PathTarget *) linitial(grouping_targets);
+ Assert(!linitial_int(grouping_targets_contain_srfs));
+ /* scanjoin_target will not have any SRFs precomputed for it */
+ split_pathtarget_at_srfs(root, scanjoin_target, NULL,
+ &scanjoin_targets,
+ &scanjoin_targets_contain_srfs);
+ scanjoin_target = (PathTarget *) linitial(scanjoin_targets);
+ Assert(!linitial_int(scanjoin_targets_contain_srfs));
+ }
+ else
+ {
+ /* initialize lists, just to keep compiler quiet */
+ final_targets = final_targets_contain_srfs = NIL;
+ sort_input_targets = sort_input_targets_contain_srfs = NIL;
+ grouping_targets = grouping_targets_contain_srfs = NIL;
+ scanjoin_targets = scanjoin_targets_contain_srfs = NIL;
+ }
+
+ /*
+ * Forcibly apply SRF-free scan/join target to all the Paths for the
+ * scan/join rel.
*
* In principle we should re-run set_cheapest() here to identify the
* cheapest path, but it seems unlikely that adding the same tlist
@@ -1847,6 +1902,12 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
current_rel->partial_pathlist = NIL;
}
+ /* Now fix things up if scan/join target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ scanjoin_targets,
+ scanjoin_targets_contain_srfs);
+
/*
* Save the various upper-rel PathTargets we just computed into
* root->upper_targets[]. The core code doesn't use this, but it
@@ -1871,6 +1932,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
&agg_costs,
rollup_lists,
rollup_groupclauses);
+ /* Fix things up if grouping_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ grouping_targets,
+ grouping_targets_contain_srfs);
}
/*
@@ -1886,6 +1952,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
tlist,
wflists,
activeWindows);
+ /* Fix things up if sort_input_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ sort_input_targets,
+ sort_input_targets_contain_srfs);
}
/*
@@ -1914,40 +1985,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
final_target,
have_postponed_srfs ? -1.0 :
limit_tuples);
- }
-
- /*
- * If there are set-returning functions in the tlist, scale up the output
- * rowcounts of all surviving Paths to account for that. Note that if any
- * SRFs appear in sorting or grouping columns, we'll have underestimated
- * the numbers of rows passing through earlier steps; but that's such a
- * weird usage that it doesn't seem worth greatly complicating matters to
- * account for it.
- */
- if (parse->hasTargetSRFs)
- tlist_rows = tlist_returns_set_rows(tlist);
- else
- tlist_rows = 1;
-
- if (tlist_rows > 1)
- {
- foreach(lc, current_rel->pathlist)
- {
- Path *path = (Path *) lfirst(lc);
-
- /*
- * We assume that execution costs of the tlist as such were
- * already accounted for. However, it still seems appropriate to
- * charge something more for the executor's general costs of
- * processing the added tuples. The cost is probably less than
- * cpu_tuple_cost, though, so we arbitrarily use half of that.
- */
- path->total_cost += path->rows * (tlist_rows - 1) *
- cpu_tuple_cost / 2;
-
- path->rows *= tlist_rows;
- }
- /* No need to run set_cheapest; we're keeping all paths anyway. */
+ /* Fix things up if final_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ final_targets,
+ final_targets_contain_srfs);
}
/*
@@ -5151,6 +5193,109 @@ get_cheapest_fractional_path(RelOptInfo *rel, double tuple_fraction)
}
/*
+ * adjust_paths_for_srfs
+ * Fix up the Paths of the given upperrel to handle tSRFs properly.
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * modifies each Path of an upperrel that (might) compute any SRFs in its
+ * output tlist to insert appropriate projection steps.
+ *
+ * The given targets and targets_contain_srfs lists are from
+ * split_pathtarget_at_srfs(). We assume the existing Paths emit the first
+ * target in targets.
+ */
+static void
+adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs)
+{
+ ListCell *lc;
+
+ Assert(list_length(targets) == list_length(targets_contain_srfs));
+ Assert(!linitial_int(targets_contain_srfs));
+
+ /* If no SRFs appear at this plan level, nothing to do */
+ if (list_length(targets) == 1)
+ return;
+
+ /*
+ * Stack SRF-evaluation nodes atop each path for the rel.
+ *
+ * In principle we should re-run set_cheapest() here to identify the
+ * cheapest path, but it seems unlikely that adding the same tlist eval
+ * costs to all the paths would change that, so we don't bother. Instead,
+ * just assume that the cheapest-startup and cheapest-total paths remain
+ * so. (There should be no parameterized paths anymore, so we needn't
+ * worry about updating cheapest_parameterized_paths.)
+ */
+ foreach(lc, rel->pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ newpath = (Path *) apply_projection_to_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ lfirst(lc) = newpath;
+ if (subpath == rel->cheapest_startup_path)
+ rel->cheapest_startup_path = newpath;
+ if (subpath == rel->cheapest_total_path)
+ rel->cheapest_total_path = newpath;
+ }
+
+ /* Likewise for partial paths, if any */
+ foreach(lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ {
+ /* avoid apply_projection_to_path, in case of multiple refs */
+ newpath = (Path *) create_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ }
+ lfirst(lc) = newpath;
+ }
+}
+
+/*
* expression_planner
* Perform planner's transformations on a standalone expression.
*
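For readers skimming the planner change above: the shape of adjust_paths_for_srfs is easy to model outside the planner. It walks the two parallel lists from split_pathtarget_at_srfs and wraps each surviving path in one projection node per level, using an SRF-capable projection for levels whose flag is set. A Python sketch (names and data structures invented, not PostgreSQL code):

```python
def adjust_paths_for_srfs(paths, targets, targets_contain_srfs):
    """Stack one projection node per target level atop each path; levels
    flagged True become SRF-capable projections (Result plan nodes)."""
    assert len(targets) == len(targets_contain_srfs)
    assert not targets_contain_srfs[0]   # lowest level must be SRF-free
    if len(targets) == 1:                # no SRFs at this plan level
        return paths
    new_paths = []
    for path in paths:
        for target, has_srfs in zip(targets, targets_contain_srfs):
            kind = "SRFProjection" if has_srfs else "Projection"
            path = (kind, tuple(target), path)   # stack a node atop the path
        new_paths.append(path)
    return new_paths

stacked = adjust_paths_for_srfs(
    ["seqscan"],
    [["x", "y"], ["x", "srf(y)"]],
    [False, True])
```

The single input path ends up wrapped in a plain projection computing the SRF-free target, with an SRF projection on top, mirroring the forboth loop in the patch.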
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 59ccdf43d4..a763c7fe24 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -99,7 +99,6 @@ static bool contain_agg_clause_walker(Node *node, void *context);
static bool get_agg_clause_costs_walker(Node *node,
get_agg_clause_costs_context *context);
static bool find_window_functions_walker(Node *node, WindowFuncLists *lists);
-static bool expression_returns_set_rows_walker(Node *node, double *count);
static bool contain_subplans_walker(Node *node, void *context);
static bool contain_mutable_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_walker(Node *node, void *context);
@@ -790,114 +789,37 @@ find_window_functions_walker(Node *node, WindowFuncLists *lists)
/*
* expression_returns_set_rows
* Estimate the number of rows returned by a set-returning expression.
- * The result is 1 if there are no set-returning functions.
+ * The result is 1 if it's not a set-returning expression.
*
- * We use the product of the rowcount estimates of all the functions in
- * the given tree (this corresponds to the behavior of ExecMakeFunctionResult
- * for nested set-returning functions).
+ * We should only examine the top-level function or operator; it used to be
+ * appropriate to recurse, but not anymore. (Even if there are more SRFs in
+ * the function's inputs, their multipliers are accounted for separately.)
*
* Note: keep this in sync with expression_returns_set() in nodes/nodeFuncs.c.
*/
double
expression_returns_set_rows(Node *clause)
{
- double result = 1;
-
- (void) expression_returns_set_rows_walker(clause, &result);
- return clamp_row_est(result);
-}
-
-static bool
-expression_returns_set_rows_walker(Node *node, double *count)
-{
- if (node == NULL)
- return false;
- if (IsA(node, FuncExpr))
+ if (clause == NULL)
+ return 1.0;
+ if (IsA(clause, FuncExpr))
{
- FuncExpr *expr = (FuncExpr *) node;
+ FuncExpr *expr = (FuncExpr *) clause;
if (expr->funcretset)
- *count *= get_func_rows(expr->funcid);
+ return clamp_row_est(get_func_rows(expr->funcid));
}
- if (IsA(node, OpExpr))
+ if (IsA(clause, OpExpr))
{
- OpExpr *expr = (OpExpr *) node;
+ OpExpr *expr = (OpExpr *) clause;
if (expr->opretset)
{
set_opfuncid(expr);
- *count *= get_func_rows(expr->opfuncid);
+ return clamp_row_est(get_func_rows(expr->opfuncid));
}
}
-
- /* Avoid recursion for some cases that can't return a set */
- if (IsA(node, Aggref))
- return false;
- if (IsA(node, WindowFunc))
- return false;
- if (IsA(node, DistinctExpr))
- return false;
- if (IsA(node, NullIfExpr))
- return false;
- if (IsA(node, ScalarArrayOpExpr))
- return false;
- if (IsA(node, BoolExpr))
- return false;
- if (IsA(node, SubLink))
- return false;
- if (IsA(node, SubPlan))
- return false;
- if (IsA(node, AlternativeSubPlan))
- return false;
- if (IsA(node, ArrayExpr))
- return false;
- if (IsA(node, RowExpr))
- return false;
- if (IsA(node, RowCompareExpr))
- return false;
- if (IsA(node, CoalesceExpr))
- return false;
- if (IsA(node, MinMaxExpr))
- return false;
- if (IsA(node, XmlExpr))
- return false;
-
- return expression_tree_walker(node, expression_returns_set_rows_walker,
- (void *) count);
-}
-
-/*
- * tlist_returns_set_rows
- * Estimate the number of rows returned by a set-returning targetlist.
- * The result is 1 if there are no set-returning functions.
- *
- * Here, the result is the largest rowcount estimate of any of the tlist's
- * expressions, not the product as you would get from naively applying
- * expression_returns_set_rows() to the whole tlist. The behavior actually
- * implemented by ExecTargetList produces a number of rows equal to the least
- * common multiple of the expression rowcounts, so that the product would be
- * a worst-case estimate that is typically not realistic. Taking the max as
- * we do here is a best-case estimate that might not be realistic either,
- * but it's probably closer for typical usages. We don't try to compute the
- * actual LCM because we're working with very approximate estimates, so their
- * LCM would be unduly noisy.
- */
-double
-tlist_returns_set_rows(List *tlist)
-{
- double result = 1;
- ListCell *lc;
-
- foreach(lc, tlist)
- {
- TargetEntry *tle = (TargetEntry *) lfirst(lc);
- double colresult;
-
- colresult = expression_returns_set_rows((Node *) tle->expr);
- if (result < colresult)
- result = colresult;
- }
- return result;
+ return 1.0;
}
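The change to expression_returns_set_rows is easier to see side by side: the old walker multiplied together the row estimates of every SRF anywhere in the expression tree, while the new code consults only the top-level function or operator, since nested SRFs now run in lower plan levels whose multipliers get charged separately. A rough Python model (the dict-based node layout is invented for illustration):

```python
import math

def old_estimate(node):
    # product of every SRF's estimate anywhere in the tree
    rows = node["rows"] if node.get("retset") else 1
    return rows * math.prod(old_estimate(arg) for arg in node.get("args", []))

def new_estimate(node):
    # only the top-level node matters now
    return node["rows"] if node.get("retset") else 1

# an SRF nested inside another SRF, each estimated at 1000 rows
inner = {"retset": True, "rows": 1000}
outer = {"retset": True, "rows": 1000, "args": [inner]}
```

Under the old scheme the nested call would be charged 1000 * 1000 rows; under the new scheme only the outer call's 1000 rows count here.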
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3b7c56d3c7..aa635fd057 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -2227,6 +2227,9 @@ create_projection_path(PlannerInfo *root,
(cpu_tuple_cost + target->cost.per_tuple) * subpath->rows;
}
+ /* Assume no SRFs around */
+ pathnode->srfpp = false;
+
return pathnode;
}
@@ -2320,6 +2323,78 @@ apply_projection_to_path(PlannerInfo *root,
}
/*
+ * create_srf_projection_path
+ * Creates a pathnode that represents performing a SRF projection.
+ *
+ * For the moment, we just use ProjectionPath for this, and generate a
+ * Result plan node. That's likely to change.
+ *
+ * 'rel' is the parent relation associated with the result
+ * 'subpath' is the path representing the source of data
+ * 'target' is the PathTarget to be computed
+ */
+ProjectionPath *
+create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target)
+{
+ ProjectionPath *pathnode = makeNode(ProjectionPath);
+ double tlist_rows;
+ ListCell *lc;
+
+ pathnode->path.pathtype = T_Result;
+ pathnode->path.parent = rel;
+ pathnode->path.pathtarget = target;
+ /* For now, assume we are above any joins, so no parameterization */
+ pathnode->path.param_info = NULL;
+ pathnode->path.parallel_aware = false;
+ pathnode->path.parallel_safe = rel->consider_parallel &&
+ subpath->parallel_safe &&
+ is_parallel_safe(root, (Node *) target->exprs);
+ pathnode->path.parallel_workers = subpath->parallel_workers;
+ /* Projection does not change the sort order */
+ pathnode->path.pathkeys = subpath->pathkeys;
+
+ pathnode->subpath = subpath;
+
+ /* Always need the Result node */
+ pathnode->dummypp = false;
+ pathnode->srfpp = true;
+
+ /*
+ * Estimate number of rows produced by SRFs for each row of input; if
+ * there's more than one in this node, use the maximum.
+ */
+ tlist_rows = 1;
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+ double itemrows;
+
+ itemrows = expression_returns_set_rows(node);
+ if (tlist_rows < itemrows)
+ tlist_rows = itemrows;
+ }
+
+ /*
+ * In addition to the cost of evaluating the tlist, charge cpu_tuple_cost
+ * per input row, and half of cpu_tuple_cost for each added output row.
+ * This is slightly bizarre maybe, but it's what 9.6 did; we may revisit
+ * this estimate later.
+ */
+ pathnode->path.rows = subpath->rows * tlist_rows;
+ pathnode->path.startup_cost = subpath->startup_cost +
+ target->cost.startup;
+ pathnode->path.total_cost = subpath->total_cost +
+ target->cost.startup +
+ (cpu_tuple_cost + target->cost.per_tuple) * subpath->rows +
+ (pathnode->path.rows - subpath->rows) * cpu_tuple_cost / 2;
+
+ return pathnode;
+}
+
+/*
* create_sort_path
* Creates a pathnode that represents performing an explicit sort.
*
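The costing in create_srf_projection_path is simple enough to check by hand: output rows are the input rows times the largest per-row SRF estimate in the tlist (the maximum, not the product), and each added output row is charged half a cpu_tuple_cost, as the comment notes 9.6 did. A quick sketch with made-up numbers (not PostgreSQL code; default costs mirror the usual GUC defaults):

```python
def srf_projection_cost(subpath_rows, subpath_total_cost, per_expr_srf_rows,
                        cpu_tuple_cost=0.01,
                        target_startup=0.0, target_per_tuple=0.0):
    # maximum over the tlist expressions, 1 if none of them is a SRF
    tlist_rows = max([1.0] + per_expr_srf_rows)
    rows = subpath_rows * tlist_rows
    total = (subpath_total_cost + target_startup +
             (cpu_tuple_cost + target_per_tuple) * subpath_rows +
             (rows - subpath_rows) * cpu_tuple_cost / 2)
    return rows, total

# 100 input rows, two SRFs estimated at 1000 and 10 rows per input row
rows, total = srf_projection_cost(100, 10.0, [1000.0, 10.0])
```

With these numbers the node emits 100 * 1000 = 100000 rows, and the 99900 added rows contribute 499.5 to the total cost on top of the per-input-row charge.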
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index 45205a830f..4e92ebdf41 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -16,9 +16,20 @@
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
+#include "optimizer/cost.h"
#include "optimizer/tlist.h"
+typedef struct
+{
+ List *nextlevel_tlist;
+ bool nextlevel_contains_srfs;
+} split_pathtarget_context;
+
+static bool split_pathtarget_walker(Node *node,
+ split_pathtarget_context *context);
+
+
/*****************************************************************************
* Target list creation and searching utilities
*****************************************************************************/
@@ -759,3 +770,191 @@ apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target)
i++;
}
}
+
+/*
+ * split_pathtarget_at_srfs
+ * Split given PathTarget into multiple levels to position SRFs safely
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * creates appropriate PathTarget(s) for each level.
+ *
+ * As an example, consider the tlist expression
+ * x + srf1(srf2(y + z))
+ * This expression should appear as-is in the top PathTarget, but below that
+ * we must have a PathTarget containing
+ * x, srf1(srf2(y + z))
+ * and below that, another PathTarget containing
+ * x, srf2(y + z)
+ * and below that, another PathTarget containing
+ * x, y, z
+ * When these tlists are processed by setrefs.c, subexpressions that match
+ * output expressions of the next lower tlist will be replaced by Vars,
+ * so that what the executor gets are tlists looking like
+ * Var1 + Var2
+ * Var1, srf1(Var2)
+ * Var1, srf2(Var2 + Var3)
+ * x, y, z
+ * which satisfy the desired property.
+ *
+ * In some cases, a SRF has already been evaluated in some previous plan level
+ * and we shouldn't expand it again (that is, what we see in the target is
+ * already meant as a reference to a lower subexpression). So, don't expand
+ * any tlist expressions that appear in input_target, if that's not NULL.
+ * In principle we might need to consider matching subexpressions to
+ * input_target, but for now it's not necessary because only ORDER BY and
+ * GROUP BY expressions are at issue and those will look the same at both
+ * plan levels.
+ *
+ * The outputs of this function are two parallel lists, one a list of
+ * PathTargets and the other an integer list of bool flags indicating
+ * whether the corresponding PathTarget contains any top-level SRFs.
+ * The lists are given in the order they'd need to be evaluated in, with
+ * the "lowest" PathTarget first. So the last list entry is always the
+ * originally given PathTarget, and any entries before it indicate evaluation
+ * levels that must be inserted below it. The first list entry must not
+ * contain any SRFs, since it will typically be attached to a plan node
+ * that cannot evaluate SRFs.
+ *
+ * Note: using a list for the flags may seem like overkill, since there
+ * are only a few possible patterns for which levels contain SRFs.
+ * But this representation decouples callers from that knowledge.
+ */
+void
+split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs)
+{
+ /* Initialize output lists to empty; we prepend to them within loop */
+ *targets = *targets_contain_srfs = NIL;
+
+ /* Loop to consider each level of PathTarget we need */
+ for (;;)
+ {
+ bool target_contains_srfs = false;
+ split_pathtarget_context context;
+ ListCell *lc;
+
+ context.nextlevel_tlist = NIL;
+ context.nextlevel_contains_srfs = false;
+
+ /*
+ * Scan the PathTarget looking for SRFs. Top-level SRFs are handled
+ * in this loop, ones lower down are found by split_pathtarget_walker.
+ */
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+
+ /*
+ * A tlist item that is just a reference to an expression already
+ * computed in input_target need not be evaluated here, so just
+ * make sure it's included in the next PathTarget.
+ */
+ if (input_target && list_member(input_target->exprs, node))
+ {
+ context.nextlevel_tlist = lappend(context.nextlevel_tlist, node);
+ continue;
+ }
+
+ /* Else, we need to compute this expression. */
+ if (IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset)
+ {
+ /* Top-level SRF: it can be evaluated here */
+ target_contains_srfs = true;
+ /* Recursively examine SRF's inputs */
+ split_pathtarget_walker((Node *) ((FuncExpr *) node)->args,
+ &context);
+ }
+ else if (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset)
+ {
+ /* Same as above, but for set-returning operator */
+ target_contains_srfs = true;
+ split_pathtarget_walker((Node *) ((OpExpr *) node)->args,
+ &context);
+ }
+ else
+ {
+ /* Not a top-level SRF, so recursively examine expression */
+ split_pathtarget_walker(node, &context);
+ }
+ }
+
+ /*
+ * Prepend current target and associated flag to output lists.
+ */
+ *targets = lcons(target, *targets);
+ *targets_contain_srfs = lcons_int(target_contains_srfs,
+ *targets_contain_srfs);
+
+ /*
+ * Done if we found no SRFs anywhere in this target; the tentative
+ * tlist we built for the next level can be discarded.
+ */
+ if (!target_contains_srfs && !context.nextlevel_contains_srfs)
+ break;
+
+ /*
+ * Else build the next PathTarget down, and loop back to process it.
+ * Copy the subexpressions to make sure PathTargets don't share
+ * substructure (might be unnecessary, but be safe); and drop any
+ * duplicate entries in the sub-targetlist.
+ */
+ target = create_empty_pathtarget();
+ add_new_columns_to_pathtarget(target,
+ (List *) copyObject(context.nextlevel_tlist));
+ set_pathtarget_cost_width(root, target);
+ }
+}
+
+/* Recursively examine expressions for split_pathtarget_at_srfs */
+static bool
+split_pathtarget_walker(Node *node, split_pathtarget_context *context)
+{
+ if (node == NULL)
+ return false;
+ if (IsA(node, Var) ||
+ IsA(node, PlaceHolderVar) ||
+ IsA(node, Aggref) ||
+ IsA(node, GroupingFunc) ||
+ IsA(node, WindowFunc))
+ {
+ /*
+ * Pass these items down to the child plan level for evaluation.
+ *
+ * We assume that these constructs cannot contain any SRFs (if one
+ * does, there will be an executor failure from a misplaced SRF).
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+
+ /* Having done that, we need not examine their sub-structure */
+ return false;
+ }
+ else if ((IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset) ||
+ (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset))
+ {
+ /*
+ * Pass SRFs down to the child plan level for evaluation, and mark
+ * that it contains SRFs. (We are not at top level of our own tlist,
+ * else this would have been picked up by split_pathtarget_at_srfs.)
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+ context->nextlevel_contains_srfs = true;
+
+ /* Inputs to the SRF need not be considered here, so we're done */
+ return false;
+ }
+
+ /*
+ * Otherwise, the node is evaluatable within the current PathTarget, so
+ * recurse to examine its inputs.
+ */
+ return expression_tree_walker(node, split_pathtarget_walker,
+ (void *) context);
+}
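The level-splitting described in the split_pathtarget_at_srfs header comment can be mimicked in a few lines. The toy model below represents expressions as nested tuples, with ("srf", arg) marking a set-returning call and plain strings standing in for Vars; it is a sketch of the algorithm, not the PostgreSQL implementation (no de-duplication against input_target, no costing):

```python
def is_srf(e):
    return isinstance(e, tuple) and e[0] == "srf"

def split_at_srfs(target):
    """Return (targets, flags), lowest level first, like the real function."""
    targets, flags = [], []
    while True:
        contains_srfs = False
        nextlevel, nextlevel_has_srfs = [], False

        def walk(e):
            nonlocal nextlevel_has_srfs
            if isinstance(e, str) or is_srf(e):
                if e not in nextlevel:
                    nextlevel.append(e)      # evaluate in the child level
                if is_srf(e):
                    nextlevel_has_srfs = True
            else:
                for arg in e[1:]:
                    walk(arg)

        for expr in target:
            if is_srf(expr):
                contains_srfs = True         # top-level SRF: evaluate here
                for arg in expr[1:]:
                    walk(arg)                # but push its inputs down
            else:
                walk(expr)

        targets.insert(0, target)
        flags.insert(0, contains_srfs)
        if not contains_srfs and not nextlevel_has_srfs:
            break
        target = nextlevel
    return targets, flags

# the comment's example: x + srf1(srf2(y + z))
expr = ("plus", "x", ("srf", ("srf", ("plus", "y", "z"))))
targets, flags = split_at_srfs([expr])
```

Running this reproduces the four levels from the comment: an SRF-free bottom target of x, y, z, two SRF-evaluating levels, and the original expression on top.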
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index e1d31c795a..de4092d679 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1293,6 +1293,7 @@ typedef struct ProjectionPath
Path path;
Path *subpath; /* path representing input source */
bool dummypp; /* true if no separate Result is needed */
+ bool srfpp; /* true if SRFs are being evaluated here */
} ProjectionPath;
/*
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 6173ef8d75..cc0d7b0a26 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -54,7 +54,6 @@ extern bool contain_window_function(Node *clause);
extern WindowFuncLists *find_window_functions(Node *clause, Index maxWinRef);
extern double expression_returns_set_rows(Node *clause);
-extern double tlist_returns_set_rows(List *tlist);
extern bool contain_subplans(Node *clause);
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index d16f879fc1..c11c59df23 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -144,6 +144,10 @@ extern Path *apply_projection_to_path(PlannerInfo *root,
RelOptInfo *rel,
Path *path,
PathTarget *target);
+extern ProjectionPath *create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target);
extern SortPath *create_sort_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index f80b31a673..976024a164 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -61,6 +61,9 @@ extern void add_column_to_pathtarget(PathTarget *target,
extern void add_new_column_to_pathtarget(PathTarget *target, Expr *expr);
extern void add_new_columns_to_pathtarget(PathTarget *target, List *exprs);
extern void apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target);
+extern void split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs);
/* Convenience macro to get a PathTarget with valid cost/width fields */
#define create_pathtarget(root, tlist) \
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index fa1f5e7879..b71d81ee21 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -823,7 +823,8 @@ explain (costs off)
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
-> Result
-(7 rows)
+ -> Result
+(8 rows)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
max | g
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 9c3eecfc3b..a7ded3ad05 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -208,13 +208,15 @@ select currval('testseq');
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
- -> Index Scan using tenk1_unique2 on public.tenk1
+ -> Result
Output: unique1, unique2, generate_series(1, 10)
-(4 rows)
+ -> Index Scan using tenk1_unique2 on public.tenk1
+ Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
+(6 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4b21..9634fa16d2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -1995,12 +1995,10 @@ SELECT *,
END)
FROM
(VALUES (1,''), (2,'0000000049404'), (3,'FROM 10000000876')) v(id, str);
- id | str | lower
-----+------------------+------------------
- 1 | |
- 2 | 0000000049404 | 49404
- 3 | FROM 10000000876 | from 10000000876
-(3 rows)
+ id | str | lower
+----+---------------+-------
+ 2 | 0000000049404 | 49404
+(1 row)
-- check whole-row-Var handling in nested lateral functions (bug #11703)
create function extractq2(t int8_tbl) returns int8 as $$
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index eda319d24b..3ed089aa46 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -807,24 +807,28 @@ select * from int4_tbl where
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
- QUERY PLAN
-----------------------------------------------------------------
- Hash Semi Join
+ QUERY PLAN
+-------------------------------------------------------------------
+ Nested Loop Semi Join
Output: o.f1
- Hash Cond: (o.f1 = "ANY_subquery".f1)
+ Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
- -> Hash
+ -> Materialize
Output: "ANY_subquery".f1, "ANY_subquery".g
-> Subquery Scan on "ANY_subquery"
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
- -> HashAggregate
- Output: i.f1, (generate_series(1, 2) / 10)
- Group Key: i.f1
- -> Seq Scan on public.int4_tbl i
- Output: i.f1
-(15 rows)
+ -> Result
+ Output: i.f1, ((generate_series(1, 2)) / 10)
+ -> Result
+ Output: i.f1, generate_series(1, 2)
+ -> HashAggregate
+ Output: i.f1
+ Group Key: i.f1
+ -> Seq Scan on public.int4_tbl i
+ Output: i.f1
+(19 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index 7bb6d17fcb..f257537925 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -43,7 +43,16 @@ SELECT generate_series(1, generate_series(1, 3));
-- srf, with two SRF arguments
SELECT generate_series(generate_series(1,3), generate_series(2, 4));
-ERROR: functions and operators can take at most one set argument
+ generate_series
+-----------------
+ 1
+ 2
+ 2
+ 3
+ 3
+ 4
+(6 rows)
+
CREATE TABLE few(id int, dataa text, datab text);
INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
-- SRF output order of sorting is maintained, if SRF is not referenced
--
2.11.0.22.g8d7a455.dirty
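One user-visible consequence of the patch shows up in the tsrf.out hunk above: generate_series(generate_series(1,3), generate_series(2,4)) is no longer an error. The two argument SRFs are evaluated in a lower plan level in lockstep, and the outer SRF then runs once per resulting row. The new expected output is easy to reproduce; zip here stands in for the lockstep pairing (this models the semantics, not the executor):

```python
def generate_series(a, b):
    return list(range(a, b + 1))

rows = []
# argument SRFs advance together: (1,2), (2,3), (3,4)
for start, stop in zip(generate_series(1, 3), generate_series(2, 4)):
    rows.extend(generate_series(start, stop))
# rows is [1, 2, 2, 3, 3, 4], the six rows in the new expected output
```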
Attachment: 0002-Implement-targetlist-set-returning-functions-in-a-ne.patch (text/x-patch; charset=us-ascii)
From 6e07070ac1f2544ce8f0e455cc34b25144dd4a3e Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 16 Jan 2017 12:40:13 -0800
Subject: [PATCH 2/2] Implement targetlist set returning functions in a new
pipeline node.
---
src/backend/commands/explain.c | 5 +
src/backend/executor/Makefile | 4 +-
src/backend/executor/execAmi.c | 5 +
src/backend/executor/execProcnode.c | 14 ++
src/backend/executor/execQual.c | 85 ++++----
src/backend/executor/nodeSetResult.c | 322 +++++++++++++++++++++++++++++++
src/backend/nodes/copyfuncs.c | 19 ++
src/backend/nodes/outfuncs.c | 12 +-
src/backend/nodes/readfuncs.c | 16 ++
src/backend/optimizer/path/allpaths.c | 3 +
src/backend/optimizer/plan/createplan.c | 93 ++++++---
src/backend/optimizer/plan/planner.c | 4 +-
src/backend/optimizer/plan/setrefs.c | 21 ++
src/backend/optimizer/plan/subselect.c | 1 +
src/backend/optimizer/util/pathnode.c | 17 +-
src/include/executor/executor.h | 4 +
src/include/executor/nodeSetResult.h | 24 +++
src/include/nodes/execnodes.h | 15 ++
src/include/nodes/nodes.h | 3 +
src/include/nodes/plannodes.h | 7 +
src/include/nodes/relation.h | 11 +-
src/include/optimizer/pathnode.h | 2 +-
src/test/regress/expected/aggregates.out | 2 +-
src/test/regress/expected/limit.out | 8 +-
src/test/regress/expected/portals.out | 8 +-
src/test/regress/expected/subselect.out | 13 +-
src/test/regress/expected/tsrf.out | 8 +-
src/test/regress/expected/union.out | 2 +-
28 files changed, 631 insertions(+), 97 deletions(-)
create mode 100644 src/backend/executor/nodeSetResult.c
create mode 100644 src/include/executor/nodeSetResult.h
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ee7046c47b..a1a42f747d 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -852,6 +852,11 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Result:
pname = sname = "Result";
break;
+
+ case T_SetResult:
+ pname = sname = "SetResult";
+ break;
+
case T_ModifyTable:
sname = "ModifyTable";
switch (((ModifyTable *) plan)->operation)
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c5e7..15587435d7 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -22,8 +22,8 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeLimit.o nodeLockRows.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
- nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
- nodeValuesscan.o nodeCtescan.o nodeWorktablescan.o \
+ nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSetResult.o nodeSort.o \
+ nodeUnique.o nodeValuesscan.o nodeCtescan.o nodeWorktablescan.o \
nodeGroup.o nodeSubplan.o nodeSubqueryscan.o nodeTidscan.o \
nodeForeignscan.o nodeWindowAgg.o tstoreReceiver.o tqueue.o spi.o
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index 3ea36979b3..c9c222f446 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -44,6 +44,7 @@
#include "executor/nodeSamplescan.h"
#include "executor/nodeSeqscan.h"
#include "executor/nodeSetOp.h"
+#include "executor/nodeSetResult.h"
#include "executor/nodeSort.h"
#include "executor/nodeSubplan.h"
#include "executor/nodeSubqueryscan.h"
@@ -130,6 +131,10 @@ ExecReScan(PlanState *node)
ExecReScanResult((ResultState *) node);
break;
+ case T_SetResultState:
+ ExecReScanSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
ExecReScanModifyTable((ModifyTableState *) node);
break;
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index b8edd36470..f3cc706f13 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -106,6 +106,7 @@
#include "executor/nodeSamplescan.h"
#include "executor/nodeSeqscan.h"
#include "executor/nodeSetOp.h"
+#include "executor/nodeSetResult.h"
#include "executor/nodeSort.h"
#include "executor/nodeSubplan.h"
#include "executor/nodeSubqueryscan.h"
@@ -155,6 +156,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_SetResult:
+ result = (PlanState *) ExecInitSetResult((SetResult *) node,
+ estate, eflags);
+ break;
+
case T_ModifyTable:
result = (PlanState *) ExecInitModifyTable((ModifyTable *) node,
estate, eflags);
@@ -392,6 +398,10 @@ ExecProcNode(PlanState *node)
result = ExecResult((ResultState *) node);
break;
+ case T_SetResultState:
+ result = ExecSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
result = ExecModifyTable((ModifyTableState *) node);
break;
@@ -634,6 +644,10 @@ ExecEndNode(PlanState *node)
ExecEndResult((ResultState *) node);
break;
+ case T_SetResultState:
+ ExecEndSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
ExecEndModifyTable((ModifyTableState *) node);
break;
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index bf007b7efd..475efedad2 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -104,10 +104,6 @@ static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
Tuplestorestate *resultStore,
TupleDesc resultDesc);
static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
-static Datum ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
@@ -1681,7 +1677,7 @@ tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc)
* This function handles the most general case, wherein the function or
* one of its arguments can return a set.
*/
-static Datum
+Datum
ExecMakeFunctionResult(FuncExprState *fcache,
ExprContext *econtext,
bool *isNull,
@@ -1702,6 +1698,32 @@ restart:
check_stack_depth();
/*
+ * Initialize function cache if first time through. Unfortunately the
+ * parent can be either a FuncExpr or an OpExpr. This is a bit ugly.
+ */
+ if (fcache->func.fn_oid == InvalidOid)
+ {
+ if (IsA(fcache->xprstate.expr, FuncExpr))
+ {
+ FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
+
+ init_fcache(func->funcid, func->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
+ }
+ else if (IsA(fcache->xprstate.expr, OpExpr))
+ {
+ OpExpr *op = (OpExpr *) fcache->xprstate.expr;
+
+ init_fcache(op->opfuncid, op->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
+ }
+ else
+ elog(ERROR, "unrecognized node type: %d",
+ (int) nodeTag(fcache->xprstate.expr));
+ }
+
+ /*
* If a previous call of the function returned a set result in the form of
* a tuplestore, continue reading rows from the tuplestore until it's
* empty.
@@ -2423,24 +2445,18 @@ ExecEvalFunc(FuncExprState *fcache,
/* Initialize function lookup info */
init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) func->args))
+ if (expression_returns_set((Node *) func->args) ||
+ fcache->func.fn_retset)
{
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
}
+
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
}
/* ----------------------------------------------------------------
@@ -2458,24 +2474,23 @@ ExecEvalOper(FuncExprState *fcache,
/* Initialize function lookup info */
init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) op->args))
+ /* set-returning operators are not supported here */
+ if (expression_returns_set((Node *) op->args) ||
+ fcache->func.fn_retset)
{
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
}
+
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
}
/* ----------------------------------------------------------------
diff --git a/src/backend/executor/nodeSetResult.c b/src/backend/executor/nodeSetResult.c
new file mode 100644
index 0000000000..55a9789632
--- /dev/null
+++ b/src/backend/executor/nodeSetResult.c
@@ -0,0 +1,322 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeSetResult.c
+ * support for evaluating targetlist set returning functions
+ *
+ * DESCRIPTION
+ *
+ * SetResult nodes are inserted by the planner to evaluate set returning
+ * functions in the targetlist. It's guaranteed that all set returning
+ * functions are directly at the top level of the targetlist, i.e. they
+ * cannot be contained inside more complex expressions. If that would
+ * otherwise be the case, the planner adds additional SetResult nodes.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeSetResult.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "executor/executor.h"
+#include "executor/nodeSetResult.h"
+#include "utils/memutils.h"
+
+
+static TupleTableSlot *
+ExecProjectSRF(SetResultState *node, bool continuing);
+
+
+/* ----------------------------------------------------------------
+ * ExecSetResult(node)
+ *
+ * Returns tuples produced by projecting the node's targetlist, which
+ * contains at least one set-returning function, over each tuple from
+ * the outer plan (or over a single virtual tuple if there is no outer
+ * plan). Each input tuple may yield zero or more output tuples.
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecSetResult(SetResultState *node)
+{
+ TupleTableSlot *outerTupleSlot;
+ TupleTableSlot *resultSlot;
+ PlanState *outerPlan;
+ ExprContext *econtext;
+
+ econtext = node->ps.ps_ExprContext;
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->pending_srf_tuples)
+ {
+ resultSlot = ExecProjectSRF(node, true);
+
+ if (resultSlot != NULL)
+ return resultSlot;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note this can't happen
+ * until we're done projecting out tuples from a scan tuple.
+ */
+ ResetExprContext(econtext);
+
+ /*
+ * if input_done is true then it means that we were asked to return a
+ * constant tuple and we already did the last time ExecSetResult() was
+ * called. Either way, now we are through.
+ */
+ while (!node->input_done)
+ {
+ outerPlan = outerPlanState(node);
+
+ if (outerPlan != NULL)
+ {
+ /*
+ * Retrieve tuples from the outer plan until there are no more.
+ */
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (TupIsNull(outerTupleSlot))
+ return NULL;
+
+ /*
+ * Prepare to compute projection expressions, which will expect to
+ * access the input tuples as varno OUTER.
+ */
+ econtext->ecxt_outertuple = outerTupleSlot;
+ }
+ else
+ {
+ /*
+ * If we don't have an outer plan, then we are just generating the
+ * results from a constant target list. Do it only once.
+ */
+ node->input_done = true;
+ }
+
+ resultSlot = ExecProjectSRF(node, false);
+
+ /*
+ * Return the tuple unless the projection produced no rows (due to an
+ * empty set), in which case we must loop back to see if there are
+ * more outerPlan tuples.
+ */
+ if (resultSlot)
+ return resultSlot;
+ }
+
+ return NULL;
+}
+
+/* ----------------------------------------------------------------
+ * ExecProjectSRF
+ *
+ * Project a targetlist containing one or more set returning functions.
+ *
+ * continuing is to be set to true if we're continuing to project rows
+ * for the same input tuple.
+ *
+ * Returns NULL if no output tuple has been produced.
+ *
+ * ----------------------------------------------------------------
+ */
+static TupleTableSlot *
+ExecProjectSRF(SetResultState *node, bool continuing)
+{
+ TupleTableSlot *resultSlot = node->ps.ps_ResultTupleSlot;
+ ExprContext *econtext = node->ps.ps_ExprContext;
+ ListCell *lc;
+ int argno;
+ bool hasresult;
+ bool hassrf PG_USED_FOR_ASSERTS_ONLY = false;
+
+ ExecClearTuple(resultSlot);
+
+ /*
+ * Assume no further tuples are produced unless an ExprMultipleResult is
+ * encountered from a set returning function.
+ */
+ node->pending_srf_tuples = false;
+
+ hasresult = false;
+ argno = 0;
+ foreach(lc, node->ps.targetlist)
+ {
+ GenericExprState *gstate = (GenericExprState *) lfirst(lc);
+ ExprDoneCond *isdone = &node->elemdone[argno];
+ Datum *result = &resultSlot->tts_values[argno];
+ bool *isnull = &resultSlot->tts_isnull[argno];
+
+ if (continuing && *isdone == ExprEndResult)
+ {
+ /*
+ * If we're continuing to project output rows from a source tuple,
+ * return NULLs once the SRF has been exhausted.
+ */
+ *result = 0;
+ *isnull = true;
+ hassrf = true;
+ }
+ else if (IsA(gstate->arg, FuncExprState) &&
+ ((FuncExpr *) gstate->arg->expr)->funcretset)
+ {
+ /*
+ * Evaluate SRF - possibly continuing previously started output.
+ */
+ *result = ExecMakeFunctionResult((FuncExprState *) gstate->arg,
+ econtext, isnull, isdone);
+
+ if (node->elemdone[argno] != ExprEndResult)
+ hasresult = true;
+ if (node->elemdone[argno] == ExprMultipleResult)
+ node->pending_srf_tuples = true;
+ hassrf = true;
+ }
+ else
+ {
+ *result = ExecEvalExpr(gstate->arg, econtext, isnull, NULL);
+ *isdone = ExprSingleResult;
+ }
+
+ argno++;
+ }
+
+ /* SetResult should not be used if there are no SRFs */
+ Assert(hassrf);
+
+ /*
+ * If all the SRFs returned EndResult, we consider that as no result being
+ * produced.
+ */
+ if (hasresult)
+ {
+ ExecStoreVirtualTuple(resultSlot);
+ return resultSlot;
+ }
+
+ return NULL;
+}
+
+/* ----------------------------------------------------------------
+ * ExecInitSetResult
+ *
+ * Creates the run-time state information for the SetResult node
+ * produced by the planner and initializes outer relations
+ * (child nodes).
+ * ----------------------------------------------------------------
+ */
+SetResultState *
+ExecInitSetResult(SetResult *node, EState *estate, int eflags)
+{
+ SetResultState *state;
+
+ /* check for unsupported flags */
+ Assert(!(eflags & (EXEC_FLAG_MARK | EXEC_FLAG_BACKWARD)) ||
+ outerPlan(node) != NULL);
+
+ /*
+ * create state structure
+ */
+ state = makeNode(SetResultState);
+ state->ps.plan = (Plan *) node;
+ state->ps.state = estate;
+
+ state->input_done = false;
+ state->pending_srf_tuples = false;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &state->ps);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) state);
+ state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) state);
+
+ /*
+ * initialize child nodes
+ */
+ outerPlanState(state) = ExecInitNode(outerPlan(node), estate, eflags);
+
+ /*
+ * we don't use inner plan
+ */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * initialize tuple type and projection info
+ */
+ ExecAssignResultTypeFromTL(&state->ps);
+
+ state->nelems = list_length(node->plan.targetlist);
+ state->elemdone = palloc(sizeof(ExprDoneCond) * state->nelems);
+
+ return state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndSetResult
+ *
+ * frees up storage allocated through C routines
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndSetResult(SetResultState *node)
+{
+ /*
+ * Free the exprcontext
+ */
+ ExecFreeExprContext(&node->ps);
+
+ /*
+ * clean out the tuple table
+ */
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+
+ /*
+ * shut down subplans
+ */
+ ExecEndNode(outerPlanState(node));
+}
+
+void
+ExecReScanSetResult(SetResultState *node)
+{
+ node->input_done = false;
+ node->pending_srf_tuples = false;
+
+ /*
+ * If chgParam of subnode is not null then plan will be re-scanned by
+ * first ExecProcNode.
+ */
+ if (node->ps.lefttree &&
+ node->ps.lefttree->chgParam == NULL)
+ ExecReScan(node->ps.lefttree);
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 7107bbf164..37fbb35455 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -166,6 +166,22 @@ _copyResult(const Result *from)
}
/*
+ * _copySetResult
+ */
+static SetResult *
+_copySetResult(const SetResult *from)
+{
+ SetResult *newnode = makeNode(SetResult);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ return newnode;
+}
+
+/*
* _copyModifyTable
*/
static ModifyTable *
@@ -4413,6 +4429,9 @@ copyObject(const void *from)
case T_Result:
retval = _copyResult(from);
break;
+ case T_SetResult:
+ retval = _copySetResult(from);
+ break;
case T_ModifyTable:
retval = _copyModifyTable(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 73fdc9706d..6a1b9a4536 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -327,6 +327,14 @@ _outResult(StringInfo str, const Result *node)
}
static void
+_outSetResult(StringInfo str, const SetResult *node)
+{
+ WRITE_NODE_TYPE("SETRESULT");
+
+ _outPlanInfo(str, (const Plan *) node);
+}
+
+static void
_outModifyTable(StringInfo str, const ModifyTable *node)
{
WRITE_NODE_TYPE("MODIFYTABLE");
@@ -1805,7 +1813,6 @@ _outProjectionPath(StringInfo str, const ProjectionPath *node)
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
- WRITE_BOOL_FIELD(srfpp);
}
static void
@@ -3362,6 +3369,9 @@ outNode(StringInfo str, const void *obj)
case T_Result:
_outResult(str, obj);
break;
+ case T_SetResult:
+ _outSetResult(str, obj);
+ break;
case T_ModifyTable:
_outModifyTable(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index e02dd94f05..f47b841947 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -1483,6 +1483,20 @@ _readResult(void)
READ_DONE();
}
+
+/*
+ * _readSetResult
+ */
+static SetResult *
+_readSetResult(void)
+{
+ READ_LOCALS_NO_FIELDS(SetResult);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_DONE();
+}
+
/*
* _readModifyTable
*/
@@ -2450,6 +2464,8 @@ parseNodeString(void)
return_value = _readPlan();
else if (MATCH("RESULT", 6))
return_value = _readResult();
+ else if (MATCH("SETRESULT", 9))
+ return_value = _readSetResult();
else if (MATCH("MODIFYTABLE", 11))
return_value = _readModifyTable();
else if (MATCH("APPEND", 6))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 46d7d064d4..1708e8062c 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2976,6 +2976,9 @@ print_path(PlannerInfo *root, Path *path, int indent)
case T_ResultPath:
ptype = "Result";
break;
+ case T_SetProjectionPath:
+ ptype = "SetResult";
+ break;
case T_MaterialPath:
ptype = "Material";
subpath = ((MaterialPath *) path)->subpath;
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 875de739a8..78f9d1b4c3 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -81,6 +81,7 @@ static Plan *create_join_plan(PlannerInfo *root, JoinPath *best_path);
static Plan *create_append_plan(PlannerInfo *root, AppendPath *best_path);
static Plan *create_merge_append_plan(PlannerInfo *root, MergeAppendPath *best_path);
static Result *create_result_plan(PlannerInfo *root, ResultPath *best_path);
+static SetResult *create_set_result_plan(PlannerInfo *root, SetProjectionPath *best_path);
static Material *create_material_plan(PlannerInfo *root, MaterialPath *best_path,
int flags);
static Plan *create_unique_plan(PlannerInfo *root, UniquePath *best_path,
@@ -264,6 +265,7 @@ static SetOp *make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree,
long numGroups);
static LockRows *make_lockrows(Plan *lefttree, List *rowMarks, int epqParam);
static Result *make_result(List *tlist, Node *resconstantqual, Plan *subplan);
+static SetResult *make_set_result(List *tlist, Plan *subplan);
static ModifyTable *make_modifytable(PlannerInfo *root,
CmdType operation, bool canSetTag,
Index nominalRelation,
@@ -392,6 +394,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(ResultPath *) best_path);
}
break;
+ case T_SetResult:
+ plan = (Plan *) create_set_result_plan(root,
+ (SetProjectionPath *) best_path);
+ break;
case T_Material:
plan = (Plan *) create_material_plan(root,
(MaterialPath *) best_path,
@@ -1142,6 +1148,44 @@ create_result_plan(PlannerInfo *root, ResultPath *best_path)
}
/*
+ * create_set_result_plan
+ * Create a SetResult plan for 'best_path'.
+ *
+ * Returns a Plan node.
+ */
+static SetResult *
+create_set_result_plan(PlannerInfo *root, SetProjectionPath *best_path)
+{
+ SetResult *plan;
+ Plan *subplan;
+ List *tlist;
+
+ /*
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a SetResult with no input. This avoids
+ * an extra Result plan node when doing "SELECT srf()". Depending on what
+ * we decide about the desired plan structure for SRF-expanding nodes,
+ * this optimization might have to go away, and in any case it'll probably
+ * look a good bit different.
+ */
+ if (IsA(best_path->subpath, ResultPath) &&
+ ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
+ ((ResultPath *) best_path->subpath)->quals == NIL)
+ subplan = NULL;
+ else
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
+
+ tlist = build_path_tlist(root, &best_path->path);
+
+ plan = make_set_result(tlist, subplan);
+
+ copy_generic_path_info(&plan->plan, (Path *) best_path);
+
+ return plan;
+}
+
+/*
* create_material_plan
* Create a Material plan for 'best_path' and (recursively) plans
* for its subpaths.
@@ -1421,21 +1465,8 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
Plan *subplan;
List *tlist;
- /*
- * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
- * don't bother with it, just make a Result with no input. This avoids an
- * extra Result plan node when doing "SELECT srf()". Depending on what we
- * decide about the desired plan structure for SRF-expanding nodes, this
- * optimization might have to go away, and in any case it'll probably look
- * a good bit different.
- */
- if (IsA(best_path->subpath, ResultPath) &&
- ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
- ((ResultPath *) best_path->subpath)->quals == NIL)
- subplan = NULL;
- else
- /* Since we intend to project, we don't need to constrain child tlist */
- subplan = create_plan_recurse(root, best_path->subpath, 0);
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
@@ -1454,9 +1485,8 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
- if (!best_path->srfpp &&
- (is_projection_capable_path(best_path->subpath) ||
- tlist_same_exprs(tlist, subplan->targetlist)))
+ if (is_projection_capable_path(best_path->subpath) ||
+ tlist_same_exprs(tlist, subplan->targetlist))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
@@ -6041,6 +6071,25 @@ make_result(List *tlist,
}
/*
+ * make_set_result
+ * Build a SetResult plan node
+ */
+static SetResult *
+make_set_result(List *tlist,
+ Plan *subplan)
+{
+ SetResult *node = makeNode(SetResult);
+ Plan *plan = &node->plan;
+
+ plan->targetlist = tlist;
+ plan->qual = NIL;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+
+ return node;
+}
+
+/*
* make_modifytable
* Build a ModifyTable plan node
*/
@@ -6206,17 +6255,15 @@ is_projection_capable_path(Path *path)
* projection to its dummy path.
*/
return IS_DUMMY_PATH(path);
- case T_Result:
+ case T_SetResult:
/*
* If the path is doing SRF evaluation, claim it can't project, so
* we don't jam a new tlist into it and thereby break the property
* that the SRFs appear at top level.
*/
- if (IsA(path, ProjectionPath) &&
- ((ProjectionPath *) path)->srfpp)
- return false;
- break;
+ return false;
+
default:
break;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 70870bbbe0..a208f511d9 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -5245,7 +5245,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
/* If this level doesn't contain SRFs, do regular projection */
if (contains_srfs)
- newpath = (Path *) create_srf_projection_path(root,
+ newpath = (Path *) create_set_projection_path(root,
rel,
newpath,
thistarget);
@@ -5278,7 +5278,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
/* If this level doesn't contain SRFs, do regular projection */
if (contains_srfs)
- newpath = (Path *) create_srf_projection_path(root,
+ newpath = (Path *) create_set_projection_path(root,
rel,
newpath,
thistarget);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 413a0d9da2..e77312d6af 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -733,6 +733,27 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
fix_scan_expr(root, splan->resconstantqual, rtoffset);
}
break;
+
+ case T_SetResult:
+ {
+ SetResult *splan = (SetResult *) plan;
+
+ /*
+ * SetResult may or may not have a subplan; if not, it's more
+ * like a scan node than an upper node.
+ */
+ if (splan->plan.lefttree != NULL)
+ set_upper_references(root, plan, rtoffset);
+ else
+ {
+ splan->plan.targetlist =
+ fix_scan_list(root, splan->plan.targetlist, rtoffset);
+ splan->plan.qual =
+ fix_scan_list(root, splan->plan.qual, rtoffset);
+ }
+ }
+ break;
+
case T_ModifyTable:
{
ModifyTable *splan = (ModifyTable *) plan;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index aad0b684ed..ad8b75b4d9 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2680,6 +2680,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
&context);
break;
+ case T_SetResult:
case T_Hash:
case T_Material:
case T_Sort:
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index aa635fd057..2e30af20af 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -2227,9 +2227,6 @@ create_projection_path(PlannerInfo *root,
(cpu_tuple_cost + target->cost.per_tuple) * subpath->rows;
}
- /* Assume no SRFs around */
- pathnode->srfpp = false;
-
return pathnode;
}
@@ -2333,17 +2330,17 @@ apply_projection_to_path(PlannerInfo *root,
* 'subpath' is the path representing the source of data
* 'target' is the PathTarget to be computed
*/
-ProjectionPath *
-create_srf_projection_path(PlannerInfo *root,
+SetProjectionPath *
+create_set_projection_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
PathTarget *target)
{
- ProjectionPath *pathnode = makeNode(ProjectionPath);
+ SetProjectionPath *pathnode = makeNode(SetProjectionPath);
double tlist_rows;
ListCell *lc;
- pathnode->path.pathtype = T_Result;
+ pathnode->path.pathtype = T_SetResult;
pathnode->path.parent = rel;
pathnode->path.pathtarget = target;
/* For now, assume we are above any joins, so no parameterization */
@@ -2353,15 +2350,11 @@ create_srf_projection_path(PlannerInfo *root,
subpath->parallel_safe &&
is_parallel_safe(root, (Node *) target->exprs);
pathnode->path.parallel_workers = subpath->parallel_workers;
- /* Projection does not change the sort order */
+ /* Projection does not change the sort order XXX? */
pathnode->path.pathkeys = subpath->pathkeys;
pathnode->subpath = subpath;
- /* Always need the Result node */
- pathnode->dummypp = false;
- pathnode->srfpp = true;
-
/*
* Estimate number of rows produced by SRFs for each row of input; if
* there's more than one in this node, use the maximum.
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index b9c7f72903..59fae35ab5 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -262,6 +262,10 @@ extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo,
ExprDoneCond *isDone);
+extern Datum ExecMakeFunctionResult(FuncExprState *fcache,
+ ExprContext *econtext,
+ bool *isNull,
+ ExprDoneCond *isDone);
/*
* prototypes from functions in execScan.c
diff --git a/src/include/executor/nodeSetResult.h b/src/include/executor/nodeSetResult.h
new file mode 100644
index 0000000000..f51cf32956
--- /dev/null
+++ b/src/include/executor/nodeSetResult.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeSetResult.h
+ *
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeSetResult.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODESETRESULT_H
+#define NODESETRESULT_H
+
+#include "nodes/execnodes.h"
+
+extern SetResultState *ExecInitSetResult(SetResult *node, EState *estate, int eflags);
+extern TupleTableSlot *ExecSetResult(SetResultState *node);
+extern void ExecEndSetResult(SetResultState *node);
+extern void ExecReScanSetResult(SetResultState *node);
+
+#endif /* NODESETRESULT_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ce13bf7635..69de3ebbd9 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1129,6 +1129,21 @@ typedef struct ResultState
bool rs_checkqual; /* do we need to check the qual? */
} ResultState;
+
+/* ----------------
+ * SetResultState information
+ * ----------------
+ */
+typedef struct SetResultState
+{
+ PlanState ps; /* its first field is NodeTag */
+ int nelems;
+ ExprDoneCond *elemdone;
+ bool input_done; /* done reading source tuple? */
+ bool pending_srf_tuples; /* evaluating srfs in tlist? */
+} SetResultState;
+
+
/* ----------------
* ModifyTableState information
* ----------------
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 4c4319bcab..be397fb138 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -43,6 +43,7 @@ typedef enum NodeTag
*/
T_Plan,
T_Result,
+ T_SetResult,
T_ModifyTable,
T_Append,
T_MergeAppend,
@@ -91,6 +92,7 @@ typedef enum NodeTag
*/
T_PlanState,
T_ResultState,
+ T_SetResultState,
T_ModifyTableState,
T_AppendState,
T_MergeAppendState,
@@ -245,6 +247,7 @@ typedef enum NodeTag
T_UniquePath,
T_GatherPath,
T_ProjectionPath,
+ T_SetProjectionPath,
T_SortPath,
T_GroupPath,
T_UpperUniquePath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 6810f8c099..3405f018fc 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -176,6 +176,13 @@ typedef struct Result
Node *resconstantqual;
} Result;
+
+typedef struct SetResult
+{
+ Plan plan;
+} SetResult;
+
+
/* ----------------
* ModifyTable node -
* Apply rows produced by subplan(s) to result table(s),
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index de4092d679..50fa79926a 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1293,10 +1293,19 @@ typedef struct ProjectionPath
Path path;
Path *subpath; /* path representing input source */
bool dummypp; /* true if no separate Result is needed */
- bool srfpp; /* true if SRFs are being evaluated here */
} ProjectionPath;
/*
+ * SetProjectionPath represents an evaluation of a targetlist set returning
+ * function.
+ */
+typedef struct SetProjectionPath
+{
+ Path path;
+ Path *subpath; /* path representing input source */
+} SetProjectionPath;
+
+/*
* SortPath represents an explicit sort step
*
* The sort keys are, by definition, the same as path.pathkeys.
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index c11c59df23..9cbd87c0a2 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -144,7 +144,7 @@ extern Path *apply_projection_to_path(PlannerInfo *root,
RelOptInfo *rel,
Path *path,
PathTarget *target);
-extern ProjectionPath *create_srf_projection_path(PlannerInfo *root,
+extern SetProjectionPath *create_set_projection_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
PathTarget *target);
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index b71d81ee21..c7a87a25a9 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -822,7 +822,7 @@ explain (costs off)
-> Limit
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
- -> Result
+ -> SetResult
-> Result
(8 rows)
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index a7ded3ad05..f3124394a3 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -212,7 +212,7 @@ select unique1, unique2, generate_series(1,10)
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
- -> Result
+ -> SetResult
Output: unique1, unique2, generate_series(1, 10)
-> Index Scan using tenk1_unique2 on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
@@ -238,7 +238,7 @@ select unique1, unique2, generate_series(1,10)
--------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10)), tenthous
- -> Result
+ -> SetResult
Output: unique1, unique2, generate_series(1, 10), tenthous
-> Sort
Output: unique1, unique2, tenthous
@@ -265,7 +265,7 @@ explain (verbose, costs off)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2;
QUERY PLAN
------------------------------------------------------------------------------------------------------
- Result
+ SetResult
Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
(2 rows)
@@ -285,7 +285,7 @@ order by s2 desc;
Sort
Output: (generate_series(0, 2)), (generate_series(((random() * '0.1'::double precision))::integer, 2))
Sort Key: (generate_series(((random() * '0.1'::double precision))::integer, 2)) DESC
- -> Result
+ -> SetResult
Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
(5 rows)
diff --git a/src/test/regress/expected/portals.out b/src/test/regress/expected/portals.out
index 3ae918a63c..b49fa17eb3 100644
--- a/src/test/regress/expected/portals.out
+++ b/src/test/regress/expected/portals.out
@@ -1322,14 +1322,14 @@ begin;
explain (costs off) declare c2 cursor for select generate_series(1,3) as g;
QUERY PLAN
------------
- Result
+ SetResult
(1 row)
explain (costs off) declare c2 scroll cursor for select generate_series(1,3) as g;
- QUERY PLAN
---------------
+ QUERY PLAN
+-----------------
Materialize
- -> Result
+ -> SetResult
(2 rows)
declare c2 scroll cursor for select generate_series(1,3) as g;
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 3ed089aa46..0215c9a663 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -821,7 +821,7 @@ select * from int4_tbl o where (f1, f1) in
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
-> Result
Output: i.f1, ((generate_series(1, 2)) / 10)
- -> Result
+ -> SetResult
Output: i.f1, generate_series(1, 2)
-> HashAggregate
Output: i.f1
@@ -903,7 +903,7 @@ select * from
Subquery Scan on ss
Output: x, u
Filter: tattle(ss.x, 8)
- -> Result
+ -> SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
(5 rows)
@@ -934,10 +934,11 @@ select * from
where tattle(x, 8);
QUERY PLAN
----------------------------------------------------
- Result
+ SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
- One-Time Filter: tattle(9, 8)
-(3 rows)
+ -> Result
+ One-Time Filter: tattle(9, 8)
+(4 rows)
select * from
(select 9 as x, unnest(array[1,2,3,11,12,13]) as u) ss
@@ -963,7 +964,7 @@ select * from
Subquery Scan on ss
Output: x, u
Filter: tattle(ss.x, ss.u)
- -> Result
+ -> SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
(5 rows)
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index f257537925..8c47f0f668 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -25,8 +25,8 @@ SELECT generate_series(1, 2), generate_series(1,4);
-----------------+-----------------
1 | 1
2 | 2
- 1 | 3
- 2 | 4
+ | 3
+ | 4
(4 rows)
-- srf, with SRF argument
@@ -127,15 +127,15 @@ SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
dataa | count | min | max | unnest
-------+-------+-----+-----+--------
- a | 2 | 1 | 1 | 1
a | 1 | 1 | 1 | 3
+ a | 2 | 1 | 1 | 1
(2 rows)
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
dataa | count | min | max | unnest
-------+-------+-----+-----+--------
- a | 2 | 1 | 1 | 1
a | 1 | 1 | 1 | 3
+ a | 2 | 1 | 1 | 1
(2 rows)
-- check HAVING works when GROUP BY does [not] reference SRF output
diff --git a/src/test/regress/expected/union.out b/src/test/regress/expected/union.out
index 67f5fc4361..743d0bd0ed 100644
--- a/src/test/regress/expected/union.out
+++ b/src/test/regress/expected/union.out
@@ -636,7 +636,7 @@ ORDER BY x;
-> HashAggregate
Group Key: (1), (generate_series(1, 10))
-> Append
- -> Result
+ -> SetResult
-> Result
(9 rows)
--
2.11.0.22.g8d7a455.dirty
On 2017-01-16 16:04:34 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2017-01-16 12:17:46 -0300, Alvaro Herrera wrote:
> > > Hmm. I wonder if your stuff could be used as support code for
> > > XMLTABLE[1].
> >
> > I don't immediately see what functionality overlaps, could you expand on
> > that?
>
> Well, I haven't read any previous patches in this area, but the xmltable
> patch adds a new way of handling set-returning expressions, so it
> appears vaguely related.

Ugh. That's not good - I'm about to remove isDone. Like completely.
That's why I'm actually working on all this, because random expressions
returning more rows makes optimizing expression evaluation a lot harder.

> These aren't properly functions in the current sense of the word,
> though.

Why aren't they? Looks like it'd be doable to do so, at least below the
parser level?
Regards,
Andres
Andres Freund wrote:
> On 2017-01-16 16:04:34 -0300, Alvaro Herrera wrote:
> > Andres Freund wrote:
> > > On 2017-01-16 12:17:46 -0300, Alvaro Herrera wrote:
> > > > Hmm. I wonder if your stuff could be used as support code for
> > > > XMLTABLE[1].
> > >
> > > I don't immediately see what functionality overlaps, could you expand on
> > > that?
> >
> > Well, I haven't read any previous patches in this area, but the xmltable
> > patch adds a new way of handling set-returning expressions, so it
> > appears vaguely related.
>
> Ugh. That's not good - I'm about to remove isDone. Like completely.
> That's why I'm actually working on all this, because random expressions
> returning more rows makes optimizing expression evaluation a lot harder.

Argh.

> > These aren't properly functions in the current sense of the word,
> > though.
>
> Why aren't they? Looks like it'd be doable to do so, at least below the
> parser level?

Hmm ...
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2017-01-16 12:52:14 -0800, Andres Freund wrote:
> > > Comments?
> >
> > Hard to comment on your other points without a patch to look at.
>
> Attached the current version. There's a *lot* of pending cleanup needed
> (especially in execQual.c) removing outdated code/comments etc, but this
> seems good enough for a first review. I'd want that cleanup done in a
> separate patch anyway.
Here's a version with a lot of that pending cleanup added (and other
light updates). Most notably, all SRF-related code is gone from
executor/, excepting ExecMakeFunctionResultSet and nodeSetResult. I'm
sure there are minor remaining references somewhere, but that's the
majority of it.

I think the first two patches should be combined when committing, but
not the later cleanup patch; folding the cleanup in would hide too many
of the actually relevant changes.
Greetings,
Andres Freund
Attachments:
0001-Put-SRF-into-a-separate-node-v1.patch (text/x-patch; charset=us-ascii)
From 2c16e67f46f418239ab90a51611f168508bac66e Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Sun, 15 Jan 2017 19:23:22 -0800
Subject: [PATCH 1/3] Put SRF into a separate node v1.
Author: Tom Lane
Discussion: https://postgr.es/m/557.1473895705@sss.pgh.pa.us
---
src/backend/nodes/outfuncs.c | 1 +
src/backend/optimizer/plan/createplan.c | 33 ++++-
src/backend/optimizer/plan/planner.c | 219 +++++++++++++++++++++++++------
src/backend/optimizer/util/clauses.c | 104 ++-------------
src/backend/optimizer/util/pathnode.c | 75 +++++++++++
src/backend/optimizer/util/tlist.c | 199 ++++++++++++++++++++++++++++
src/include/nodes/relation.h | 1 +
src/include/optimizer/clauses.h | 1 -
src/include/optimizer/pathnode.h | 4 +
src/include/optimizer/tlist.h | 3 +
src/test/regress/expected/aggregates.out | 3 +-
src/test/regress/expected/limit.out | 10 +-
src/test/regress/expected/rangefuncs.out | 10 +-
src/test/regress/expected/subselect.out | 26 ++--
src/test/regress/expected/tsrf.out | 11 +-
15 files changed, 544 insertions(+), 156 deletions(-)
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index cf0a6059e9..73fdc9706d 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -1805,6 +1805,7 @@ _outProjectionPath(StringInfo str, const ProjectionPath *node)
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
+ WRITE_BOOL_FIELD(srfpp);
}
static void
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index c7bcd9b84c..875de739a8 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1421,8 +1421,21 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
Plan *subplan;
List *tlist;
- /* Since we intend to project, we don't need to constrain child tlist */
- subplan = create_plan_recurse(root, best_path->subpath, 0);
+ /*
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a Result with no input. This avoids an
+ * extra Result plan node when doing "SELECT srf()". Depending on what we
+ * decide about the desired plan structure for SRF-expanding nodes, this
+ * optimization might have to go away, and in any case it'll probably look
+ * a good bit different.
+ */
+ if (IsA(best_path->subpath, ResultPath) &&
+ ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
+ ((ResultPath *) best_path->subpath)->quals == NIL)
+ subplan = NULL;
+ else
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
@@ -1441,8 +1454,9 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
- if (is_projection_capable_path(best_path->subpath) ||
- tlist_same_exprs(tlist, subplan->targetlist))
+ if (!best_path->srfpp &&
+ (is_projection_capable_path(best_path->subpath) ||
+ tlist_same_exprs(tlist, subplan->targetlist)))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
@@ -6192,6 +6206,17 @@ is_projection_capable_path(Path *path)
* projection to its dummy path.
*/
return IS_DUMMY_PATH(path);
+ case T_Result:
+
+ /*
+ * If the path is doing SRF evaluation, claim it can't project, so
+ * we don't jam a new tlist into it and thereby break the property
+ * that the SRFs appear at top level.
+ */
+ if (IsA(path, ProjectionPath) &&
+ ((ProjectionPath *) path)->srfpp)
+ return false;
+ break;
default:
break;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index f936710171..70870bbbe0 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -153,6 +153,8 @@ static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
static PathTarget *make_sort_input_target(PlannerInfo *root,
PathTarget *final_target,
bool *have_postponed_srfs);
+static void adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs);
/*****************************************************************************
@@ -1434,8 +1436,9 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
int64 count_est = 0;
double limit_tuples = -1.0;
bool have_postponed_srfs = false;
- double tlist_rows;
PathTarget *final_target;
+ List *final_targets;
+ List *final_targets_contain_srfs;
RelOptInfo *current_rel;
RelOptInfo *final_rel;
ListCell *lc;
@@ -1498,6 +1501,10 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
/* Also extract the PathTarget form of the setop result tlist */
final_target = current_rel->cheapest_total_path->pathtarget;
+ /* The setop result tlist couldn't contain any SRFs */
+ Assert(!parse->hasTargetSRFs);
+ final_targets = final_targets_contain_srfs = NIL;
+
/*
* Can't handle FOR [KEY] UPDATE/SHARE here (parser should have
* checked already, but let's make sure).
@@ -1523,8 +1530,14 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
{
/* No set operations, do regular planning */
PathTarget *sort_input_target;
+ List *sort_input_targets;
+ List *sort_input_targets_contain_srfs;
PathTarget *grouping_target;
+ List *grouping_targets;
+ List *grouping_targets_contain_srfs;
PathTarget *scanjoin_target;
+ List *scanjoin_targets;
+ List *scanjoin_targets_contain_srfs;
bool have_grouping;
AggClauseCosts agg_costs;
WindowFuncLists *wflists = NULL;
@@ -1775,8 +1788,50 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
scanjoin_target = grouping_target;
/*
- * Forcibly apply scan/join target to all the Paths for the scan/join
- * rel.
+ * If there are any SRFs in the targetlist, we must separate each of
+ * these PathTargets into SRF-computing and SRF-free targets. Replace
+ * each of the named targets with a SRF-free version, and remember the
+ * list of additional projection steps we need to add afterwards.
+ */
+ if (parse->hasTargetSRFs)
+ {
+ /* final_target doesn't recompute any SRFs in sort_input_target */
+ split_pathtarget_at_srfs(root, final_target, sort_input_target,
+ &final_targets,
+ &final_targets_contain_srfs);
+ final_target = (PathTarget *) linitial(final_targets);
+ Assert(!linitial_int(final_targets_contain_srfs));
+ /* likewise for sort_input_target vs. grouping_target */
+ split_pathtarget_at_srfs(root, sort_input_target, grouping_target,
+ &sort_input_targets,
+ &sort_input_targets_contain_srfs);
+ sort_input_target = (PathTarget *) linitial(sort_input_targets);
+ Assert(!linitial_int(sort_input_targets_contain_srfs));
+ /* likewise for grouping_target vs. scanjoin_target */
+ split_pathtarget_at_srfs(root, grouping_target, scanjoin_target,
+ &grouping_targets,
+ &grouping_targets_contain_srfs);
+ grouping_target = (PathTarget *) linitial(grouping_targets);
+ Assert(!linitial_int(grouping_targets_contain_srfs));
+ /* scanjoin_target will not have any SRFs precomputed for it */
+ split_pathtarget_at_srfs(root, scanjoin_target, NULL,
+ &scanjoin_targets,
+ &scanjoin_targets_contain_srfs);
+ scanjoin_target = (PathTarget *) linitial(scanjoin_targets);
+ Assert(!linitial_int(scanjoin_targets_contain_srfs));
+ }
+ else
+ {
+ /* initialize lists, just to keep compiler quiet */
+ final_targets = final_targets_contain_srfs = NIL;
+ sort_input_targets = sort_input_targets_contain_srfs = NIL;
+ grouping_targets = grouping_targets_contain_srfs = NIL;
+ scanjoin_targets = scanjoin_targets_contain_srfs = NIL;
+ }
+
+ /*
+ * Forcibly apply SRF-free scan/join target to all the Paths for the
+ * scan/join rel.
*
* In principle we should re-run set_cheapest() here to identify the
* cheapest path, but it seems unlikely that adding the same tlist
@@ -1847,6 +1902,12 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
current_rel->partial_pathlist = NIL;
}
+ /* Now fix things up if scan/join target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ scanjoin_targets,
+ scanjoin_targets_contain_srfs);
+
/*
* Save the various upper-rel PathTargets we just computed into
* root->upper_targets[]. The core code doesn't use this, but it
@@ -1871,6 +1932,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
&agg_costs,
rollup_lists,
rollup_groupclauses);
+ /* Fix things up if grouping_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ grouping_targets,
+ grouping_targets_contain_srfs);
}
/*
@@ -1886,6 +1952,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
tlist,
wflists,
activeWindows);
+ /* Fix things up if sort_input_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ sort_input_targets,
+ sort_input_targets_contain_srfs);
}
/*
@@ -1914,40 +1985,11 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
final_target,
have_postponed_srfs ? -1.0 :
limit_tuples);
- }
-
- /*
- * If there are set-returning functions in the tlist, scale up the output
- * rowcounts of all surviving Paths to account for that. Note that if any
- * SRFs appear in sorting or grouping columns, we'll have underestimated
- * the numbers of rows passing through earlier steps; but that's such a
- * weird usage that it doesn't seem worth greatly complicating matters to
- * account for it.
- */
- if (parse->hasTargetSRFs)
- tlist_rows = tlist_returns_set_rows(tlist);
- else
- tlist_rows = 1;
-
- if (tlist_rows > 1)
- {
- foreach(lc, current_rel->pathlist)
- {
- Path *path = (Path *) lfirst(lc);
-
- /*
- * We assume that execution costs of the tlist as such were
- * already accounted for. However, it still seems appropriate to
- * charge something more for the executor's general costs of
- * processing the added tuples. The cost is probably less than
- * cpu_tuple_cost, though, so we arbitrarily use half of that.
- */
- path->total_cost += path->rows * (tlist_rows - 1) *
- cpu_tuple_cost / 2;
-
- path->rows *= tlist_rows;
- }
- /* No need to run set_cheapest; we're keeping all paths anyway. */
+ /* Fix things up if final_target contains SRFs */
+ if (parse->hasTargetSRFs)
+ adjust_paths_for_srfs(root, current_rel,
+ final_targets,
+ final_targets_contain_srfs);
}
/*
@@ -5151,6 +5193,109 @@ get_cheapest_fractional_path(RelOptInfo *rel, double tuple_fraction)
}
/*
+ * adjust_paths_for_srfs
+ * Fix up the Paths of the given upperrel to handle tSRFs properly.
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * modifies each Path of an upperrel that (might) compute any SRFs in its
+ * output tlist to insert appropriate projection steps.
+ *
+ * The given targets and targets_contain_srfs lists are from
+ * split_pathtarget_at_srfs(). We assume the existing Paths emit the first
+ * target in targets.
+ */
+static void
+adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
+ List *targets, List *targets_contain_srfs)
+{
+ ListCell *lc;
+
+ Assert(list_length(targets) == list_length(targets_contain_srfs));
+ Assert(!linitial_int(targets_contain_srfs));
+
+ /* If no SRFs appear at this plan level, nothing to do */
+ if (list_length(targets) == 1)
+ return;
+
+ /*
+ * Stack SRF-evaluation nodes atop each path for the rel.
+ *
+ * In principle we should re-run set_cheapest() here to identify the
+ * cheapest path, but it seems unlikely that adding the same tlist eval
+ * costs to all the paths would change that, so we don't bother. Instead,
+ * just assume that the cheapest-startup and cheapest-total paths remain
+ * so. (There should be no parameterized paths anymore, so we needn't
+ * worry about updating cheapest_parameterized_paths.)
+ */
+ foreach(lc, rel->pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ newpath = (Path *) apply_projection_to_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ lfirst(lc) = newpath;
+ if (subpath == rel->cheapest_startup_path)
+ rel->cheapest_startup_path = newpath;
+ if (subpath == rel->cheapest_total_path)
+ rel->cheapest_total_path = newpath;
+ }
+
+ /* Likewise for partial paths, if any */
+ foreach(lc, rel->partial_pathlist)
+ {
+ Path *subpath = (Path *) lfirst(lc);
+ Path *newpath = subpath;
+ ListCell *lc1,
+ *lc2;
+
+ Assert(subpath->param_info == NULL);
+ forboth(lc1, targets, lc2, targets_contain_srfs)
+ {
+ PathTarget *thistarget = (PathTarget *) lfirst(lc1);
+ bool contains_srfs = (bool) lfirst_int(lc2);
+
+ /* If this level doesn't contain SRFs, do regular projection */
+ if (contains_srfs)
+ newpath = (Path *) create_srf_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ else
+ {
+ /* avoid apply_projection_to_path, in case of multiple refs */
+ newpath = (Path *) create_projection_path(root,
+ rel,
+ newpath,
+ thistarget);
+ }
+ }
+ lfirst(lc) = newpath;
+ }
+}
+
+/*
* expression_planner
* Perform planner's transformations on a standalone expression.
*
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 59ccdf43d4..a763c7fe24 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -99,7 +99,6 @@ static bool contain_agg_clause_walker(Node *node, void *context);
static bool get_agg_clause_costs_walker(Node *node,
get_agg_clause_costs_context *context);
static bool find_window_functions_walker(Node *node, WindowFuncLists *lists);
-static bool expression_returns_set_rows_walker(Node *node, double *count);
static bool contain_subplans_walker(Node *node, void *context);
static bool contain_mutable_functions_walker(Node *node, void *context);
static bool contain_volatile_functions_walker(Node *node, void *context);
@@ -790,114 +789,37 @@ find_window_functions_walker(Node *node, WindowFuncLists *lists)
/*
* expression_returns_set_rows
* Estimate the number of rows returned by a set-returning expression.
- * The result is 1 if there are no set-returning functions.
+ * The result is 1 if it's not a set-returning expression.
*
- * We use the product of the rowcount estimates of all the functions in
- * the given tree (this corresponds to the behavior of ExecMakeFunctionResult
- * for nested set-returning functions).
+ * We should only examine the top-level function or operator; it used to be
+ * appropriate to recurse, but not anymore. (Even if there are more SRFs in
+ * the function's inputs, their multipliers are accounted for separately.)
*
* Note: keep this in sync with expression_returns_set() in nodes/nodeFuncs.c.
*/
double
expression_returns_set_rows(Node *clause)
{
- double result = 1;
-
- (void) expression_returns_set_rows_walker(clause, &result);
- return clamp_row_est(result);
-}
-
-static bool
-expression_returns_set_rows_walker(Node *node, double *count)
-{
- if (node == NULL)
- return false;
- if (IsA(node, FuncExpr))
+ if (clause == NULL)
+ return 1.0;
+ if (IsA(clause, FuncExpr))
{
- FuncExpr *expr = (FuncExpr *) node;
+ FuncExpr *expr = (FuncExpr *) clause;
if (expr->funcretset)
- *count *= get_func_rows(expr->funcid);
+ return clamp_row_est(get_func_rows(expr->funcid));
}
- if (IsA(node, OpExpr))
+ if (IsA(clause, OpExpr))
{
- OpExpr *expr = (OpExpr *) node;
+ OpExpr *expr = (OpExpr *) clause;
if (expr->opretset)
{
set_opfuncid(expr);
- *count *= get_func_rows(expr->opfuncid);
+ return clamp_row_est(get_func_rows(expr->opfuncid));
}
}
-
- /* Avoid recursion for some cases that can't return a set */
- if (IsA(node, Aggref))
- return false;
- if (IsA(node, WindowFunc))
- return false;
- if (IsA(node, DistinctExpr))
- return false;
- if (IsA(node, NullIfExpr))
- return false;
- if (IsA(node, ScalarArrayOpExpr))
- return false;
- if (IsA(node, BoolExpr))
- return false;
- if (IsA(node, SubLink))
- return false;
- if (IsA(node, SubPlan))
- return false;
- if (IsA(node, AlternativeSubPlan))
- return false;
- if (IsA(node, ArrayExpr))
- return false;
- if (IsA(node, RowExpr))
- return false;
- if (IsA(node, RowCompareExpr))
- return false;
- if (IsA(node, CoalesceExpr))
- return false;
- if (IsA(node, MinMaxExpr))
- return false;
- if (IsA(node, XmlExpr))
- return false;
-
- return expression_tree_walker(node, expression_returns_set_rows_walker,
- (void *) count);
-}
-
-/*
- * tlist_returns_set_rows
- * Estimate the number of rows returned by a set-returning targetlist.
- * The result is 1 if there are no set-returning functions.
- *
- * Here, the result is the largest rowcount estimate of any of the tlist's
- * expressions, not the product as you would get from naively applying
- * expression_returns_set_rows() to the whole tlist. The behavior actually
- * implemented by ExecTargetList produces a number of rows equal to the least
- * common multiple of the expression rowcounts, so that the product would be
- * a worst-case estimate that is typically not realistic. Taking the max as
- * we do here is a best-case estimate that might not be realistic either,
- * but it's probably closer for typical usages. We don't try to compute the
- * actual LCM because we're working with very approximate estimates, so their
- * LCM would be unduly noisy.
- */
-double
-tlist_returns_set_rows(List *tlist)
-{
- double result = 1;
- ListCell *lc;
-
- foreach(lc, tlist)
- {
- TargetEntry *tle = (TargetEntry *) lfirst(lc);
- double colresult;
-
- colresult = expression_returns_set_rows((Node *) tle->expr);
- if (result < colresult)
- result = colresult;
- }
- return result;
+ return 1.0;
}
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 3b7c56d3c7..aa635fd057 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -2227,6 +2227,9 @@ create_projection_path(PlannerInfo *root,
(cpu_tuple_cost + target->cost.per_tuple) * subpath->rows;
}
+ /* Assume no SRFs around */
+ pathnode->srfpp = false;
+
return pathnode;
}
@@ -2320,6 +2323,78 @@ apply_projection_to_path(PlannerInfo *root,
}
/*
+ * create_srf_projection_path
+ * Creates a pathnode that represents performing a SRF projection.
+ *
+ * For the moment, we just use ProjectionPath for this, and generate a
+ * Result plan node. That's likely to change.
+ *
+ * 'rel' is the parent relation associated with the result
+ * 'subpath' is the path representing the source of data
+ * 'target' is the PathTarget to be computed
+ */
+ProjectionPath *
+create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target)
+{
+ ProjectionPath *pathnode = makeNode(ProjectionPath);
+ double tlist_rows;
+ ListCell *lc;
+
+ pathnode->path.pathtype = T_Result;
+ pathnode->path.parent = rel;
+ pathnode->path.pathtarget = target;
+ /* For now, assume we are above any joins, so no parameterization */
+ pathnode->path.param_info = NULL;
+ pathnode->path.parallel_aware = false;
+ pathnode->path.parallel_safe = rel->consider_parallel &&
+ subpath->parallel_safe &&
+ is_parallel_safe(root, (Node *) target->exprs);
+ pathnode->path.parallel_workers = subpath->parallel_workers;
+ /* Projection does not change the sort order */
+ pathnode->path.pathkeys = subpath->pathkeys;
+
+ pathnode->subpath = subpath;
+
+ /* Always need the Result node */
+ pathnode->dummypp = false;
+ pathnode->srfpp = true;
+
+ /*
+ * Estimate number of rows produced by SRFs for each row of input; if
+ * there's more than one in this node, use the maximum.
+ */
+ tlist_rows = 1;
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+ double itemrows;
+
+ itemrows = expression_returns_set_rows(node);
+ if (tlist_rows < itemrows)
+ tlist_rows = itemrows;
+ }
+
+ /*
+ * In addition to the cost of evaluating the tlist, charge cpu_tuple_cost
+ * per input row, and half of cpu_tuple_cost for each added output row.
+ * This is slightly bizarre maybe, but it's what 9.6 did; we may revisit
+ * this estimate later.
+ */
+ pathnode->path.rows = subpath->rows * tlist_rows;
+ pathnode->path.startup_cost = subpath->startup_cost +
+ target->cost.startup;
+ pathnode->path.total_cost = subpath->total_cost +
+ target->cost.startup +
+ (cpu_tuple_cost + target->cost.per_tuple) * subpath->rows +
+ (pathnode->path.rows - subpath->rows) * cpu_tuple_cost / 2;
+
+ return pathnode;
+}
+
+/*
* create_sort_path
* Creates a pathnode that represents performing an explicit sort.
*
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index 45205a830f..4e92ebdf41 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -16,9 +16,20 @@
#include "nodes/makefuncs.h"
#include "nodes/nodeFuncs.h"
+#include "optimizer/cost.h"
#include "optimizer/tlist.h"
+typedef struct
+{
+ List *nextlevel_tlist;
+ bool nextlevel_contains_srfs;
+} split_pathtarget_context;
+
+static bool split_pathtarget_walker(Node *node,
+ split_pathtarget_context *context);
+
+
/*****************************************************************************
* Target list creation and searching utilities
*****************************************************************************/
@@ -759,3 +770,191 @@ apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target)
i++;
}
}
+
+/*
+ * split_pathtarget_at_srfs
+ * Split given PathTarget into multiple levels to position SRFs safely
+ *
+ * The executor can only handle set-returning functions that appear at the
+ * top level of the targetlist of a Result plan node. If we have any SRFs
+ * that are not at top level, we need to split up the evaluation into multiple
+ * plan levels in which each level satisfies this constraint. This function
+ * creates appropriate PathTarget(s) for each level.
+ *
+ * As an example, consider the tlist expression
+ * x + srf1(srf2(y + z))
+ * This expression should appear as-is in the top PathTarget, but below that
+ * we must have a PathTarget containing
+ * x, srf1(srf2(y + z))
+ * and below that, another PathTarget containing
+ * x, srf2(y + z)
+ * and below that, another PathTarget containing
+ * x, y, z
+ * When these tlists are processed by setrefs.c, subexpressions that match
+ * output expressions of the next lower tlist will be replaced by Vars,
+ * so that what the executor gets are tlists looking like
+ * Var1 + Var2
+ * Var1, srf1(Var2)
+ * Var1, srf2(Var2 + Var3)
+ * x, y, z
+ * which satisfy the desired property.
+ *
+ * In some cases, a SRF has already been evaluated in some previous plan level
+ * and we shouldn't expand it again (that is, what we see in the target is
+ * already meant as a reference to a lower subexpression). So, don't expand
+ * any tlist expressions that appear in input_target, if that's not NULL.
+ * In principle we might need to consider matching subexpressions to
+ * input_target, but for now it's not necessary because only ORDER BY and
+ * GROUP BY expressions are at issue and those will look the same at both
+ * plan levels.
+ *
+ * The outputs of this function are two parallel lists, one a list of
+ * PathTargets and the other an integer list of bool flags indicating
+ * whether the corresponding PathTarget contains any top-level SRFs.
+ * The lists are given in the order they'd need to be evaluated in, with
+ * the "lowest" PathTarget first. So the last list entry is always the
+ * originally given PathTarget, and any entries before it indicate evaluation
+ * levels that must be inserted below it. The first list entry must not
+ * contain any SRFs, since it will typically be attached to a plan node
+ * that cannot evaluate SRFs.
+ *
+ * Note: using a list for the flags may seem like overkill, since there
+ * are only a few possible patterns for which levels contain SRFs.
+ * But this representation decouples callers from that knowledge.
+ */
+void
+split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs)
+{
+ /* Initialize output lists to empty; we prepend to them within loop */
+ *targets = *targets_contain_srfs = NIL;
+
+ /* Loop to consider each level of PathTarget we need */
+ for (;;)
+ {
+ bool target_contains_srfs = false;
+ split_pathtarget_context context;
+ ListCell *lc;
+
+ context.nextlevel_tlist = NIL;
+ context.nextlevel_contains_srfs = false;
+
+ /*
+ * Scan the PathTarget looking for SRFs. Top-level SRFs are handled
+ * in this loop, ones lower down are found by split_pathtarget_walker.
+ */
+ foreach(lc, target->exprs)
+ {
+ Node *node = (Node *) lfirst(lc);
+
+ /*
+ * A tlist item that is just a reference to an expression already
+ * computed in input_target need not be evaluated here, so just
+ * make sure it's included in the next PathTarget.
+ */
+ if (input_target && list_member(input_target->exprs, node))
+ {
+ context.nextlevel_tlist = lappend(context.nextlevel_tlist, node);
+ continue;
+ }
+
+ /* Else, we need to compute this expression. */
+ if (IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset)
+ {
+ /* Top-level SRF: it can be evaluated here */
+ target_contains_srfs = true;
+ /* Recursively examine SRF's inputs */
+ split_pathtarget_walker((Node *) ((FuncExpr *) node)->args,
+ &context);
+ }
+ else if (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset)
+ {
+ /* Same as above, but for set-returning operator */
+ target_contains_srfs = true;
+ split_pathtarget_walker((Node *) ((OpExpr *) node)->args,
+ &context);
+ }
+ else
+ {
+ /* Not a top-level SRF, so recursively examine expression */
+ split_pathtarget_walker(node, &context);
+ }
+ }
+
+ /*
+ * Prepend current target and associated flag to output lists.
+ */
+ *targets = lcons(target, *targets);
+ *targets_contain_srfs = lcons_int(target_contains_srfs,
+ *targets_contain_srfs);
+
+ /*
+ * Done if we found no SRFs anywhere in this target; the tentative
+ * tlist we built for the next level can be discarded.
+ */
+ if (!target_contains_srfs && !context.nextlevel_contains_srfs)
+ break;
+
+ /*
+ * Else build the next PathTarget down, and loop back to process it.
+ * Copy the subexpressions to make sure PathTargets don't share
+ * substructure (might be unnecessary, but be safe); and drop any
+ * duplicate entries in the sub-targetlist.
+ */
+ target = create_empty_pathtarget();
+ add_new_columns_to_pathtarget(target,
+ (List *) copyObject(context.nextlevel_tlist));
+ set_pathtarget_cost_width(root, target);
+ }
+}
+
+/* Recursively examine expressions for split_pathtarget_at_srfs */
+static bool
+split_pathtarget_walker(Node *node, split_pathtarget_context *context)
+{
+ if (node == NULL)
+ return false;
+ if (IsA(node, Var) ||
+ IsA(node, PlaceHolderVar) ||
+ IsA(node, Aggref) ||
+ IsA(node, GroupingFunc) ||
+ IsA(node, WindowFunc))
+ {
+ /*
+ * Pass these items down to the child plan level for evaluation.
+ *
+ * We assume that these constructs cannot contain any SRFs (if one
+ * does, there will be an executor failure from a misplaced SRF).
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+
+ /* Having done that, we need not examine their sub-structure */
+ return false;
+ }
+ else if ((IsA(node, FuncExpr) &&
+ ((FuncExpr *) node)->funcretset) ||
+ (IsA(node, OpExpr) &&
+ ((OpExpr *) node)->opretset))
+ {
+ /*
+ * Pass SRFs down to the child plan level for evaluation, and mark
+ * that it contains SRFs. (We are not at top level of our own tlist,
+ * else this would have been picked up by split_pathtarget_at_srfs.)
+ */
+ context->nextlevel_tlist = lappend(context->nextlevel_tlist, node);
+ context->nextlevel_contains_srfs = true;
+
+ /* Inputs to the SRF need not be considered here, so we're done */
+ return false;
+ }
+
+ /*
+ * Otherwise, the node is evaluatable within the current PathTarget, so
+ * recurse to examine its inputs.
+ */
+ return expression_tree_walker(node, split_pathtarget_walker,
+ (void *) context);
+}
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index e1d31c795a..de4092d679 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1293,6 +1293,7 @@ typedef struct ProjectionPath
Path path;
Path *subpath; /* path representing input source */
bool dummypp; /* true if no separate Result is needed */
+ bool srfpp; /* true if SRFs are being evaluated here */
} ProjectionPath;
/*
diff --git a/src/include/optimizer/clauses.h b/src/include/optimizer/clauses.h
index 6173ef8d75..cc0d7b0a26 100644
--- a/src/include/optimizer/clauses.h
+++ b/src/include/optimizer/clauses.h
@@ -54,7 +54,6 @@ extern bool contain_window_function(Node *clause);
extern WindowFuncLists *find_window_functions(Node *clause, Index maxWinRef);
extern double expression_returns_set_rows(Node *clause);
-extern double tlist_returns_set_rows(List *tlist);
extern bool contain_subplans(Node *clause);
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index d16f879fc1..c11c59df23 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -144,6 +144,10 @@ extern Path *apply_projection_to_path(PlannerInfo *root,
RelOptInfo *rel,
Path *path,
PathTarget *target);
+extern ProjectionPath *create_srf_projection_path(PlannerInfo *root,
+ RelOptInfo *rel,
+ Path *subpath,
+ PathTarget *target);
extern SortPath *create_sort_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
diff --git a/src/include/optimizer/tlist.h b/src/include/optimizer/tlist.h
index f80b31a673..976024a164 100644
--- a/src/include/optimizer/tlist.h
+++ b/src/include/optimizer/tlist.h
@@ -61,6 +61,9 @@ extern void add_column_to_pathtarget(PathTarget *target,
extern void add_new_column_to_pathtarget(PathTarget *target, Expr *expr);
extern void add_new_columns_to_pathtarget(PathTarget *target, List *exprs);
extern void apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target);
+extern void split_pathtarget_at_srfs(PlannerInfo *root,
+ PathTarget *target, PathTarget *input_target,
+ List **targets, List **targets_contain_srfs);
/* Convenience macro to get a PathTarget with valid cost/width fields */
#define create_pathtarget(root, tlist) \
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index fa1f5e7879..b71d81ee21 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -823,7 +823,8 @@ explain (costs off)
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
-> Result
-(7 rows)
+ -> Result
+(8 rows)
select max(unique2), generate_series(1,3) as g from tenk1 order by g desc;
max | g
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index 9c3eecfc3b..a7ded3ad05 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -208,13 +208,15 @@ select currval('testseq');
explain (verbose, costs off)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
- QUERY PLAN
-----------------------------------------------------------
+ QUERY PLAN
+-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
- -> Index Scan using tenk1_unique2 on public.tenk1
+ -> Result
Output: unique1, unique2, generate_series(1, 10)
-(4 rows)
+ -> Index Scan using tenk1_unique2 on public.tenk1
+ Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
+(6 rows)
select unique1, unique2, generate_series(1,10)
from tenk1 order by unique2 limit 7;
diff --git a/src/test/regress/expected/rangefuncs.out b/src/test/regress/expected/rangefuncs.out
index f06cfa4b21..9634fa16d2 100644
--- a/src/test/regress/expected/rangefuncs.out
+++ b/src/test/regress/expected/rangefuncs.out
@@ -1995,12 +1995,10 @@ SELECT *,
END)
FROM
(VALUES (1,''), (2,'0000000049404'), (3,'FROM 10000000876')) v(id, str);
- id | str | lower
-----+------------------+------------------
- 1 | |
- 2 | 0000000049404 | 49404
- 3 | FROM 10000000876 | from 10000000876
-(3 rows)
+ id | str | lower
+----+---------------+-------
+ 2 | 0000000049404 | 49404
+(1 row)
-- check whole-row-Var handling in nested lateral functions (bug #11703)
create function extractq2(t int8_tbl) returns int8 as $$
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index eda319d24b..3ed089aa46 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -807,24 +807,28 @@ select * from int4_tbl where
explain (verbose, costs off)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
- QUERY PLAN
-----------------------------------------------------------------
- Hash Semi Join
+ QUERY PLAN
+-------------------------------------------------------------------
+ Nested Loop Semi Join
Output: o.f1
- Hash Cond: (o.f1 = "ANY_subquery".f1)
+ Join Filter: (o.f1 = "ANY_subquery".f1)
-> Seq Scan on public.int4_tbl o
Output: o.f1
- -> Hash
+ -> Materialize
Output: "ANY_subquery".f1, "ANY_subquery".g
-> Subquery Scan on "ANY_subquery"
Output: "ANY_subquery".f1, "ANY_subquery".g
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
- -> HashAggregate
- Output: i.f1, (generate_series(1, 2) / 10)
- Group Key: i.f1
- -> Seq Scan on public.int4_tbl i
- Output: i.f1
-(15 rows)
+ -> Result
+ Output: i.f1, ((generate_series(1, 2)) / 10)
+ -> Result
+ Output: i.f1, generate_series(1, 2)
+ -> HashAggregate
+ Output: i.f1
+ Group Key: i.f1
+ -> Seq Scan on public.int4_tbl i
+ Output: i.f1
+(19 rows)
select * from int4_tbl o where (f1, f1) in
(select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index 7bb6d17fcb..f257537925 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -43,7 +43,16 @@ SELECT generate_series(1, generate_series(1, 3));
-- srf, with two SRF arguments
SELECT generate_series(generate_series(1,3), generate_series(2, 4));
-ERROR: functions and operators can take at most one set argument
+ generate_series
+-----------------
+ 1
+ 2
+ 2
+ 3
+ 3
+ 4
+(6 rows)
+
CREATE TABLE few(id int, dataa text, datab text);
INSERT INTO few VALUES(1, 'a', 'foo'),(2, 'a', 'bar'),(3, 'b', 'bar');
-- SRF output order of sorting is maintained, if SRF is not referenced
--
2.11.0.22.g8d7a455.dirty
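Patch 0002 below adds a SetResult executor node whose ExecProjectSRF routine evaluates all top-level targetlist SRFs in lockstep: every output row advances each SRF by one result, and an SRF that runs out before its siblings contributes NULL until all of them are exhausted (rather than the historical least-common-multiple repetition). A minimal Python sketch of that rule, with illustrative names not taken from the patch:

```python
def project_srfs(srfs):
    """Lockstep evaluation of several set-returning functions:
    each output row advances every SRF by one result; an SRF that is
    exhausted early contributes NULL (None) until all are exhausted."""
    iters = [iter(s) for s in srfs]
    while True:
        row = []
        any_live = False
        for it in iters:
            try:
                row.append(next(it))
                any_live = True
            except StopIteration:
                row.append(None)   # exhausted SRF pads with NULL
        if not any_live:
            return                 # all SRFs done: no further rows
        yield tuple(row)

# e.g. SELECT generate_series(1,2), generate_series(1,3)
rows = list(project_srfs([range(1, 3), range(1, 4)]))
# rows == [(1, 1), (2, 2), (None, 3)]
```

This mirrors the `continuing && *isdone == ExprEndResult` branch in ExecProjectSRF, where exhausted SRFs return NULL and the node stops once no SRF produced a value.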
Attachment: 0002-Implement-targetlist-set-returning-functions-in-a-ne.patch (text/x-patch; charset=us-ascii)
From 1dabd48d3551a4d927c4df162ef1c53ba6b9d2b5 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 16 Jan 2017 12:40:13 -0800
Subject: [PATCH 2/3] Implement targetlist set returning functions in a new
pipeline node.
---
src/backend/commands/explain.c | 5 +
src/backend/executor/Makefile | 4 +-
src/backend/executor/execAmi.c | 5 +
src/backend/executor/execProcnode.c | 14 ++
src/backend/executor/execQual.c | 112 +++++------
src/backend/executor/nodeSetResult.c | 316 +++++++++++++++++++++++++++++++
src/backend/nodes/copyfuncs.c | 19 ++
src/backend/nodes/outfuncs.c | 12 +-
src/backend/nodes/readfuncs.c | 16 ++
src/backend/optimizer/path/allpaths.c | 3 +
src/backend/optimizer/plan/createplan.c | 93 ++++++---
src/backend/optimizer/plan/planner.c | 4 +-
src/backend/optimizer/plan/setrefs.c | 21 ++
src/backend/optimizer/plan/subselect.c | 1 +
src/backend/optimizer/util/pathnode.c | 17 +-
src/backend/optimizer/util/tlist.c | 2 +-
src/include/executor/executor.h | 4 +
src/include/executor/nodeSetResult.h | 24 +++
src/include/nodes/execnodes.h | 15 ++
src/include/nodes/nodes.h | 3 +
src/include/nodes/plannodes.h | 7 +
src/include/nodes/relation.h | 11 +-
src/include/optimizer/pathnode.h | 2 +-
src/test/regress/expected/aggregates.out | 2 +-
src/test/regress/expected/limit.out | 8 +-
src/test/regress/expected/portals.out | 8 +-
src/test/regress/expected/subselect.out | 13 +-
src/test/regress/expected/tsrf.out | 8 +-
src/test/regress/expected/union.out | 2 +-
29 files changed, 634 insertions(+), 117 deletions(-)
create mode 100644 src/backend/executor/nodeSetResult.c
create mode 100644 src/include/executor/nodeSetResult.h
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index ee7046c47b..a1a42f747d 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -852,6 +852,11 @@ ExplainNode(PlanState *planstate, List *ancestors,
case T_Result:
pname = sname = "Result";
break;
+
+ case T_SetResult:
+ pname = sname = "SetResult";
+ break;
+
case T_ModifyTable:
sname = "ModifyTable";
switch (((ModifyTable *) plan)->operation)
diff --git a/src/backend/executor/Makefile b/src/backend/executor/Makefile
index 51edd4c5e7..15587435d7 100644
--- a/src/backend/executor/Makefile
+++ b/src/backend/executor/Makefile
@@ -22,8 +22,8 @@ OBJS = execAmi.o execCurrent.o execGrouping.o execIndexing.o execJunk.o \
nodeLimit.o nodeLockRows.o \
nodeMaterial.o nodeMergeAppend.o nodeMergejoin.o nodeModifyTable.o \
nodeNestloop.o nodeFunctionscan.o nodeRecursiveunion.o nodeResult.o \
- nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSort.o nodeUnique.o \
- nodeValuesscan.o nodeCtescan.o nodeWorktablescan.o \
+ nodeSamplescan.o nodeSeqscan.o nodeSetOp.o nodeSetResult.o nodeSort.o \
+ nodeUnique.o nodeValuesscan.o nodeCtescan.o nodeWorktablescan.o \
nodeGroup.o nodeSubplan.o nodeSubqueryscan.o nodeTidscan.o \
nodeForeignscan.o nodeWindowAgg.o tstoreReceiver.o tqueue.o spi.o
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index 3ea36979b3..c9c222f446 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -44,6 +44,7 @@
#include "executor/nodeSamplescan.h"
#include "executor/nodeSeqscan.h"
#include "executor/nodeSetOp.h"
+#include "executor/nodeSetResult.h"
#include "executor/nodeSort.h"
#include "executor/nodeSubplan.h"
#include "executor/nodeSubqueryscan.h"
@@ -130,6 +131,10 @@ ExecReScan(PlanState *node)
ExecReScanResult((ResultState *) node);
break;
+ case T_SetResultState:
+ ExecReScanSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
ExecReScanModifyTable((ModifyTableState *) node);
break;
diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index b8edd36470..f3cc706f13 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -106,6 +106,7 @@
#include "executor/nodeSamplescan.h"
#include "executor/nodeSeqscan.h"
#include "executor/nodeSetOp.h"
+#include "executor/nodeSetResult.h"
#include "executor/nodeSort.h"
#include "executor/nodeSubplan.h"
#include "executor/nodeSubqueryscan.h"
@@ -155,6 +156,11 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
estate, eflags);
break;
+ case T_SetResult:
+ result = (PlanState *) ExecInitSetResult((SetResult *) node,
+ estate, eflags);
+ break;
+
case T_ModifyTable:
result = (PlanState *) ExecInitModifyTable((ModifyTable *) node,
estate, eflags);
@@ -392,6 +398,10 @@ ExecProcNode(PlanState *node)
result = ExecResult((ResultState *) node);
break;
+ case T_SetResultState:
+ result = ExecSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
result = ExecModifyTable((ModifyTableState *) node);
break;
@@ -634,6 +644,10 @@ ExecEndNode(PlanState *node)
ExecEndResult((ResultState *) node);
break;
+ case T_SetResultState:
+ ExecEndSetResult((SetResultState *) node);
+ break;
+
case T_ModifyTableState:
ExecEndModifyTable((ModifyTableState *) node);
break;
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index bf007b7efd..ad673ed8b7 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -29,9 +29,9 @@
* instead of doing needless copying. -cim 5/31/91
*
* During expression evaluation, we check_stack_depth only in
- * ExecMakeFunctionResult (and substitute routines) rather than at every
- * single node. This is a compromise that trades off precision of the
- * stack limit setting to gain speed.
+ * ExecMakeFunctionResultSet/ExecMakeFunctionResultNoSets rather than at
+ * every single node. This is a compromise that trades off precision of
+ * the stack limit setting to gain speed.
*/
#include "postgres.h"
@@ -104,10 +104,6 @@ static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
Tuplestorestate *resultStore,
TupleDesc resultDesc);
static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
-static Datum ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
@@ -1549,7 +1545,7 @@ ExecEvalFuncArgs(FunctionCallInfo fcinfo,
/*
* ExecPrepareTuplestoreResult
*
- * Subroutine for ExecMakeFunctionResult: prepare to extract rows from a
+ * Subroutine for ExecMakeFunctionResultSet: prepare to extract rows from a
* tuplestore function result. We must set up a funcResultSlot (unless
* already done in a previous call cycle) and verify that the function
* returned the expected tuple descriptor.
@@ -1673,19 +1669,17 @@ tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc)
}
/*
- * ExecMakeFunctionResult
+ * ExecMakeFunctionResultSet
*
- * Evaluate the arguments to a function and then the function itself.
- * init_fcache is presumed already run on the FuncExprState.
- *
- * This function handles the most general case, wherein the function or
- * one of its arguments can return a set.
+ * Evaluate the arguments to a set returning function and then call the
+ * function itself. The arguments themselves may not contain set returning
+ * functions (the planner is supposed to have separated evaluation for those).
*/
-static Datum
-ExecMakeFunctionResult(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+Datum
+ExecMakeFunctionResultSet(FuncExprState *fcache,
+ ExprContext *econtext,
+ bool *isNull,
+ ExprDoneCond *isDone)
{
List *arguments;
Datum result;
@@ -1702,6 +1696,32 @@ restart:
check_stack_depth();
/*
+ * Initialize function cache if first time through. Unfortunately the
+ * parent can be either a FuncExpr or an OpExpr. This is a bit ugly.
+ */
+ if (fcache->func.fn_oid == InvalidOid)
+ {
+ if (IsA(fcache->xprstate.expr, FuncExpr))
+ {
+ FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
+
+ init_fcache(func->funcid, func->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
+ }
+ else if (IsA(fcache->xprstate.expr, OpExpr))
+ {
+ OpExpr *op = (OpExpr *) fcache->xprstate.expr;
+
+ init_fcache(op->opfuncid, op->inputcollid, fcache,
+ econtext->ecxt_per_query_memory, true);
+ }
+ else
+ {
+ elog(ERROR, "unexpected expression node type");
+ }
+ }
+
+ /*
* If a previous call of the function returned a set result in the form of
* a tuplestore, continue reading rows from the tuplestore until it's
* empty.
@@ -2120,7 +2140,7 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
ExprDoneCond argDone;
/*
- * This path is similar to ExecMakeFunctionResult.
+ * This path is similar to ExecMakeFunctionResultSet.
*/
direct_function_call = true;
@@ -2423,24 +2443,16 @@ ExecEvalFunc(FuncExprState *fcache,
/* Initialize function lookup info */
init_fcache(func->funcid, func->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) func->args))
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
- }
+ if (fcache->func.fn_retset)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued function called in context that cannot accept a set")));
+
+ /* Change the evalfunc pointer, to skip the above initialization. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
}
/* ----------------------------------------------------------------
@@ -2458,24 +2470,16 @@ ExecEvalOper(FuncExprState *fcache,
/* Initialize function lookup info */
init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
- /*
- * We need to invoke ExecMakeFunctionResult if either the function itself
- * or any of its input expressions can return a set. Otherwise, invoke
- * ExecMakeFunctionResultNoSets. In either case, change the evalfunc
- * pointer to go directly there on subsequent uses.
- */
- if (fcache->func.fn_retset || expression_returns_set((Node *) op->args))
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResult;
- return ExecMakeFunctionResult(fcache, econtext, isNull, isDone);
- }
- else
- {
- fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
- }
+ if (fcache->func.fn_retset)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("set-valued operator called in context that cannot accept a set")));
+
+ /* Change the evalfunc pointer, to skip the above initialization. */
+ fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
}
/* ----------------------------------------------------------------
diff --git a/src/backend/executor/nodeSetResult.c b/src/backend/executor/nodeSetResult.c
new file mode 100644
index 0000000000..6d9d96dca9
--- /dev/null
+++ b/src/backend/executor/nodeSetResult.c
@@ -0,0 +1,316 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeSetResult.c
+ * support for evaluating targetlists containing set returning functions
+ *
+ * DESCRIPTION
+ *
+ * SetResult nodes are inserted by the planner to evaluate set returning
+ * functions in the targetlist. It's guaranteed that all set returning
+ * functions appear directly at the top level of the targetlist, i.e. they
+ * cannot be buried inside more complex expressions. If that would
+ * otherwise be the case, the planner adds additional SetResult nodes.
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/backend/executor/nodeSetResult.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "executor/executor.h"
+#include "executor/nodeSetResult.h"
+#include "utils/memutils.h"
+
+
+static TupleTableSlot *
+ExecProjectSRF(SetResultState *node, bool continuing);
+
+
+/* ----------------------------------------------------------------
+ * ExecSetResult(node)
+ *
+ * Return tuples after evaluating the targetlist (which contains set
+ * returning functions).
+ * ----------------------------------------------------------------
+ */
+TupleTableSlot *
+ExecSetResult(SetResultState *node)
+{
+ TupleTableSlot *outerTupleSlot;
+ TupleTableSlot *resultSlot;
+ PlanState *outerPlan;
+ ExprContext *econtext;
+
+ econtext = node->ps.ps_ExprContext;
+
+ /*
+ * Check to see if we're still projecting out tuples from a previous scan
+ * tuple (because there is a function-returning-set in the projection
+ * expressions). If so, try to project another one.
+ */
+ if (node->pending_srf_tuples)
+ {
+ resultSlot = ExecProjectSRF(node, true);
+
+ if (resultSlot != NULL)
+ return resultSlot;
+ }
+
+ /*
+ * Reset per-tuple memory context to free any expression evaluation
+ * storage allocated in the previous tuple cycle. Note this can't happen
+ * until we're done projecting out tuples from a scan tuple.
+ */
+ ResetExprContext(econtext);
+
+ /*
+ * If input_done is true, it means that we were asked to return a
+ * constant tuple and we already did so the last time ExecSetResult()
+ * was called; in that case we are through.
+ */
+ while (!node->input_done)
+ {
+ outerPlan = outerPlanState(node);
+
+ if (outerPlan != NULL)
+ {
+ /*
+ * Retrieve tuples from the outer plan until there are no more.
+ */
+ outerTupleSlot = ExecProcNode(outerPlan);
+
+ if (TupIsNull(outerTupleSlot))
+ return NULL;
+
+ /*
+ * Prepare to compute projection expressions, which will expect to
+ * access the input tuples as varno OUTER.
+ */
+ econtext->ecxt_outertuple = outerTupleSlot;
+ }
+ else
+ {
+ /*
+ * If we don't have an outer plan, then we are just generating the
+ * results from a constant target list. Do it only once.
+ */
+ node->input_done = true;
+ }
+
+ resultSlot = ExecProjectSRF(node, false);
+
+ /*
+ * Return the tuple unless the projection produced no rows (due to an
+ * empty set), in which case we must loop back to see if there are
+ * more outerPlan tuples.
+ */
+ if (resultSlot)
+ return resultSlot;
+ }
+
+ return NULL;
+}
+
+/* ----------------------------------------------------------------
+ * ExecProjectSRF
+ *
+ * Project a targetlist containing one or more set returning functions.
+ *
+ * 'continuing' indicates whether to continue projecting rows for the
+ * same input tuple, or whether a new input tuple is being projected.
+ *
+ * Returns NULL if no output tuple has been produced.
+ *
+ * ----------------------------------------------------------------
+ */
+static TupleTableSlot *
+ExecProjectSRF(SetResultState *node, bool continuing)
+{
+ TupleTableSlot *resultSlot = node->ps.ps_ResultTupleSlot;
+ ExprContext *econtext = node->ps.ps_ExprContext;
+ ListCell *lc;
+ int argno;
+ bool hasresult;
+ bool hassrf PG_USED_FOR_ASSERTS_ONLY = false;
+
+ ExecClearTuple(resultSlot);
+
+ /*
+ * Assume no further tuples are produced unless an ExprMultipleResult is
+ * encountered from a set returning function.
+ */
+ node->pending_srf_tuples = false;
+
+ hasresult = false;
+ argno = 0;
+ foreach(lc, node->ps.targetlist)
+ {
+ GenericExprState *gstate = (GenericExprState *) lfirst(lc);
+ ExprDoneCond *isdone = &node->elemdone[argno];
+ Datum *result = &resultSlot->tts_values[argno];
+ bool *isnull = &resultSlot->tts_isnull[argno];
+
+ if (continuing && *isdone == ExprEndResult)
+ {
+ /*
+ * If we're continuing to project output rows from a source tuple,
+ * return NULLs once the SRF has been exhausted.
+ */
+ *result = 0;
+ *isnull = true;
+ hassrf = true;
+ }
+ else if (IsA(gstate->arg, FuncExprState) &&
+ ((FuncExpr *) gstate->arg->expr)->funcretset)
+ {
+ /*
+ * Evaluate SRF - possibly continuing previously started output.
+ */
+ *result = ExecMakeFunctionResultSet((FuncExprState *) gstate->arg,
+ econtext, isnull, isdone);
+
+ if (node->elemdone[argno] != ExprEndResult)
+ hasresult = true;
+ if (node->elemdone[argno] == ExprMultipleResult)
+ node->pending_srf_tuples = true;
+ hassrf = true;
+ }
+ else
+ {
+ *result = ExecEvalExpr(gstate->arg, econtext, isnull, NULL);
+ *isdone = ExprSingleResult;
+ }
+
+ argno++;
+ }
+
+ /* SetResult should not be used if there are no SRFs */
+ Assert(hassrf);
+
+ /*
+ * If all the SRFs returned EndResult, we consider that as no result being
+ * produced.
+ */
+ if (hasresult)
+ {
+ ExecStoreVirtualTuple(resultSlot);
+ return resultSlot;
+ }
+
+ return NULL;
+}
+
+/* ----------------------------------------------------------------
+ * ExecInitSetResult
+ *
+ * Creates the run-time state information for the SetResult node
+ * produced by the planner and initializes outer relations
+ * (child nodes).
+ * ----------------------------------------------------------------
+ */
+SetResultState *
+ExecInitSetResult(SetResult *node, EState *estate, int eflags)
+{
+ SetResultState *state;
+
+ /* check for unsupported flags */
+ Assert(!(eflags & (EXEC_FLAG_MARK | EXEC_FLAG_BACKWARD)) ||
+ outerPlan(node) != NULL);
+
+ /*
+ * create state structure
+ */
+ state = makeNode(SetResultState);
+ state->ps.plan = (Plan *) node;
+ state->ps.state = estate;
+
+ state->input_done = false;
+ state->pending_srf_tuples = false;
+
+ /*
+ * Miscellaneous initialization
+ *
+ * create expression context for node
+ */
+ ExecAssignExprContext(estate, &state->ps);
+
+ /*
+ * tuple table initialization
+ */
+ ExecInitResultTupleSlot(estate, &state->ps);
+
+ /*
+ * initialize child expressions
+ */
+ state->ps.targetlist = (List *)
+ ExecInitExpr((Expr *) node->plan.targetlist,
+ (PlanState *) state);
+ state->ps.qual = (List *)
+ ExecInitExpr((Expr *) node->plan.qual,
+ (PlanState *) state);
+
+ /*
+ * initialize child nodes
+ */
+ outerPlanState(state) = ExecInitNode(outerPlan(node), estate, eflags);
+
+ /*
+ * we don't use inner plan
+ */
+ Assert(innerPlan(node) == NULL);
+
+ /*
+ * initialize tuple type and projection info
+ */
+ ExecAssignResultTypeFromTL(&state->ps);
+
+ state->nelems = list_length(node->plan.targetlist);
+ state->elemdone = palloc(sizeof(ExprDoneCond) * state->nelems);
+
+ return state;
+}
+
+/* ----------------------------------------------------------------
+ * ExecEndSetResult
+ *
+ * frees up storage allocated through C routines
+ * ----------------------------------------------------------------
+ */
+void
+ExecEndSetResult(SetResultState *node)
+{
+ /*
+ * Free the exprcontext
+ */
+ ExecFreeExprContext(&node->ps);
+
+ /*
+ * clean out the tuple table
+ */
+ ExecClearTuple(node->ps.ps_ResultTupleSlot);
+
+ /*
+ * shut down subplans
+ */
+ ExecEndNode(outerPlanState(node));
+}
+
+void
+ExecReScanSetResult(SetResultState *node)
+{
+ node->input_done = false;
+ node->pending_srf_tuples = false;
+
+ /*
+ * If chgParam of subnode is not null then plan will be re-scanned by
+ * first ExecProcNode.
+ */
+ if (node->ps.lefttree &&
+ node->ps.lefttree->chgParam == NULL)
+ ExecReScan(node->ps.lefttree);
+}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 7107bbf164..37fbb35455 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -166,6 +166,22 @@ _copyResult(const Result *from)
}
/*
+ * _copySetResult
+ */
+static SetResult *
+_copySetResult(const SetResult *from)
+{
+ SetResult *newnode = makeNode(SetResult);
+
+ /*
+ * copy node superclass fields
+ */
+ CopyPlanFields((const Plan *) from, (Plan *) newnode);
+
+ return newnode;
+}
+
+/*
* _copyModifyTable
*/
static ModifyTable *
@@ -4413,6 +4429,9 @@ copyObject(const void *from)
case T_Result:
retval = _copyResult(from);
break;
+ case T_SetResult:
+ retval = _copySetResult(from);
+ break;
case T_ModifyTable:
retval = _copyModifyTable(from);
break;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 73fdc9706d..6a1b9a4536 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -327,6 +327,14 @@ _outResult(StringInfo str, const Result *node)
}
static void
+_outSetResult(StringInfo str, const SetResult *node)
+{
+ WRITE_NODE_TYPE("SETRESULT");
+
+ _outPlanInfo(str, (const Plan *) node);
+}
+
+static void
_outModifyTable(StringInfo str, const ModifyTable *node)
{
WRITE_NODE_TYPE("MODIFYTABLE");
@@ -1805,7 +1813,6 @@ _outProjectionPath(StringInfo str, const ProjectionPath *node)
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
- WRITE_BOOL_FIELD(srfpp);
}
static void
@@ -3362,6 +3369,9 @@ outNode(StringInfo str, const void *obj)
case T_Result:
_outResult(str, obj);
break;
+ case T_SetResult:
+ _outSetResult(str, obj);
+ break;
case T_ModifyTable:
_outModifyTable(str, obj);
break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index e02dd94f05..f47b841947 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -1483,6 +1483,20 @@ _readResult(void)
READ_DONE();
}
+
+/*
+ * _readSetResult
+ */
+static SetResult *
+_readSetResult(void)
+{
+ READ_LOCALS_NO_FIELDS(SetResult);
+
+ ReadCommonPlan(&local_node->plan);
+
+ READ_DONE();
+}
+
/*
* _readModifyTable
*/
@@ -2450,6 +2464,8 @@ parseNodeString(void)
return_value = _readPlan();
else if (MATCH("RESULT", 6))
return_value = _readResult();
+ else if (MATCH("SETRESULT", 9))
+ return_value = _readSetResult();
else if (MATCH("MODIFYTABLE", 11))
return_value = _readModifyTable();
else if (MATCH("APPEND", 6))
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 46d7d064d4..1708e8062c 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2976,6 +2976,9 @@ print_path(PlannerInfo *root, Path *path, int indent)
case T_ResultPath:
ptype = "Result";
break;
+ case T_SetResultPath:
+ ptype = "SetResult";
+ break;
case T_MaterialPath:
ptype = "Material";
subpath = ((MaterialPath *) path)->subpath;
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 875de739a8..78f9d1b4c3 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -81,6 +81,7 @@ static Plan *create_join_plan(PlannerInfo *root, JoinPath *best_path);
static Plan *create_append_plan(PlannerInfo *root, AppendPath *best_path);
static Plan *create_merge_append_plan(PlannerInfo *root, MergeAppendPath *best_path);
static Result *create_result_plan(PlannerInfo *root, ResultPath *best_path);
+static SetResult *create_set_result_plan(PlannerInfo *root, SetProjectionPath *best_path);
static Material *create_material_plan(PlannerInfo *root, MaterialPath *best_path,
int flags);
static Plan *create_unique_plan(PlannerInfo *root, UniquePath *best_path,
@@ -264,6 +265,7 @@ static SetOp *make_setop(SetOpCmd cmd, SetOpStrategy strategy, Plan *lefttree,
long numGroups);
static LockRows *make_lockrows(Plan *lefttree, List *rowMarks, int epqParam);
static Result *make_result(List *tlist, Node *resconstantqual, Plan *subplan);
+static SetResult *make_set_result(List *tlist, Plan *subplan);
static ModifyTable *make_modifytable(PlannerInfo *root,
CmdType operation, bool canSetTag,
Index nominalRelation,
@@ -392,6 +394,10 @@ create_plan_recurse(PlannerInfo *root, Path *best_path, int flags)
(ResultPath *) best_path);
}
break;
+ case T_SetResult:
+ plan = (Plan *) create_set_result_plan(root,
+ (SetProjectionPath *) best_path);
+ break;
case T_Material:
plan = (Plan *) create_material_plan(root,
(MaterialPath *) best_path,
@@ -1142,6 +1148,44 @@ create_result_plan(PlannerInfo *root, ResultPath *best_path)
}
/*
+ * create_set_result_plan
+ * Create a SetResult plan for 'best_path'.
+ *
+ * Returns a Plan node.
+ */
+static SetResult *
+create_set_result_plan(PlannerInfo *root, SetProjectionPath *best_path)
+{
+ SetResult *plan;
+ Plan *subplan;
+ List *tlist;
+
+ /*
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a SetResult with no input. This avoids
+ * an extra Result plan node when doing "SELECT srf()". Depending on what
+ * we decide about the desired plan structure for SRF-expanding nodes,
+ * this optimization might have to go away, and in any case it'll probably
+ * look a good bit different.
+ */
+ if (IsA(best_path->subpath, ResultPath) &&
+ ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
+ ((ResultPath *) best_path->subpath)->quals == NIL)
+ subplan = NULL;
+ else
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
+
+ tlist = build_path_tlist(root, &best_path->path);
+
+ plan = make_set_result(tlist, subplan);
+
+ copy_generic_path_info(&plan->plan, (Path *) best_path);
+
+ return plan;
+}
+
+/*
* create_material_plan
* Create a Material plan for 'best_path' and (recursively) plans
* for its subpaths.
@@ -1421,21 +1465,8 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
Plan *subplan;
List *tlist;
- /*
- * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
- * don't bother with it, just make a Result with no input. This avoids an
- * extra Result plan node when doing "SELECT srf()". Depending on what we
- * decide about the desired plan structure for SRF-expanding nodes, this
- * optimization might have to go away, and in any case it'll probably look
- * a good bit different.
- */
- if (IsA(best_path->subpath, ResultPath) &&
- ((ResultPath *) best_path->subpath)->path.pathtarget->exprs == NIL &&
- ((ResultPath *) best_path->subpath)->quals == NIL)
- subplan = NULL;
- else
- /* Since we intend to project, we don't need to constrain child tlist */
- subplan = create_plan_recurse(root, best_path->subpath, 0);
+ /* Since we intend to project, we don't need to constrain child tlist */
+ subplan = create_plan_recurse(root, best_path->subpath, 0);
tlist = build_path_tlist(root, &best_path->path);
@@ -1454,9 +1485,8 @@ create_projection_plan(PlannerInfo *root, ProjectionPath *best_path)
* creation, but that would add expense to creating Paths we might end up
* not using.)
*/
- if (!best_path->srfpp &&
- (is_projection_capable_path(best_path->subpath) ||
- tlist_same_exprs(tlist, subplan->targetlist)))
+ if (is_projection_capable_path(best_path->subpath) ||
+ tlist_same_exprs(tlist, subplan->targetlist))
{
/* Don't need a separate Result, just assign tlist to subplan */
plan = subplan;
@@ -6041,6 +6071,25 @@ make_result(List *tlist,
}
/*
+ * make_set_result
+ * Build a SetResult plan node
+ */
+static SetResult *
+make_set_result(List *tlist,
+ Plan *subplan)
+{
+ SetResult *node = makeNode(SetResult);
+ Plan *plan = &node->plan;
+
+ plan->targetlist = tlist;
+ plan->qual = NIL;
+ plan->lefttree = subplan;
+ plan->righttree = NULL;
+
+ return node;
+}
+
+/*
* make_modifytable
* Build a ModifyTable plan node
*/
@@ -6206,17 +6255,15 @@ is_projection_capable_path(Path *path)
* projection to its dummy path.
*/
return IS_DUMMY_PATH(path);
- case T_Result:
+ case T_SetResult:
/*
* If the path is doing SRF evaluation, claim it can't project, so
* we don't jam a new tlist into it and thereby break the property
* that the SRFs appear at top level.
*/
- if (IsA(path, ProjectionPath) &&
- ((ProjectionPath *) path)->srfpp)
- return false;
- break;
+ return false;
+
default:
break;
}
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 70870bbbe0..a208f511d9 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -5245,7 +5245,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
/* If this level doesn't contain SRFs, do regular projection */
if (contains_srfs)
- newpath = (Path *) create_srf_projection_path(root,
+ newpath = (Path *) create_set_projection_path(root,
rel,
newpath,
thistarget);
@@ -5278,7 +5278,7 @@ adjust_paths_for_srfs(PlannerInfo *root, RelOptInfo *rel,
/* If this level doesn't contain SRFs, do regular projection */
if (contains_srfs)
- newpath = (Path *) create_srf_projection_path(root,
+ newpath = (Path *) create_set_projection_path(root,
rel,
newpath,
thistarget);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 413a0d9da2..e77312d6af 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -733,6 +733,27 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)
fix_scan_expr(root, splan->resconstantqual, rtoffset);
}
break;
+
+ case T_SetResult:
+ {
+ SetResult *splan = (SetResult *) plan;
+
+ /*
+ * SetResult may or may not have a subplan; if not, it's more
+ * like a scan node than an upper node.
+ */
+ if (splan->plan.lefttree != NULL)
+ set_upper_references(root, plan, rtoffset);
+ else
+ {
+ splan->plan.targetlist =
+ fix_scan_list(root, splan->plan.targetlist, rtoffset);
+ splan->plan.qual =
+ fix_scan_list(root, splan->plan.qual, rtoffset);
+ }
+ }
+ break;
+
case T_ModifyTable:
{
ModifyTable *splan = (ModifyTable *) plan;
diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index aad0b684ed..ad8b75b4d9 100644
--- a/src/backend/optimizer/plan/subselect.c
+++ b/src/backend/optimizer/plan/subselect.c
@@ -2680,6 +2680,7 @@ finalize_plan(PlannerInfo *root, Plan *plan, Bitmapset *valid_params,
&context);
break;
+ case T_SetResult:
case T_Hash:
case T_Material:
case T_Sort:
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index aa635fd057..2e30af20af 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -2227,9 +2227,6 @@ create_projection_path(PlannerInfo *root,
(cpu_tuple_cost + target->cost.per_tuple) * subpath->rows;
}
- /* Assume no SRFs around */
- pathnode->srfpp = false;
-
return pathnode;
}
@@ -2333,17 +2330,17 @@ apply_projection_to_path(PlannerInfo *root,
* 'subpath' is the path representing the source of data
* 'target' is the PathTarget to be computed
*/
-ProjectionPath *
-create_srf_projection_path(PlannerInfo *root,
+SetProjectionPath *
+create_set_projection_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
PathTarget *target)
{
- ProjectionPath *pathnode = makeNode(ProjectionPath);
+ SetProjectionPath *pathnode = makeNode(SetProjectionPath);
double tlist_rows;
ListCell *lc;
- pathnode->path.pathtype = T_Result;
+ pathnode->path.pathtype = T_SetResult;
pathnode->path.parent = rel;
pathnode->path.pathtarget = target;
/* For now, assume we are above any joins, so no parameterization */
@@ -2353,15 +2350,11 @@ create_srf_projection_path(PlannerInfo *root,
subpath->parallel_safe &&
is_parallel_safe(root, (Node *) target->exprs);
pathnode->path.parallel_workers = subpath->parallel_workers;
- /* Projection does not change the sort order */
+ /* Projection does not change the sort order XXX? */
pathnode->path.pathkeys = subpath->pathkeys;
pathnode->subpath = subpath;
- /* Always need the Result node */
- pathnode->dummypp = false;
- pathnode->srfpp = true;
-
/*
* Estimate number of rows produced by SRFs for each row of input; if
* there's more than one in this node, use the maximum.
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index 4e92ebdf41..8290769468 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -776,7 +776,7 @@ apply_pathtarget_labeling_to_tlist(List *tlist, PathTarget *target)
* Split given PathTarget into multiple levels to position SRFs safely
*
* The executor can only handle set-returning functions that appear at the
- * top level of the targetlist of a Result plan node. If we have any SRFs
+ * top level of the targetlist of a SetResult plan node. If we have any SRFs
* that are not at top level, we need to split up the evaluation into multiple
* plan levels in which each level satisfies this constraint. This function
* creates appropriate PathTarget(s) for each level.
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index b9c7f72903..4e48592798 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -253,6 +253,10 @@ extern Tuplestorestate *ExecMakeTableFunctionResult(ExprState *funcexpr,
MemoryContext argContext,
TupleDesc expectedDesc,
bool randomAccess);
+extern Datum ExecMakeFunctionResultSet(FuncExprState *fcache,
+ ExprContext *econtext,
+ bool *isNull,
+ ExprDoneCond *isDone);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
bool *isNull, ExprDoneCond *isDone);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
diff --git a/src/include/executor/nodeSetResult.h b/src/include/executor/nodeSetResult.h
new file mode 100644
index 0000000000..f51cf32956
--- /dev/null
+++ b/src/include/executor/nodeSetResult.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * nodeSetResult.h
+ *
+ *
+ *
+ * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/executor/nodeSetResult.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef NODESETRESULT_H
+#define NODESETRESULT_H
+
+#include "nodes/execnodes.h"
+
+extern SetResultState *ExecInitSetResult(SetResult *node, EState *estate, int eflags);
+extern TupleTableSlot *ExecSetResult(SetResultState *node);
+extern void ExecEndSetResult(SetResultState *node);
+extern void ExecReScanSetResult(SetResultState *node);
+
+#endif /* NODESETRESULT_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ce13bf7635..69de3ebbd9 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1129,6 +1129,21 @@ typedef struct ResultState
bool rs_checkqual; /* do we need to check the qual? */
} ResultState;
+
+/* ----------------
+ * SetResultState information
+ * ----------------
+ */
+typedef struct SetResultState
+{
+ PlanState ps; /* its first field is NodeTag */
+ int nelems;
+ ExprDoneCond *elemdone;
+ bool input_done; /* done reading source tuple? */
+ bool pending_srf_tuples; /* evaluating srfs in tlist? */
+} SetResultState;
+
+
/* ----------------
* ModifyTableState information
* ----------------
diff --git a/src/include/nodes/nodes.h b/src/include/nodes/nodes.h
index 4c4319bcab..be397fb138 100644
--- a/src/include/nodes/nodes.h
+++ b/src/include/nodes/nodes.h
@@ -43,6 +43,7 @@ typedef enum NodeTag
*/
T_Plan,
T_Result,
+ T_SetResult,
T_ModifyTable,
T_Append,
T_MergeAppend,
@@ -91,6 +92,7 @@ typedef enum NodeTag
*/
T_PlanState,
T_ResultState,
+ T_SetResultState,
T_ModifyTableState,
T_AppendState,
T_MergeAppendState,
@@ -245,6 +247,7 @@ typedef enum NodeTag
T_UniquePath,
T_GatherPath,
T_ProjectionPath,
+ T_SetProjectionPath,
T_SortPath,
T_GroupPath,
T_UpperUniquePath,
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 6810f8c099..3405f018fc 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -176,6 +176,13 @@ typedef struct Result
Node *resconstantqual;
} Result;
+
+typedef struct SetResult
+{
+ Plan plan;
+} SetResult;
+
+
/* ----------------
* ModifyTable node -
* Apply rows produced by subplan(s) to result table(s),
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index de4092d679..50fa79926a 100644
--- a/src/include/nodes/relation.h
+++ b/src/include/nodes/relation.h
@@ -1293,10 +1293,19 @@ typedef struct ProjectionPath
Path path;
Path *subpath; /* path representing input source */
bool dummypp; /* true if no separate Result is needed */
- bool srfpp; /* true if SRFs are being evaluated here */
} ProjectionPath;
/*
+ * SetProjectionPath represents an evaluation of a targetlist set-returning
+ * function.
+ */
+typedef struct SetProjectionPath
+{
+ Path path;
+ Path *subpath; /* path representing input source */
+} SetProjectionPath;
+
+/*
* SortPath represents an explicit sort step
*
* The sort keys are, by definition, the same as path.pathkeys.
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index c11c59df23..9cbd87c0a2 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -144,7 +144,7 @@ extern Path *apply_projection_to_path(PlannerInfo *root,
RelOptInfo *rel,
Path *path,
PathTarget *target);
-extern ProjectionPath *create_srf_projection_path(PlannerInfo *root,
+extern SetProjectionPath *create_set_projection_path(PlannerInfo *root,
RelOptInfo *rel,
Path *subpath,
PathTarget *target);
diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out
index b71d81ee21..c7a87a25a9 100644
--- a/src/test/regress/expected/aggregates.out
+++ b/src/test/regress/expected/aggregates.out
@@ -822,7 +822,7 @@ explain (costs off)
-> Limit
-> Index Only Scan Backward using tenk1_unique2 on tenk1
Index Cond: (unique2 IS NOT NULL)
- -> Result
+ -> SetResult
-> Result
(8 rows)
diff --git a/src/test/regress/expected/limit.out b/src/test/regress/expected/limit.out
index a7ded3ad05..f3124394a3 100644
--- a/src/test/regress/expected/limit.out
+++ b/src/test/regress/expected/limit.out
@@ -212,7 +212,7 @@ select unique1, unique2, generate_series(1,10)
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10))
- -> Result
+ -> SetResult
Output: unique1, unique2, generate_series(1, 10)
-> Index Scan using tenk1_unique2 on public.tenk1
Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
@@ -238,7 +238,7 @@ select unique1, unique2, generate_series(1,10)
--------------------------------------------------------------------
Limit
Output: unique1, unique2, (generate_series(1, 10)), tenthous
- -> Result
+ -> SetResult
Output: unique1, unique2, generate_series(1, 10), tenthous
-> Sort
Output: unique1, unique2, tenthous
@@ -265,7 +265,7 @@ explain (verbose, costs off)
select generate_series(0,2) as s1, generate_series((random()*.1)::int,2) as s2;
QUERY PLAN
------------------------------------------------------------------------------------------------------
- Result
+ SetResult
Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
(2 rows)
@@ -285,7 +285,7 @@ order by s2 desc;
Sort
Output: (generate_series(0, 2)), (generate_series(((random() * '0.1'::double precision))::integer, 2))
Sort Key: (generate_series(((random() * '0.1'::double precision))::integer, 2)) DESC
- -> Result
+ -> SetResult
Output: generate_series(0, 2), generate_series(((random() * '0.1'::double precision))::integer, 2)
(5 rows)
diff --git a/src/test/regress/expected/portals.out b/src/test/regress/expected/portals.out
index 3ae918a63c..b49fa17eb3 100644
--- a/src/test/regress/expected/portals.out
+++ b/src/test/regress/expected/portals.out
@@ -1322,14 +1322,14 @@ begin;
explain (costs off) declare c2 cursor for select generate_series(1,3) as g;
QUERY PLAN
------------
- Result
+ SetResult
(1 row)
explain (costs off) declare c2 scroll cursor for select generate_series(1,3) as g;
- QUERY PLAN
---------------
+ QUERY PLAN
+-----------------
Materialize
- -> Result
+ -> SetResult
(2 rows)
declare c2 scroll cursor for select generate_series(1,3) as g;
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index 3ed089aa46..0215c9a663 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -821,7 +821,7 @@ select * from int4_tbl o where (f1, f1) in
Filter: ("ANY_subquery".f1 = "ANY_subquery".g)
-> Result
Output: i.f1, ((generate_series(1, 2)) / 10)
- -> Result
+ -> SetResult
Output: i.f1, generate_series(1, 2)
-> HashAggregate
Output: i.f1
@@ -903,7 +903,7 @@ select * from
Subquery Scan on ss
Output: x, u
Filter: tattle(ss.x, 8)
- -> Result
+ -> SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
(5 rows)
@@ -934,10 +934,11 @@ select * from
where tattle(x, 8);
QUERY PLAN
----------------------------------------------------
- Result
+ SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
- One-Time Filter: tattle(9, 8)
-(3 rows)
+ -> Result
+ One-Time Filter: tattle(9, 8)
+(4 rows)
select * from
(select 9 as x, unnest(array[1,2,3,11,12,13]) as u) ss
@@ -963,7 +964,7 @@ select * from
Subquery Scan on ss
Output: x, u
Filter: tattle(ss.x, ss.u)
- -> Result
+ -> SetResult
Output: 9, unnest('{1,2,3,11,12,13}'::integer[])
(5 rows)
diff --git a/src/test/regress/expected/tsrf.out b/src/test/regress/expected/tsrf.out
index f257537925..8c47f0f668 100644
--- a/src/test/regress/expected/tsrf.out
+++ b/src/test/regress/expected/tsrf.out
@@ -25,8 +25,8 @@ SELECT generate_series(1, 2), generate_series(1,4);
-----------------+-----------------
1 | 1
2 | 2
- 1 | 3
- 2 | 4
+ | 3
+ | 4
(4 rows)
-- srf, with SRF argument
@@ -127,15 +127,15 @@ SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, unnest('{1,1,3}'::int[]);
dataa | count | min | max | unnest
-------+-------+-----+-----+--------
- a | 2 | 1 | 1 | 1
a | 1 | 1 | 1 | 3
+ a | 2 | 1 | 1 | 1
(2 rows)
SELECT few.dataa, count(*), min(id), max(id), unnest('{1,1,3}'::int[]) FROM few WHERE few.id = 1 GROUP BY few.dataa, 5;
dataa | count | min | max | unnest
-------+-------+-----+-----+--------
- a | 2 | 1 | 1 | 1
a | 1 | 1 | 1 | 3
+ a | 2 | 1 | 1 | 1
(2 rows)
-- check HAVING works when GROUP BY does [not] reference SRF output
diff --git a/src/test/regress/expected/union.out b/src/test/regress/expected/union.out
index 67f5fc4361..743d0bd0ed 100644
--- a/src/test/regress/expected/union.out
+++ b/src/test/regress/expected/union.out
@@ -636,7 +636,7 @@ ORDER BY x;
-> HashAggregate
Group Key: (1), (generate_series(1, 10))
-> Append
- -> Result
+ -> SetResult
-> Result
(9 rows)
--
2.11.0.22.g8d7a455.dirty
Attachment: 0003-Remove-obsoleted-code-relating-to-targetlist-SRF-eva.patch (text/x-patch; charset=us-ascii)
From 778db900f67a03d526def3d9c24ae5cd2c0c9050 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Mon, 16 Jan 2017 16:15:23 -0800
Subject: [PATCH 3/3] Remove obsoleted code relating to targetlist SRF
evaluation.
Author: Andres Freund
---
src/backend/catalog/index.c | 3 +-
src/backend/catalog/partition.c | 5 +-
src/backend/commands/copy.c | 2 +-
src/backend/commands/prepare.c | 3 +-
src/backend/commands/tablecmds.c | 3 +-
src/backend/commands/typecmds.c | 2 +-
src/backend/executor/execAmi.c | 44 +-
src/backend/executor/execQual.c | 937 ++++++++----------------------
src/backend/executor/execScan.c | 30 +-
src/backend/executor/execUtils.c | 6 -
src/backend/executor/nodeAgg.c | 52 +-
src/backend/executor/nodeBitmapHeapscan.c | 2 -
src/backend/executor/nodeCtescan.c | 2 -
src/backend/executor/nodeCustom.c | 2 -
src/backend/executor/nodeForeignscan.c | 2 -
src/backend/executor/nodeFunctionscan.c | 2 -
src/backend/executor/nodeGather.c | 25 +-
src/backend/executor/nodeGroup.c | 42 +-
src/backend/executor/nodeHash.c | 2 +-
src/backend/executor/nodeHashjoin.c | 58 +-
src/backend/executor/nodeIndexonlyscan.c | 2 -
src/backend/executor/nodeIndexscan.c | 11 +-
src/backend/executor/nodeLimit.c | 19 +-
src/backend/executor/nodeMergejoin.c | 59 +-
src/backend/executor/nodeModifyTable.c | 4 +-
src/backend/executor/nodeNestloop.c | 41 +-
src/backend/executor/nodeResult.c | 33 +-
src/backend/executor/nodeSamplescan.c | 8 +-
src/backend/executor/nodeSeqscan.c | 2 -
src/backend/executor/nodeSetResult.c | 2 +-
src/backend/executor/nodeSubplan.c | 31 +-
src/backend/executor/nodeSubqueryscan.c | 2 -
src/backend/executor/nodeTidscan.c | 8 +-
src/backend/executor/nodeValuesscan.c | 5 +-
src/backend/executor/nodeWindowAgg.c | 58 +-
src/backend/executor/nodeWorktablescan.c | 2 -
src/backend/optimizer/util/clauses.c | 4 +-
src/backend/optimizer/util/predtest.c | 2 +-
src/backend/utils/adt/domains.c | 2 +-
src/backend/utils/adt/xml.c | 4 +-
src/include/executor/executor.h | 15 +-
src/include/nodes/execnodes.h | 13 +-
src/pl/plpgsql/src/pl_exec.c | 5 +-
43 files changed, 352 insertions(+), 1204 deletions(-)
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index cac0cbf7d4..26cbc0e06a 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1805,8 +1805,7 @@ FormIndexDatum(IndexInfo *indexInfo,
elog(ERROR, "wrong number of index expressions");
iDatum = ExecEvalExprSwitchContext((ExprState *) lfirst(indexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
indexpr_item = lnext(indexpr_item);
}
values[i] = iDatum;
diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index 874e69d8d6..6dec75b59e 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -1339,7 +1339,7 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec)
test_exprstate = ExecInitExpr(test_expr, NULL);
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
MemoryContextSwitchTo(oldcxt);
FreeExecutorState(estate);
@@ -1610,8 +1610,7 @@ FormPartitionKeyDatum(PartitionDispatch pd,
elog(ERROR, "wrong number of partition key expressions");
datum = ExecEvalExprSwitchContext((ExprState *) lfirst(partexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
partexpr_item = lnext(partexpr_item);
}
values[i] = datum;
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 1fd2162794..ab666b9bdd 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -3395,7 +3395,7 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
Assert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);
values[defmap[i]] = ExecEvalExpr(defexprs[i], econtext,
- &nulls[defmap[i]], NULL);
+ &nulls[defmap[i]]);
}
return true;
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 1ff41661a5..7d7e3daf1e 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -413,8 +413,7 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
prm->pflags = PARAM_FLAG_CONST;
prm->value = ExecEvalExprSwitchContext(n,
GetPerTupleExprContext(estate),
- &prm->isnull,
- NULL);
+ &prm->isnull);
i++;
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index e633a50dd2..ae92b2c1b7 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4461,8 +4461,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
values[ex->attnum - 1] = ExecEvalExpr(ex->exprstate,
econtext,
- &isnull[ex->attnum - 1],
- NULL);
+ &isnull[ex->attnum - 1]);
}
/*
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 3ff6cbca56..4c33d55484 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2735,7 +2735,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin)
conResult = ExecEvalExprSwitchContext(exprstate,
econtext,
- &isNull, NULL);
+ &isNull);
if (!isNull && !DatumGetBool(conResult))
{
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index c9c222f446..a412cee11f 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -59,7 +59,6 @@
#include "utils/syscache.h"
-static bool TargetListSupportsBackwardScan(List *targetlist);
static bool IndexSupportsBackwardScan(Oid indexid);
@@ -120,7 +119,7 @@ ExecReScan(PlanState *node)
UpdateChangedParamSet(node->righttree, node->chgParam);
}
- /* Shut down any SRFs in the plan node's targetlist */
+ /* Call expression callbacks */
if (node->ps_ExprContext)
ReScanExprContext(node->ps_ExprContext);
@@ -460,8 +459,7 @@ ExecSupportsBackwardScan(Plan *node)
{
case T_Result:
if (outerPlan(node) != NULL)
- return ExecSupportsBackwardScan(outerPlan(node)) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(outerPlan(node));
else
return false;
@@ -478,13 +476,6 @@ ExecSupportsBackwardScan(Plan *node)
return true;
}
- case T_SeqScan:
- case T_TidScan:
- case T_FunctionScan:
- case T_ValuesScan:
- case T_CteScan:
- return TargetListSupportsBackwardScan(node->targetlist);
-
case T_SampleScan:
/* Simplify life for tablesample methods by disallowing this */
return false;
@@ -493,35 +484,34 @@ ExecSupportsBackwardScan(Plan *node)
return false;
case T_IndexScan:
- return IndexSupportsBackwardScan(((IndexScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexScan *) node)->indexid);
case T_IndexOnlyScan:
- return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid);
case T_SubqueryScan:
- return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan);
case T_CustomScan:
{
uint32 flags = ((CustomScan *) node)->flags;
- if ((flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN) &&
- TargetListSupportsBackwardScan(node->targetlist))
+ if (flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN)
return true;
}
return false;
+ case T_SeqScan:
+ case T_TidScan:
+ case T_FunctionScan:
+ case T_ValuesScan:
+ case T_CteScan:
case T_Material:
case T_Sort:
- /* these don't evaluate tlist */
return true;
case T_LockRows:
case T_Limit:
- /* these don't evaluate tlist */
return ExecSupportsBackwardScan(outerPlan(node));
default:
@@ -530,18 +520,6 @@ ExecSupportsBackwardScan(Plan *node)
}
/*
- * If the tlist contains set-returning functions, we can't support backward
- * scan, because the TupFromTlist code is direction-ignorant.
- */
-static bool
-TargetListSupportsBackwardScan(List *targetlist)
-{
- if (expression_returns_set((Node *) targetlist))
- return false;
- return true;
-}
-
-/*
* An IndexScan or IndexOnlyScan node supports backward scan only if the
* index's AM does.
*/
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index ad673ed8b7..a590bd7b28 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -64,40 +64,40 @@
/* static function decls */
static Datum ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static bool isAssignmentIndirectionExpr(ExprState *exprstate);
static Datum ExecEvalAggref(AggrefExprState *aggref,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWindowFunc(WindowFuncExprState *wfunc,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static void init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
MemoryContext fcacheCxt, bool needDescForSets);
static void ShutdownFuncExpr(Datum arg);
static TupleDesc get_cached_rowtype(Oid type_id, int32 typmod,
TupleDesc *cache_field, ExprContext *econtext);
static void ShutdownTupleDescRef(Datum arg);
-static ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
+static void ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList, ExprContext *econtext);
static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
ExprContext *econtext,
@@ -106,85 +106,85 @@ static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFunc(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOper(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalDistinct(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArray(ArrayExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoalesce(CoalesceExprState *coalesceExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalMinMax(MinMaxExprState *minmaxExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomain(CoerceToDomainState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
/* ----------------------------------------------------------------
@@ -195,8 +195,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* Each of the following routines having the signature
* Datum ExecEvalFoo(ExprState *expression,
* ExprContext *econtext,
- * bool *isNull,
- * ExprDoneCond *isDone);
+ * bool *isNull);
* is responsible for evaluating one type or subtype of ExprState node.
* They are normally called via the ExecEvalExpr macro, which makes use of
* the function pointer set up when the ExprState node was built by
@@ -220,22 +219,6 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* return value: Datum value of result
* *isNull: set to TRUE if result is NULL (actual return value is
* meaningless if so); set to FALSE if non-null result
- * *isDone: set to indicator of set-result status
- *
- * A caller that can only accept a singleton (non-set) result should pass
- * NULL for isDone; if the expression computes a set result then an error
- * will be reported via ereport. If the caller does pass an isDone pointer
- * then *isDone is set to one of these three states:
- * ExprSingleResult singleton result (not a set)
- * ExprMultipleResult return value is one element of a set
- * ExprEndResult there are no more elements in the set
- * When ExprMultipleResult is returned, the caller should invoke
- * ExecEvalExpr() repeatedly until ExprEndResult is returned. ExprEndResult
- * is returned after the last real set element. For convenience isNull will
- * always be set TRUE when ExprEndResult is returned, but this should not be
- * taken as indicating a NULL element of the set. Note that these return
- * conventions allow us to distinguish among a singleton NULL, a NULL element
- * of a set, and an empty set.
*
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
@@ -260,8 +243,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
static Datum
ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayRef *arrayRef = (ArrayRef *) astate->xprstate.expr;
Datum array_source;
@@ -278,8 +260,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
array_source = ExecEvalExpr(astate->refexpr,
econtext,
- isNull,
- isDone);
+ isNull);
/*
* If refexpr yields NULL, and it's a fetch, then result is NULL. In the
@@ -287,8 +268,6 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
if (*isNull)
{
- if (isDone && *isDone == ExprEndResult)
- return (Datum) NULL; /* end of set result */
if (!isAssignment)
return (Datum) NULL;
}
@@ -314,8 +293,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -350,8 +328,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -438,8 +415,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
sourceData = ExecEvalExpr(astate->refassgnexpr,
econtext,
- &eisnull,
- NULL);
+ &eisnull);
econtext->caseValue_datum = save_datum;
econtext->caseValue_isNull = save_isNull;
@@ -542,11 +518,8 @@ isAssignmentIndirectionExpr(ExprState *exprstate)
*/
static Datum
ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no aggregates in this expression context");
@@ -563,11 +536,8 @@ ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
*/
static Datum
ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no window functions in this expression context");
@@ -588,15 +558,12 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -677,15 +644,12 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -725,7 +689,7 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -733,9 +697,6 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
MemoryContext oldcontext;
bool needslow = false;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* This was checked by ExecInitExpr */
Assert(variable->varattno == InvalidAttrNumber);
@@ -941,7 +902,7 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
/* Fetch the value */
return (*wrvstate->xprstate.evalfunc) ((ExprState *) wrvstate, econtext,
- isNull, isDone);
+ isNull);
}
/* ----------------------------------------------------------------
@@ -952,14 +913,12 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
HeapTupleHeader dtuple;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1008,7 +967,7 @@ ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -1018,8 +977,6 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
HeapTupleHeader dtuple;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1097,13 +1054,10 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Const *con = (Const *) exprstate->expr;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = con->constisnull;
return con->constvalue;
}
@@ -1116,15 +1070,12 @@ ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamExecData *prm;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXEC params (internal executor parameters) are stored in the
* ecxt_param_exec_vals array, and can be accessed by array index.
@@ -1149,15 +1100,12 @@ ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamListInfo paramInfo = econtext->ecxt_param_list_info;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXTERN parameters must be sought in ecxt_param_list_info.
*/
@@ -1412,7 +1360,6 @@ init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
/* Initialize additional state */
fcache->funcResultStore = NULL;
fcache->funcResultSlot = NULL;
- fcache->setArgsValid = false;
fcache->shutdown_reg = false;
}
@@ -1499,47 +1446,26 @@ ShutdownTupleDescRef(Datum arg)
/*
* Evaluate arguments for a function.
*/
-static ExprDoneCond
+static void
ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList,
ExprContext *econtext)
{
- ExprDoneCond argIsDone;
int i;
ListCell *arg;
- argIsDone = ExprSingleResult; /* default assumption */
-
i = 0;
foreach(arg, argList)
{
ExprState *argstate = (ExprState *) lfirst(arg);
- ExprDoneCond thisArgIsDone;
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- &thisArgIsDone);
-
- if (thisArgIsDone != ExprSingleResult)
- {
- /*
- * We allow only one argument to have a set value; we'd need much
- * more complexity to keep track of multiple set arguments (cf.
- * ExecTargetList) and it doesn't seem worth it.
- */
- if (argIsDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("functions and operators can take at most one set argument")));
- argIsDone = thisArgIsDone;
- }
+ &fcinfo->argnull[i]);
i++;
}
Assert(i == fcinfo->nargs);
-
- return argIsDone;
}
/*
@@ -1686,9 +1612,10 @@ ExecMakeFunctionResultSet(FuncExprState *fcache,
FunctionCallInfo fcinfo;
PgStat_FunctionCallUsage fcusage;
ReturnSetInfo rsinfo; /* for functions returning sets */
- ExprDoneCond argDone;
- bool hasSetArg;
int i;
+ bool callit;
+
+ Assert(isDone);
restart:
@@ -1728,7 +1655,6 @@ restart:
*/
if (fcache->funcResultStore)
{
- Assert(isDone); /* it was provided before ... */
if (tuplestore_gettupleslot(fcache->funcResultStore, true, false,
fcache->funcResultSlot))
{
@@ -1748,15 +1674,9 @@ restart:
/* Exhausted the tuplestore, so clean up */
tuplestore_end(fcache->funcResultStore);
fcache->funcResultStore = NULL;
- /* We are done unless there was a set-valued argument */
- if (!fcache->setHasSetArg)
- {
- *isDone = ExprEndResult;
- *isNull = true;
- return (Datum) 0;
- }
- /* If there was, continue evaluating the argument values */
- Assert(!fcache->setArgsValid);
+ *isDone = ExprEndResult;
+ *isNull = true;
+ return (Datum) 0;
}
/*
@@ -1768,255 +1688,132 @@ restart:
fcinfo = &fcache->fcinfo_data;
arguments = fcache->args;
if (!fcache->setArgsValid)
- {
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
- if (argDone == ExprEndResult)
- {
- /* input is an empty set, so return an empty set. */
- *isNull = true;
- if (isDone)
- *isDone = ExprEndResult;
- else
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- return (Datum) 0;
- }
- hasSetArg = (argDone != ExprSingleResult);
- }
+ ExecEvalFuncArgs(fcinfo, arguments, econtext);
else
- {
- /* Re-use callinfo from previous evaluation */
- hasSetArg = fcache->setHasSetArg;
/* Reset flag (we may set it again below) */
fcache->setArgsValid = false;
- }
+
+ /* should only get here for a set-returning function */
+ Assert(fcache->func.fn_retset);
/*
* Now call the function, passing the evaluated parameter values.
*/
- if (fcache->func.fn_retset || hasSetArg)
+
+ /* Prepare a resultinfo node for communication. */
+ if (fcache->func.fn_retset)
+ fcinfo->resultinfo = (Node *) &rsinfo;
+ rsinfo.type = T_ReturnSetInfo;
+ rsinfo.econtext = econtext;
+ rsinfo.expectedDesc = fcache->funcResultDesc;
+ rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
+ /* note we do not set SFRM_Materialize_Random or _Preferred */
+ rsinfo.returnMode = SFRM_ValuePerCall;
+ /* isDone is filled below */
+ rsinfo.setResult = NULL;
+ rsinfo.setDesc = NULL;
+
+ /*
+ * If function is strict, and there are any NULL arguments, skip
+ * calling the function.
+ */
+ callit = true;
+ if (fcache->func.fn_strict)
{
- /*
- * We need to return a set result. Complain if caller not ready to
- * accept one.
- */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
- /*
- * Prepare a resultinfo node for communication. If the function
- * doesn't itself return set, we don't pass the resultinfo to the
- * function, but we need to fill it in anyway for internal use.
- */
- if (fcache->func.fn_retset)
- fcinfo->resultinfo = (Node *) &rsinfo;
- rsinfo.type = T_ReturnSetInfo;
- rsinfo.econtext = econtext;
- rsinfo.expectedDesc = fcache->funcResultDesc;
- rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
- /* note we do not set SFRM_Materialize_Random or _Preferred */
- rsinfo.returnMode = SFRM_ValuePerCall;
- /* isDone is filled below */
- rsinfo.setResult = NULL;
- rsinfo.setDesc = NULL;
-
- /*
- * This loop handles the situation where we have both a set argument
- * and a set-valued function. Once we have exhausted the function's
- * value(s) for a particular argument value, we have to get the next
- * argument value and start the function over again. We might have to
- * do it more than once, if the function produces an empty result set
- * for a particular input value.
- */
- for (;;)
+ for (i = 0; i < fcinfo->nargs; i++)
{
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function (at least for this set of args).
- */
- bool callit = true;
-
- if (fcache->func.fn_strict)
+ if (fcinfo->argnull[i])
{
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- callit = false;
- break;
- }
- }
- }
-
- if (callit)
- {
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- rsinfo.isDone = ExprSingleResult;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
- *isDone = rsinfo.isDone;
-
- pgstat_end_function_usage(&fcusage,
- rsinfo.isDone != ExprMultipleResult);
- }
- else if (fcache->func.fn_retset)
- {
- /* for a strict SRF, result for NULL is an empty set */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprEndResult;
- }
- else
- {
- /* for a strict non-SRF, result for NULL is a NULL */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprSingleResult;
- }
-
- /* Which protocol does function want to use? */
- if (rsinfo.returnMode == SFRM_ValuePerCall)
- {
- if (*isDone != ExprEndResult)
- {
- /*
- * Got a result from current argument. If function itself
- * returns set, save the current argument values to re-use
- * on the next call.
- */
- if (fcache->func.fn_retset &&
- *isDone == ExprMultipleResult)
- {
- fcache->setHasSetArg = hasSetArg;
- fcache->setArgsValid = true;
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
- }
-
- /*
- * Make sure we say we are returning a set, even if the
- * function itself doesn't return sets.
- */
- if (hasSetArg)
- *isDone = ExprMultipleResult;
- break;
- }
- }
- else if (rsinfo.returnMode == SFRM_Materialize)
- {
- /* check we're on the same page as the function author */
- if (rsinfo.isDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("table-function protocol for materialize mode was not followed")));
- if (rsinfo.setResult != NULL)
- {
- /* prepare to return values from the tuplestore */
- ExecPrepareTuplestoreResult(fcache, econtext,
- rsinfo.setResult,
- rsinfo.setDesc);
- /* remember whether we had set arguments */
- fcache->setHasSetArg = hasSetArg;
- /* loop back to top to start returning from tuplestore */
- goto restart;
- }
- /* if setResult was left null, treat it as empty set */
- *isDone = ExprEndResult;
- *isNull = true;
- result = (Datum) 0;
- }
- else
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("unrecognized table-function returnMode: %d",
- (int) rsinfo.returnMode)));
-
- /* Else, done with this argument */
- if (!hasSetArg)
- break; /* input not a set, so done */
-
- /* Re-eval args to get the next element of the input set */
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
-
- if (argDone != ExprMultipleResult)
- {
- /* End of argument set, so we're done. */
- *isNull = true;
- *isDone = ExprEndResult;
- result = (Datum) 0;
+ callit = false;
break;
}
-
- /*
- * If we reach here, loop around to run the function on the new
- * argument.
- */
}
}
- else
+
+ if (callit)
{
- /*
- * Non-set case: much easier.
- *
- * In common cases, this code path is unreachable because we'd have
- * selected ExecMakeFunctionResultNoSets instead. However, it's
- * possible to get here if an argument sometimes produces set results
- * and sometimes scalar results. For example, a CASE expression might
- * call a set-returning function in only some of its arms.
- */
- if (isDone)
- *isDone = ExprSingleResult;
-
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function and return NULL.
- */
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- *isNull = true;
- return (Datum) 0;
- }
- }
- }
-
pgstat_init_function_usage(fcinfo, &fcusage);
fcinfo->isnull = false;
+ rsinfo.isDone = ExprSingleResult;
result = FunctionCallInvoke(fcinfo);
*isNull = fcinfo->isnull;
+ *isDone = rsinfo.isDone;
- pgstat_end_function_usage(&fcusage, true);
+ pgstat_end_function_usage(&fcusage,
+ rsinfo.isDone != ExprMultipleResult);
+ }
+ else
+ {
+ /* for a strict SRF, result for NULL is an empty set */
+ result = (Datum) 0;
+ *isNull = true;
+ *isDone = ExprEndResult;
}
+ /* Which protocol does function want to use? */
+ if (rsinfo.returnMode == SFRM_ValuePerCall)
+ {
+ if (*isDone != ExprEndResult)
+ {
+ /*
+ * Got a result from current argument. Save the current
+ * argument values to re-use on the next call.
+ */
+ if (fcache->func.fn_retset &&
+ *isDone == ExprMultipleResult)
+ {
+ fcache->setArgsValid = true;
+ /* Register cleanup callback if we didn't already */
+ if (!fcache->shutdown_reg)
+ {
+ RegisterExprContextCallback(econtext,
+ ShutdownFuncExpr,
+ PointerGetDatum(fcache));
+ fcache->shutdown_reg = true;
+ }
+ }
+ }
+ }
+ else if (rsinfo.returnMode == SFRM_Materialize)
+ {
+ /* check we're on the same page as the function author */
+ if (rsinfo.isDone != ExprSingleResult)
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("table-function protocol for materialize mode was not followed")));
+ if (rsinfo.setResult != NULL)
+ {
+ /* prepare to return values from the tuplestore */
+ ExecPrepareTuplestoreResult(fcache, econtext,
+ rsinfo.setResult,
+ rsinfo.setDesc);
+ /* loop back to top to start returning from tuplestore */
+ goto restart;
+ }
+ /* if setResult was left null, treat it as empty set */
+ *isDone = ExprEndResult;
+ *isNull = true;
+ result = (Datum) 0;
+ }
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("unrecognized table-function returnMode: %d",
+ (int) rsinfo.returnMode)));
return result;
}
/*
* ExecMakeFunctionResultNoSets
*
- * Simplified version of ExecMakeFunctionResult that can only handle
- * non-set cases. Hand-tuned for speed.
+ * Evaluate a non-set-returning function or operator; the one-time
+ * initialization is handled by ExecEvalFunc/ExecEvalOper. Hand-tuned
+ * for speed.
*/
static Datum
ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
Datum result;
@@ -2027,9 +1824,6 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
/* Guard against stack overflow due to overly complex expressions */
check_stack_depth();
- if (isDone)
- *isDone = ExprSingleResult;
-
/* inlined, simplified version of ExecEvalFuncArgs */
fcinfo = &fcache->fcinfo_data;
i = 0;
@@ -2039,8 +1833,7 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -2137,7 +1930,6 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
IsA(funcexpr->expr, FuncExpr))
{
FuncExprState *fcache = (FuncExprState *) funcexpr;
- ExprDoneCond argDone;
/*
* This path is similar to ExecMakeFunctionResultSet.
@@ -2172,15 +1964,9 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
*/
MemoryContextReset(argContext);
oldcontext = MemoryContextSwitchTo(argContext);
- argDone = ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
+ ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
MemoryContextSwitchTo(oldcontext);
- /* We don't allow sets in the arguments of the table function */
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
/*
* If function is strict, and there are any NULL arguments, skip
* calling the function and act like it returned NULL (or an empty
@@ -2240,8 +2026,8 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
}
else
{
- result = ExecEvalExpr(funcexpr, econtext,
- &fcinfo.isnull, &rsinfo.isDone);
+ result = ExecEvalExpr(funcexpr, econtext, &fcinfo.isnull);
+ rsinfo.isDone = ExprSingleResult;
}
/* Which protocol does function want to use? */
@@ -2435,8 +2221,7 @@ no_function_result:
static Datum
ExecEvalFunc(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
@@ -2452,7 +2237,7 @@ ExecEvalFunc(FuncExprState *fcache,
/* Change the evalfunc pointer, to skip the above initialization. */
fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2462,8 +2247,7 @@ ExecEvalFunc(FuncExprState *fcache,
static Datum
ExecEvalOper(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
OpExpr *op = (OpExpr *) fcache->xprstate.expr;
@@ -2479,7 +2263,7 @@ ExecEvalOper(FuncExprState *fcache,
/* Change the evalfunc pointer, to skip the above initialization. */
fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2496,17 +2280,13 @@ ExecEvalOper(FuncExprState *fcache,
static Datum
ExecEvalDistinct(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2516,7 +2296,7 @@ ExecEvalDistinct(FuncExprState *fcache,
DistinctExpr *op = (DistinctExpr *) fcache->xprstate.expr;
init_fcache(op->opfuncid, op->inputcollid, fcache,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
Assert(!fcache->func.fn_retset);
}
@@ -2524,11 +2304,7 @@ ExecEvalDistinct(FuncExprState *fcache,
* Evaluate arguments
*/
fcinfo = &fcache->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("IS DISTINCT FROM does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
Assert(fcinfo->nargs == 2);
if (fcinfo->argnull[0] && fcinfo->argnull[1])
@@ -2564,7 +2340,7 @@ ExecEvalDistinct(FuncExprState *fcache,
static Datum
ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ScalarArrayOpExpr *opexpr = (ScalarArrayOpExpr *) sstate->fxprstate.xprstate.expr;
bool useOr = opexpr->useOr;
@@ -2573,7 +2349,6 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
Datum result;
bool resultnull;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
int i;
int16 typlen;
bool typbyval;
@@ -2582,10 +2357,8 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
bits8 *bitmap;
int bitmask;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2593,7 +2366,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
if (sstate->fxprstate.func.fn_oid == InvalidOid)
{
init_fcache(opexpr->opfuncid, opexpr->inputcollid, &sstate->fxprstate,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
Assert(!sstate->fxprstate.func.fn_retset);
}
@@ -2601,11 +2374,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
* Evaluate arguments
*/
fcinfo = &sstate->fxprstate.fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("op ANY/ALL (array) does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
Assert(fcinfo->nargs == 2);
/*
@@ -2751,15 +2520,12 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
*/
static Datum
ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ExprState *clause = linitial(notclause->args);
Datum expr_value;
- if (isDone)
- *isDone = ExprSingleResult;
-
- expr_value = ExecEvalExpr(clause, econtext, isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, isNull);
/*
* if the expression evaluates to null, then we just cascade the null back
@@ -2781,15 +2547,12 @@ ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
*/
static Datum
ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = orExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2810,7 +2573,7 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null true result, then return it.
@@ -2832,15 +2595,12 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
*/
static Datum
ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = andExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2857,7 +2617,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null false result, then return it.
@@ -2883,7 +2643,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
static Datum
ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ConvertRowtypeExpr *convert = (ConvertRowtypeExpr *) cstate->xprstate.expr;
HeapTuple result;
@@ -2891,7 +2651,7 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
HeapTupleHeader tuple;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull);
- /* this test covers the isDone exception too: */
if (*isNull)
@@ -2967,16 +2727,13 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
*/
static Datum
ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = caseExpr->args;
ListCell *clause;
Datum save_datum;
bool save_isNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* If there's a test expression, we have to evaluate it and save the value
* where the CaseTestExpr placeholders can find it. We must save and
@@ -3001,8 +2758,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
arg_value = ExecEvalExpr(caseExpr->arg,
econtext,
- &arg_isNull,
- NULL);
+ &arg_isNull);
/* Since caseValue_datum may be read multiple times, force to R/O */
econtext->caseValue_datum =
MakeExpandedObjectReadOnly(arg_value,
@@ -3024,8 +2780,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
clause_value = ExecEvalExpr(wclause->expr,
econtext,
- &clause_isNull,
- NULL);
+ &clause_isNull);
/*
* if we have a true test, then we return the result, since the case
@@ -3038,8 +2793,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
econtext->caseValue_isNull = save_isNull;
return ExecEvalExpr(wclause->result,
econtext,
- isNull,
- isDone);
+ isNull);
}
}
@@ -3050,8 +2804,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
{
return ExecEvalExpr(caseExpr->defresult,
econtext,
- isNull,
- isDone);
+ isNull);
}
*isNull = true;
@@ -3066,10 +2819,8 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
static Datum
ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->caseValue_isNull;
return econtext->caseValue_datum;
}
@@ -3086,17 +2837,13 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
static Datum
ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
int result = 0;
int attnum = 0;
Bitmapset *grouped_cols = gstate->aggstate->grouped_cols;
ListCell *lc;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = false;
foreach(lc, (gstate->clauses))
@@ -3118,7 +2865,7 @@ ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
*/
static Datum
ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayExpr *arrayExpr = (ArrayExpr *) astate->xprstate.expr;
ArrayType *result;
@@ -3128,10 +2875,8 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
int dims[MAXDIM];
int lbs[MAXDIM];
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
if (!arrayExpr->multidims)
{
@@ -3156,7 +2901,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(element);
- dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i], NULL);
+ dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i]);
i++;
}
@@ -3206,7 +2951,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
ArrayType *array;
int this_ndims;
- arraydatum = ExecEvalExpr(e, econtext, &eisnull, NULL);
+ arraydatum = ExecEvalExpr(e, econtext, &eisnull);
/* temporarily ignore null subarrays */
if (eisnull)
{
@@ -3345,7 +3090,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
static Datum
ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
HeapTuple tuple;
Datum *values;
@@ -3354,10 +3099,8 @@ ExecEvalRow(RowExprState *rstate,
ListCell *arg;
int i;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default values for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Allocate workspace */
natts = rstate->tupdesc->natts;
@@ -3373,7 +3116,7 @@ ExecEvalRow(RowExprState *rstate,
{
ExprState *e = (ExprState *) lfirst(arg);
- values[i] = ExecEvalExpr(e, econtext, &isnull[i], NULL);
+ values[i] = ExecEvalExpr(e, econtext, &isnull[i]);
i++;
}
@@ -3392,7 +3135,7 @@ ExecEvalRow(RowExprState *rstate,
static Datum
ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
bool result;
RowCompareType rctype = ((RowCompareExpr *) rstate->xprstate.expr)->rctype;
@@ -3401,8 +3144,6 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
ListCell *r;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
i = 0;
@@ -3416,9 +3157,9 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
rstate->collations[i],
NULL, NULL);
locfcinfo.arg[0] = ExecEvalExpr(le, econtext,
- &locfcinfo.argnull[0], NULL);
+ &locfcinfo.argnull[0]);
locfcinfo.arg[1] = ExecEvalExpr(re, econtext,
- &locfcinfo.argnull[1], NULL);
+ &locfcinfo.argnull[1]);
if (rstate->funcs[i].fn_strict &&
(locfcinfo.argnull[0] || locfcinfo.argnull[1]))
return (Datum) 0; /* force NULL result */
@@ -3462,20 +3203,17 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
*/
static Datum
ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Simply loop through until something NOT NULL is found */
foreach(arg, coalesceExpr->args)
{
ExprState *e = (ExprState *) lfirst(arg);
Datum value;
- value = ExecEvalExpr(e, econtext, isNull, NULL);
+ value = ExecEvalExpr(e, econtext, isNull);
if (!*isNull)
return value;
}
@@ -3491,7 +3229,7 @@ ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
*/
static Datum
ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
MinMaxExpr *minmax = (MinMaxExpr *) minmaxExpr->xprstate.expr;
@@ -3500,8 +3238,6 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
FunctionCallInfoData locfcinfo;
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
InitFunctionCallInfoData(locfcinfo, &minmaxExpr->cfunc, 2,
@@ -3516,7 +3252,7 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
bool valueIsNull;
int32 cmpresult;
- value = ExecEvalExpr(e, econtext, &valueIsNull, NULL);
+ value = ExecEvalExpr(e, econtext, &valueIsNull);
if (valueIsNull)
continue; /* ignore NULL inputs */
@@ -3552,14 +3288,12 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
static Datum
ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
SQLValueFunction *svf = (SQLValueFunction *) svfExpr->expr;
FunctionCallInfoData fcinfo;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/*
@@ -3620,7 +3354,7 @@ ExecEvalSQLValueFunction(ExprState *svfExpr,
*/
static Datum
ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
XmlExpr *xexpr = (XmlExpr *) xmlExpr->xprstate.expr;
Datum value;
@@ -3628,8 +3362,6 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ListCell *arg;
ListCell *narg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
switch (xexpr->op)
@@ -3642,7 +3374,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(arg);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
values = lappend(values, DatumGetPointer(value));
}
@@ -3667,7 +3399,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ExprState *e = (ExprState *) lfirst(arg);
char *argname = strVal(lfirst(narg));
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
{
appendStringInfo(&buf, "<%s>%s</%s>",
@@ -3710,13 +3442,13 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 2);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetTextP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull) /* probably can't happen */
return (Datum) 0;
preserve_whitespace = DatumGetBool(value);
@@ -3740,7 +3472,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
if (xmlExpr->args)
{
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
arg = NULL;
else
@@ -3767,20 +3499,20 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 3);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetXmlP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
version = NULL;
else
version = DatumGetTextP(value);
e = (ExprState *) lthird(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
standalone = DatumGetInt32(value);
*isNull = false;
@@ -3799,7 +3531,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
@@ -3817,7 +3549,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
else
@@ -3844,14 +3576,10 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
static Datum
ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
-
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -3861,7 +3589,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
NullIfExpr *op = (NullIfExpr *) nullIfExpr->xprstate.expr;
init_fcache(op->opfuncid, op->inputcollid, nullIfExpr,
- econtext->ecxt_per_query_memory, true);
+ econtext->ecxt_per_query_memory, false);
Assert(!nullIfExpr->func.fn_retset);
}
@@ -3869,11 +3597,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
* Evaluate arguments
*/
fcinfo = &nullIfExpr->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("NULLIF does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
Assert(fcinfo->nargs == 2);
/* if either argument is NULL they can't be equal */
@@ -3903,16 +3627,12 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
static Datum
ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
NullTest *ntest = (NullTest *) nstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(nstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(nstate->arg, econtext, isNull);
if (ntest->argisrow && !(*isNull))
{
@@ -4012,16 +3732,12 @@ ExecEvalNullTest(NullTestState *nstate,
static Datum
ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
BooleanTest *btest = (BooleanTest *) bstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(bstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(bstate->arg, econtext, isNull);
switch (btest->booltesttype)
{
@@ -4097,16 +3813,13 @@ ExecEvalBooleanTest(GenericExprState *bstate,
*/
static Datum
ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
CoerceToDomain *ctest = (CoerceToDomain *) cstate->xprstate.expr;
Datum result;
ListCell *l;
- result = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(cstate->arg, econtext, isNull);
/* Make sure we have up-to-date constraints */
UpdateDomainConstraintRef(cstate->constraint_ref);
@@ -4151,8 +3864,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
cstate->constraint_ref->tcache->typlen);
econtext->domainValue_isNull = *isNull;
- conResult = ExecEvalExpr(con->check_expr,
- econtext, &conIsNull, NULL);
+ conResult = ExecEvalExpr(con->check_expr, econtext,
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
@@ -4187,10 +3900,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
static Datum
ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->domainValue_isNull;
return econtext->domainValue_datum;
}
@@ -4204,8 +3915,7 @@ ExecEvalCoerceToDomainValue(ExprState *exprstate,
static Datum
ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldSelect *fselect = (FieldSelect *) fstate->xprstate.expr;
AttrNumber fieldnum = fselect->fieldnum;
@@ -4218,9 +3928,8 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
Form_pg_attribute attr;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
- /* this test covers the isDone exception too: */
if (*isNull)
return tupDatum;
@@ -4283,8 +3992,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
static Datum
ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldStore *fstore = (FieldStore *) fstate->xprstate.expr;
HeapTuple tuple;
@@ -4297,10 +4005,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
ListCell *l1,
*l2;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return tupDatum;
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
/* Lookup tupdesc if first time through or after rescan */
tupDesc = get_cached_rowtype(fstore->resulttype, -1,
@@ -4360,8 +4065,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
values[fieldnum - 1] = ExecEvalExpr(newval,
econtext,
- &isnull[fieldnum - 1],
- NULL);
+ &isnull[fieldnum - 1]);
}
econtext->caseValue_datum = save_datum;
@@ -4384,9 +4088,9 @@ ExecEvalFieldStore(FieldStoreState *fstate,
static Datum
ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- return ExecEvalExpr(exprstate->arg, econtext, isNull, isDone);
+ return ExecEvalExpr(exprstate->arg, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -4398,16 +4102,13 @@ ExecEvalRelabelType(GenericExprState *exprstate,
static Datum
ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
Datum inputval;
char *string;
- inputval = ExecEvalExpr(iostate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return inputval; /* nothing to do */
+ inputval = ExecEvalExpr(iostate->arg, econtext, isNull);
if (*isNull)
string = NULL; /* output functions are not called on nulls */
@@ -4432,16 +4133,14 @@ ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
static Datum
ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) astate->xprstate.expr;
Datum result;
FunctionCallInfoData locfcinfo;
- result = ExecEvalExpr(astate->arg, econtext, isNull, isDone);
+ result = ExecEvalExpr(astate->arg, econtext, isNull);
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to do */
if (*isNull)
return result; /* nothing to do */
@@ -4509,7 +4208,7 @@ ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
*/
static Datum
ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -4526,14 +4225,13 @@ ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
Datum
ExecEvalExprSwitchContext(ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum retDatum;
MemoryContext oldContext;
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
- retDatum = ExecEvalExpr(expression, econtext, isNull, isDone);
+ retDatum = ExecEvalExpr(expression, econtext, isNull);
MemoryContextSwitchTo(oldContext);
return retDatum;
}
@@ -5395,7 +5093,7 @@ ExecQual(List *qual, ExprContext *econtext, bool resultForNull)
Datum expr_value;
bool isNull;
- expr_value = ExecEvalExpr(clause, econtext, &isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, &isNull);
if (isNull)
{
@@ -5453,17 +5151,9 @@ ExecCleanTargetListLength(List *targetlist)
/*
* ExecTargetList
* Evaluates a targetlist with respect to the given
- * expression context. Returns TRUE if we were able to create
- * a result, FALSE if we have exhausted a set-valued expression.
+ * expression context.
*
* Results are stored into the passed values and isnull arrays.
- * The caller must provide an itemIsDone array that persists across calls.
- *
- * As with ExecEvalExpr, the caller should pass isDone = NULL if not
- * prepared to deal with sets of result tuples. Otherwise, a return
- * of *isDone = ExprMultipleResult signifies a set element, and a return
- * of *isDone = ExprEndResult signifies end of the set of tuple.
- * We assume that *isDone has been initialized to ExprSingleResult by caller.
*
* Since fields of the result tuple might be multiply referenced in higher
* plan nodes, we have to force any read/write expanded values to read-only
@@ -5472,19 +5162,16 @@ ExecCleanTargetListLength(List *targetlist)
* actually-multiply-referenced Vars and insert an expression node that
* would do that only where really required.
*/
-static bool
+static void
ExecTargetList(List *targetlist,
TupleDesc tupdesc,
ExprContext *econtext,
Datum *values,
- bool *isnull,
- ExprDoneCond *itemIsDone,
- ExprDoneCond *isDone)
+ bool *isnull)
{
Form_pg_attribute *att = tupdesc->attrs;
MemoryContext oldContext;
ListCell *tl;
- bool haveDoneSets;
/*
* Run in short-lived per-tuple context while computing expressions.
@@ -5494,8 +5181,6 @@ ExecTargetList(List *targetlist,
/*
* evaluate all the expressions in the target list
*/
- haveDoneSets = false; /* any exhausted set exprs in tlist? */
-
foreach(tl, targetlist)
{
GenericExprState *gstate = (GenericExprState *) lfirst(tl);
@@ -5504,117 +5189,15 @@ ExecTargetList(List *targetlist,
values[resind] = ExecEvalExpr(gstate->arg,
econtext,
- &isnull[resind],
- &itemIsDone[resind]);
+ &isnull[resind]);
values[resind] = MakeExpandedObjectReadOnly(values[resind],
isnull[resind],
att[resind]->attlen);
-
- if (itemIsDone[resind] != ExprSingleResult)
- {
- /* We have a set-valued expression in the tlist */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- if (itemIsDone[resind] == ExprMultipleResult)
- {
- /* we have undone sets in the tlist, set flag */
- *isDone = ExprMultipleResult;
- }
- else
- {
- /* we have done sets in the tlist, set flag for that */
- haveDoneSets = true;
- }
- }
- }
-
- if (haveDoneSets)
- {
- /*
- * note: can't get here unless we verified isDone != NULL
- */
- if (*isDone == ExprSingleResult)
- {
- /*
- * all sets are done, so report that tlist expansion is complete.
- */
- *isDone = ExprEndResult;
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- else
- {
- /*
- * We have some done and some undone sets. Restart the done ones
- * so that we can deliver a tuple (if possible).
- */
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
-
- values[resind] = MakeExpandedObjectReadOnly(values[resind],
- isnull[resind],
- att[resind]->attlen);
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- /*
- * Oh dear, this item is returning an empty set. Guess
- * we can't make a tuple after all.
- */
- *isDone = ExprEndResult;
- break;
- }
- }
- }
-
- /*
- * If we cannot make a tuple because some sets are empty, we still
- * have to cycle the nonempty sets to completion, else resources
- * will not be released from subplans etc.
- *
- * XXX is that still necessary?
- */
- if (*isDone == ExprEndResult)
- {
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- while (itemIsDone[resind] == ExprMultipleResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
- /* no need for MakeExpandedObjectReadOnly */
- }
- }
-
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- }
}
/* Report success */
MemoryContextSwitchTo(oldContext);
-
- return true;
}
/*
@@ -5631,7 +5214,7 @@ ExecTargetList(List *targetlist,
* result slot.
*/
TupleTableSlot *
-ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
+ExecProject(ProjectionInfo *projInfo)
{
TupleTableSlot *slot;
ExprContext *econtext;
@@ -5648,10 +5231,6 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
slot = projInfo->pi_slot;
econtext = projInfo->pi_exprContext;
- /* Assume single result row until proven otherwise */
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* Clear any former contents of the result slot. This makes it safe for
* us to use the slot's Datum/isnull arrays as workspace. (Also, we can
@@ -5719,21 +5298,15 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
}
/*
- * If there are any generic expressions, evaluate them. It's possible
- * that there are set-returning functions in such expressions; if so and
- * we have reached the end of the set, we return the result slot, which we
- * already marked empty.
+ * If there are any generic expressions, evaluate them.
*/
if (projInfo->pi_targetlist)
{
- if (!ExecTargetList(projInfo->pi_targetlist,
- slot->tts_tupleDescriptor,
- econtext,
- slot->tts_values,
- slot->tts_isnull,
- projInfo->pi_itemIsDone,
- isDone))
- return slot; /* no more result rows, return empty slot */
+ ExecTargetList(projInfo->pi_targetlist,
+ slot->tts_tupleDescriptor,
+ econtext,
+ slot->tts_values,
+ slot->tts_isnull);
}
/*
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index f97db9c211..c0e4641750 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -125,8 +125,6 @@ ExecScan(ScanState *node,
ExprContext *econtext;
List *qual;
ProjectionInfo *projInfo;
- ExprDoneCond isDone;
- TupleTableSlot *resultSlot;
/*
* Fetch data from node
@@ -146,21 +144,6 @@ ExecScan(ScanState *node,
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- Assert(projInfo); /* can't get here if not projecting */
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -214,15 +197,9 @@ ExecScan(ScanState *node,
{
/*
* Form a projection tuple, store it in the result tuple slot
- * and return it --- unless we find we can project no tuples
- * from this scan tuple, in which case continue scan.
+ * and return it.
*/
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(projInfo);
}
else
{
@@ -352,9 +329,6 @@ ExecScanReScan(ScanState *node)
{
EState *estate = node->ps.state;
- /* Stop projecting any tuples from SRFs in the targetlist */
- node->ps.ps_TupFromTlist = false;
-
/* Rescan EvalPlanQual tuple if we're inside an EvalPlanQual recheck */
if (estate->es_epqScanDone != NULL)
{
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 70646fd15a..e49feff6c0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -586,12 +586,6 @@ ExecBuildProjectionInfo(List *targetList,
projInfo->pi_numSimpleVars = numSimpleVars;
projInfo->pi_directMap = directMap;
- if (exprlist == NIL)
- projInfo->pi_itemIsDone = NULL; /* not needed */
- else
- projInfo->pi_itemIsDone = (ExprDoneCond *)
- palloc(len * sizeof(ExprDoneCond));
-
return projInfo;
}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index dc64b3262a..e4992134bd 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -854,7 +854,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
/* compute input for all aggregates */
if (aggstate->evalproj)
- aggstate->evalslot = ExecProject(aggstate->evalproj, NULL);
+ aggstate->evalslot = ExecProject(aggstate->evalproj);
for (transno = 0; transno < numTrans; transno++)
{
@@ -871,7 +871,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
bool isnull;
res = ExecEvalExprSwitchContext(filter, aggstate->tmpcontext,
- &isnull, NULL);
+ &isnull);
if (isnull || !DatumGetBool(res))
continue;
}
@@ -970,7 +970,7 @@ combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
Assert(aggstate->phase->numsets == 0);
/* compute input for all aggregates */
- slot = ExecProject(aggstate->evalproj, NULL);
+ slot = ExecProject(aggstate->evalproj);
for (transno = 0; transno < numTrans; transno++)
{
@@ -1368,8 +1368,7 @@ finalize_aggregate(AggState *aggstate,
fcinfo.arg[i] = ExecEvalExpr(expr,
aggstate->ss.ps.ps_ExprContext,
- &fcinfo.argnull[i],
- NULL);
+ &fcinfo.argnull[i]);
anynull |= fcinfo.argnull[i];
i++;
}
@@ -1630,7 +1629,7 @@ finalize_aggregates(AggState *aggstate,
/*
* Project the result of a group (whose aggs have already been calculated by
* finalize_aggregates). Returns the result slot, or NULL if no row is
- * projected (suppressed by qual or by an empty SRF).
+ * projected (suppressed by qual).
*/
static TupleTableSlot *
project_aggregates(AggState *aggstate)
@@ -1643,20 +1642,10 @@ project_aggregates(AggState *aggstate)
if (ExecQual(aggstate->ss.ps.qual, econtext, false))
{
/*
- * Form and return or store a projection tuple using the aggregate
- * results and the representative input tuple.
+ * Form and return projection tuple using the aggregate results and
+ * the representative input tuple.
*/
- ExprDoneCond isDone;
- TupleTableSlot *result;
-
- result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- aggstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(aggstate->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(aggstate, 1);
@@ -1911,27 +1900,6 @@ ExecAgg(AggState *node)
{
TupleTableSlot *result;
- /*
- * Check to see if we're still projecting out tuples from a previous agg
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
- * (We must do the ps_TupFromTlist check first, because in some cases
- * agg_done gets set before we emit the final aggregate tuple, and we have
- * to finish running SRFs for it.)
- */
if (!node->agg_done)
{
/* Dispatch based on strategy */
@@ -2571,8 +2539,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&aggstate->ss.ps);
ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
- aggstate->ss.ps.ps_TupFromTlist = false;
-
/*
* get the count of aggregates in targetlist and quals
*/
@@ -3575,8 +3541,6 @@ ExecReScanAgg(AggState *node)
node->agg_done = false;
- node->ss.ps.ps_TupFromTlist = false;
-
if (aggnode->aggstrategy == AGG_HASHED)
{
/*
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index d5fd57ae4b..f18827de0b 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -575,8 +575,6 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c
index 2f9c007409..610797b36b 100644
--- a/src/backend/executor/nodeCtescan.c
+++ b/src/backend/executor/nodeCtescan.c
@@ -269,8 +269,6 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
ExecAssignScanProjectionInfo(&scanstate->ss);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c
index b01e65f362..a27430242a 100644
--- a/src/backend/executor/nodeCustom.c
+++ b/src/backend/executor/nodeCustom.c
@@ -48,8 +48,6 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags)
/* create expression context for node */
ExecAssignExprContext(estate, &css->ss.ps);
- css->ss.ps.ps_TupFromTlist = false;
-
/* initialize child expressions */
css->ss.ps.targetlist = (List *)
ExecInitExpr((Expr *) cscan->scan.plan.targetlist,
diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c
index 8f21c17f24..86a77e356c 100644
--- a/src/backend/executor/nodeForeignscan.c
+++ b/src/backend/executor/nodeForeignscan.c
@@ -152,8 +152,6 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index 1b593dcd71..972022784d 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -331,8 +331,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index f95c3d1b19..92b361ebb3 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -100,8 +100,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
outerNode = outerPlan(node);
outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);
- gatherstate->ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -132,8 +130,6 @@ ExecGather(GatherState *node)
TupleTableSlot *fslot = node->funnel_slot;
int i;
TupleTableSlot *slot;
- TupleTableSlot *resultSlot;
- ExprDoneCond isDone;
ExprContext *econtext;
/*
@@ -200,20 +196,6 @@ ExecGather(GatherState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note we can't do this
* until we're done projecting. This will also clear any previous tuple
@@ -241,13 +223,8 @@ ExecGather(GatherState *node)
* back around for another tuple
*/
econtext->ecxt_outertuple = slot;
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(node->ps.ps_ProjInfo);
}
return slot;
diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c
index 6a05023e50..66c095bc72 100644
--- a/src/backend/executor/nodeGroup.c
+++ b/src/backend/executor/nodeGroup.c
@@ -50,23 +50,6 @@ ExecGroup(GroupState *node)
grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx;
/*
- * Check to see if we're still projecting out tuples from a previous group
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* The ScanTupleSlot holds the (copied) first tuple of each group.
*/
firsttupleslot = node->ss.ss_ScanTupleSlot;
@@ -107,16 +90,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -170,16 +144,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -246,8 +211,6 @@ ExecInitGroup(Group *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&grpstate->ss.ps);
ExecAssignProjectionInfo(&grpstate->ss.ps, NULL);
- grpstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Precompute fmgr lookup data for inner loop
*/
@@ -283,7 +246,6 @@ ExecReScanGroup(GroupState *node)
PlanState *outerPlan = outerPlanState(node);
node->grp_done = FALSE;
- node->ss.ps.ps_TupFromTlist = false;
/* must clear first tuple */
ExecClearTuple(node->ss.ss_ScanTupleSlot);
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 11db08f5fa..af5934d2bc 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -959,7 +959,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
/*
* Get the join attribute value of the tuple
*/
- keyval = ExecEvalExpr(keyexpr, econtext, &isNull, NULL);
+ keyval = ExecEvalExpr(keyexpr, econtext, &isNull);
/*
* If the attribute is NULL, and the join operator is strict, then
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index b41e4e2f98..f34e476bad 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -66,7 +66,6 @@ ExecHashJoin(HashJoinState *node)
List *joinqual;
List *otherqual;
ExprContext *econtext;
- ExprDoneCond isDone;
HashJoinTable hashtable;
TupleTableSlot *outerTupleSlot;
uint32 hashvalue;
@@ -83,22 +82,6 @@ ExecHashJoin(HashJoinState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -314,18 +297,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
}
@@ -353,18 +325,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
}
@@ -392,18 +353,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
break;
@@ -586,7 +536,6 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
/* child Hash node needs to evaluate inner hash keys, too */
((HashState *) innerPlanState(hjstate))->hashkeys = rclauses;
- hjstate->js.ps.ps_TupFromTlist = false;
hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
hjstate->hj_MatchedOuter = false;
hjstate->hj_OuterNotEmpty = false;
@@ -1000,7 +949,6 @@ ExecReScanHashJoin(HashJoinState *node)
node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
node->hj_CurTuple = NULL;
- node->js.ps.ps_TupFromTlist = false;
node->hj_MatchedOuter = false;
node->hj_FirstOuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index ddef3a42bf..d5b19b7c11 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -412,8 +412,6 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 97a6fac34d..5734550d2c 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -336,8 +336,7 @@ EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext)
node->iss_OrderByValues[i] = ExecEvalExpr(orderby,
econtext,
- &node->iss_OrderByNulls[i],
- NULL);
+ &node->iss_OrderByNulls[i]);
i++;
}
@@ -590,8 +589,7 @@ ExecIndexEvalRuntimeKeys(ExprContext *econtext,
*/
scanvalue = ExecEvalExpr(key_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
scan_key->sk_argument = scanvalue;
@@ -648,8 +646,7 @@ ExecIndexEvalArrayKeys(ExprContext *econtext,
*/
arraydatum = ExecEvalExpr(array_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
result = false;
@@ -837,8 +834,6 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c
index 885931e594..aaec132218 100644
--- a/src/backend/executor/nodeLimit.c
+++ b/src/backend/executor/nodeLimit.c
@@ -239,8 +239,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitOffset,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL offset as no offset */
if (isNull)
node->offset = 0;
@@ -263,8 +262,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitCount,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL count as no count (LIMIT ALL) */
if (isNull)
{
@@ -346,18 +344,11 @@ pass_down_bound(LimitState *node, PlanState *child_node)
else if (IsA(child_node, ResultState))
{
/*
- * An extra consideration here is that if the Result is projecting a
- * targetlist that contains any SRFs, we can't assume that every input
- * tuple generates an output tuple, so a Sort underneath might need to
- * return more than N tuples to satisfy LIMIT N. So we cannot use
- * bounded sort.
- *
* If Result supported qual checking, we'd have to punt on seeing a
- * qual, too. Note that having a resconstantqual is not a
- * showstopper: if that fails we're not getting any rows at all.
+ * qual. Note that having a resconstantqual is not a showstopper: if
+ * that fails we're not getting any rows at all.
*/
- if (outerPlanState(child_node) &&
- !expression_returns_set((Node *) child_node->plan->targetlist))
+ if (outerPlanState(child_node))
pass_down_bound(node, outerPlanState(child_node));
}
}
diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c
index 2fd1856603..5150776b00 100644
--- a/src/backend/executor/nodeMergejoin.c
+++ b/src/backend/executor/nodeMergejoin.c
@@ -313,7 +313,7 @@ MJEvalOuterValues(MergeJoinState *mergestate)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->ldatum = ExecEvalExpr(clause->lexpr, econtext,
- &clause->lisnull, NULL);
+ &clause->lisnull);
if (clause->lisnull)
{
/* match is impossible; can we end the join early? */
@@ -360,7 +360,7 @@ MJEvalInnerValues(MergeJoinState *mergestate, TupleTableSlot *innerslot)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->rdatum = ExecEvalExpr(clause->rexpr, econtext,
- &clause->risnull, NULL);
+ &clause->risnull);
if (clause->risnull)
{
/* match is impossible; can we end the join early? */
@@ -465,19 +465,10 @@ MJFillOuter(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
MJ_printf("ExecMergeJoin: returning outer fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -506,19 +497,9 @@ MJFillInner(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning inner fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -642,23 +623,6 @@ ExecMergeJoin(MergeJoinState *node)
doFillInner = node->mj_FillInner;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -856,20 +820,9 @@ ExecMergeJoin(MergeJoinState *node)
* qualification succeeded. now form the desired
* projection tuple and return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo,
- &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -1629,7 +1582,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
* initialize join state
*/
mergestate->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- mergestate->js.ps.ps_TupFromTlist = false;
mergestate->mj_MatchedOuter = false;
mergestate->mj_MatchedInner = false;
mergestate->mj_OuterTupleSlot = NULL;
@@ -1684,7 +1636,6 @@ ExecReScanMergeJoin(MergeJoinState *node)
ExecClearTuple(node->mj_MarkedTupleSlot);
node->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- node->js.ps.ps_TupFromTlist = false;
node->mj_MatchedOuter = false;
node->mj_MatchedInner = false;
node->mj_OuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 4692427e60..dab9c4129a 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -175,7 +175,7 @@ ExecProcessReturning(ResultRelInfo *resultRelInfo,
econtext->ecxt_outertuple = planSlot;
/* Compute the RETURNING expressions */
- return ExecProject(projectReturning, NULL);
+ return ExecProject(projectReturning);
}
/*
@@ -1302,7 +1302,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
}
/* Project the new tuple version */
- ExecProject(resultRelInfo->ri_onConflictSetProj, NULL);
+ ExecProject(resultRelInfo->ri_onConflictSetProj);
/*
* Note that it is possible that the target tuple has been modified in
diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c
index e05842768a..5af04fde04 100644
--- a/src/backend/executor/nodeNestloop.c
+++ b/src/backend/executor/nodeNestloop.c
@@ -82,23 +82,6 @@ ExecNestLoop(NestLoopState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -201,19 +184,10 @@ ExecNestLoop(NestLoopState *node)
* the slot containing the result tuple using
* ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -259,19 +233,10 @@ ExecNestLoop(NestLoopState *node)
* qualification was satisfied so we project and return the
* slot containing the result tuple using ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -377,7 +342,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags)
/*
* finally, wipe the current outer tuple clean.
*/
- nlstate->js.ps.ps_TupFromTlist = false;
nlstate->nl_NeedNewOuter = true;
nlstate->nl_MatchedOuter = false;
@@ -441,7 +405,6 @@ ExecReScanNestLoop(NestLoopState *node)
* outer Vars are used as run-time keys...
*/
- node->js.ps.ps_TupFromTlist = false;
node->nl_NeedNewOuter = true;
node->nl_MatchedOuter = false;
}
diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c
index 59dacd33ef..759cbe6aec 100644
--- a/src/backend/executor/nodeResult.c
+++ b/src/backend/executor/nodeResult.c
@@ -67,10 +67,8 @@ TupleTableSlot *
ExecResult(ResultState *node)
{
TupleTableSlot *outerTupleSlot;
- TupleTableSlot *resultSlot;
PlanState *outerPlan;
ExprContext *econtext;
- ExprDoneCond isDone;
econtext = node->ps.ps_ExprContext;
@@ -92,20 +90,6 @@ ExecResult(ResultState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -147,18 +131,8 @@ ExecResult(ResultState *node)
node->rs_done = true;
}
- /*
- * form the result tuple using ExecProject(), and return it --- unless
- * the projection produces an empty set, in which case we must loop
- * back to see if there are more outerPlan tuples.
- */
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ /* form the result tuple using ExecProject(), and return it */
+ return ExecProject(node->ps.ps_ProjInfo);
}
return NULL;
@@ -228,8 +202,6 @@ ExecInitResult(Result *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &resstate->ps);
- resstate->ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
@@ -295,7 +267,6 @@ void
ExecReScanResult(ResultState *node)
{
node->rs_done = false;
- node->ps.ps_TupFromTlist = false;
node->rs_checkqual = (node->resconstantqual == NULL) ? false : true;
/*
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 9c686a045b..0b34fa9149 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -188,8 +188,6 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -299,8 +297,7 @@ tablesample_init(SampleScanState *scanstate)
params[i] = ExecEvalExprSwitchContext(argstate,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_ARGUMENT),
@@ -312,8 +309,7 @@ tablesample_init(SampleScanState *scanstate)
{
datum = ExecEvalExprSwitchContext(scanstate->repeatable,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_REPEAT),
diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c
index 439a94694b..e61895de0a 100644
--- a/src/backend/executor/nodeSeqscan.c
+++ b/src/backend/executor/nodeSeqscan.c
@@ -206,8 +206,6 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeSetResult.c b/src/backend/executor/nodeSetResult.c
index 6d9d96dca9..6a2e9fdfb5 100644
--- a/src/backend/executor/nodeSetResult.c
+++ b/src/backend/executor/nodeSetResult.c
@@ -182,7 +182,7 @@ ExecProjectSRF(SetResultState *node, bool continuing)
}
else
{
- *result = ExecEvalExpr(gstate->arg, econtext, isnull, NULL);
+ *result = ExecEvalExpr(gstate->arg, econtext, isnull);
*isdone = ExprSingleResult;
}
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index 68edcd4567..12115bc541 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -41,12 +41,10 @@
static Datum ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecHashSubPlan(SubPlanState *node,
ExprContext *econtext,
bool *isNull);
@@ -69,15 +67,12 @@ static bool slotNoNulls(TupleTableSlot *slot);
static Datum
ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
SubPlan *subplan = (SubPlan *) node->xprstate.expr;
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Sanity checks */
if (subplan->subLinkType == CTE_SUBLINK)
@@ -128,7 +123,7 @@ ExecHashSubPlan(SubPlanState *node,
* have to set the econtext to use (hack alert!).
*/
node->projLeft->pi_exprContext = econtext;
- slot = ExecProject(node->projLeft, NULL);
+ slot = ExecProject(node->projLeft);
/*
* Note: because we are typically called in a per-tuple context, we have
@@ -285,8 +280,7 @@ ExecScanSubPlan(SubPlanState *node,
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -403,7 +397,7 @@ ExecScanSubPlan(SubPlanState *node,
}
rowresult = ExecEvalExprSwitchContext(node->testexpr, econtext,
- &rownull, NULL);
+ &rownull);
if (subLinkType == ANY_SUBLINK)
{
@@ -572,7 +566,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)
&(prmdata->isnull));
col++;
}
- slot = ExecProject(node->projRight, NULL);
+ slot = ExecProject(node->projRight);
/*
* If result contains any nulls, store separately or not at all.
@@ -985,8 +979,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -1222,8 +1215,7 @@ ExecInitAlternativeSubPlan(AlternativeSubPlan *asplan, PlanState *parent)
static Datum
ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* Just pass control to the active subplan */
SubPlanState *activesp = (SubPlanState *) list_nth(node->subplans,
@@ -1231,8 +1223,5 @@ ExecAlternativeSubPlan(AlternativeSubPlanState *node,
Assert(IsA(activesp, SubPlanState));
- return ExecSubPlan(activesp,
- econtext,
- isNull,
- isDone);
+ return ExecSubPlan(activesp, econtext, isNull);
}
diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c
index a4387da80a..230a96f9d2 100644
--- a/src/backend/executor/nodeSubqueryscan.c
+++ b/src/backend/executor/nodeSubqueryscan.c
@@ -138,8 +138,6 @@ ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags)
*/
subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags);
- subquerystate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize scan tuple type (needed by ExecAssignScanProjectionInfo)
*/
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index e3d3fc3842..13ed886577 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -104,8 +104,7 @@ TidListCreate(TidScanState *tidstate)
itemptr = (ItemPointer)
DatumGetPointer(ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL));
+ &isNull));
if (!isNull &&
ItemPointerIsValid(itemptr) &&
ItemPointerGetBlockNumber(itemptr) < nblocks)
@@ -133,8 +132,7 @@ TidListCreate(TidScanState *tidstate)
exstate = (ExprState *) lsecond(saexstate->fxprstate.args);
arraydatum = ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
continue;
itemarray = DatumGetArrayTypeP(arraydatum);
@@ -469,8 +467,6 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &tidstate->ss.ps);
- tidstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c
index 5b42ca93cf..9883a8b130 100644
--- a/src/backend/executor/nodeValuesscan.c
+++ b/src/backend/executor/nodeValuesscan.c
@@ -140,8 +140,7 @@ ValuesNext(ValuesScanState *node)
values[resind] = ExecEvalExpr(estate,
econtext,
- &isnull[resind],
- NULL);
+ &isnull[resind]);
/*
* We must force any R/W expanded datums to read-only state, in
@@ -272,8 +271,6 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags)
scanstate->exprlists[i++] = (List *) lfirst(vtl);
}
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index 17884d2c44..6ac6b83cdd 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -256,7 +256,7 @@ advance_windowaggregate(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -272,7 +272,7 @@ advance_windowaggregate(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -433,7 +433,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -449,7 +449,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -1584,15 +1584,12 @@ update_frametailpos(WindowObject winobj, TupleTableSlot *slot)
* ExecWindowAgg receives tuples from its outer subplan and
* stores them into a tuplestore, then processes window functions.
* This node doesn't reduce nor qualify any row so the number of
- * returned rows is exactly the same as its outer subplan's result
- * (ignoring the case of SRFs in the targetlist, that is).
+ * returned rows is exactly the same as its outer subplan's result.
* -----------------
*/
TupleTableSlot *
ExecWindowAgg(WindowAggState *winstate)
{
- TupleTableSlot *result;
- ExprDoneCond isDone;
ExprContext *econtext;
int i;
int numfuncs;
@@ -1601,23 +1598,6 @@ ExecWindowAgg(WindowAggState *winstate)
return NULL;
/*
- * Check to see if we're still projecting out tuples from a previous
- * output tuple (because there is a function-returning-set in the
- * projection expressions). If so, try to project another one.
- */
- if (winstate->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- winstate->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* Compute frame offset values, if any, during first call.
*/
if (winstate->all_first)
@@ -1634,8 +1614,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->startOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->startOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1660,8 +1639,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->endOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->endOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1684,7 +1662,6 @@ ExecWindowAgg(WindowAggState *winstate)
winstate->all_first = false;
}
-restart:
if (winstate->buffer == NULL)
{
/* Initialize for first partition and set current row = 0 */
@@ -1776,17 +1753,8 @@ restart:
* evaluated with respect to that row.
*/
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprEndResult)
- {
- /* SRF in tlist returned no rows, so advance to next input tuple */
- goto restart;
- }
-
- winstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
+ return ExecProject(winstate->ss.ps.ps_ProjInfo);
}
/* -----------------
@@ -1896,8 +1864,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&winstate->ss.ps);
ExecAssignProjectionInfo(&winstate->ss.ps, NULL);
- winstate->ss.ps.ps_TupFromTlist = false;
-
/* Set up data for comparing tuples */
if (node->partNumCols > 0)
winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols,
@@ -2090,8 +2056,6 @@ ExecReScanWindowAgg(WindowAggState *node)
ExprContext *econtext = node->ss.ps.ps_ExprContext;
node->all_done = false;
-
- node->ss.ps.ps_TupFromTlist = false;
node->all_first = true;
/* release tuplestore et al */
@@ -2712,7 +2676,7 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2811,7 +2775,7 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2841,5 +2805,5 @@ WinGetFuncArgCurrent(WindowObject winobj, int argno, bool *isnull)
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c
index 73a1a8238a..bdba9e0bfc 100644
--- a/src/backend/executor/nodeWorktablescan.c
+++ b/src/backend/executor/nodeWorktablescan.c
@@ -174,8 +174,6 @@ ExecInitWorkTableScan(WorkTableScan *node, EState *estate, int eflags)
*/
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index a763c7fe24..fa446c799d 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4305,7 +4305,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* Forget it if the function is not SQL-language or has other showstopper
- * properties. (The nargs check is just paranoia.)
+ * properties. (The nargs and retset checks are just paranoia.)
*/
if (funcform->prolang != SQLlanguageId ||
funcform->prosecdef ||
@@ -4687,7 +4687,7 @@ evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
*/
const_val = ExecEvalExprSwitchContext(exprstate,
GetPerTupleExprContext(estate),
- &const_is_null, NULL);
+ &const_is_null);
/* Get info needed about result datatype */
get_typlenbyval(result_type, &resultTypLen, &resultTypByVal);
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index fd009e135e..c4a04cfa95 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -1596,7 +1596,7 @@ operator_predicate_proof(Expr *predicate, Node *clause, bool refute_it)
/* And execute it. */
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
/* Get back to outer memory context */
MemoryContextSwitchTo(oldcontext);
diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c
index 14fa119f07..c2ad440013 100644
--- a/src/backend/utils/adt/domains.c
+++ b/src/backend/utils/adt/domains.c
@@ -179,7 +179,7 @@ domain_check_input(Datum value, bool isnull, DomainIOData *my_extra)
conResult = ExecEvalExprSwitchContext(con->check_expr,
econtext,
- &conIsNull, NULL);
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index dcc5d6287a..e8bce3b806 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -603,7 +603,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
str = NULL;
else
@@ -620,7 +620,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
/* here we can just forget NULL elements immediately */
if (!isnull)
{
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 4e48592798..db930a5661 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -70,8 +70,8 @@
* now it's just a macro invoking the function pointed to by an ExprState
* node. Beware of double evaluation of the ExprState argument!
*/
-#define ExecEvalExpr(expr, econtext, isNull, isDone) \
- ((*(expr)->evalfunc) (expr, econtext, isNull, isDone))
+#define ExecEvalExpr(expr, econtext, isNull) \
+ ((*(expr)->evalfunc) (expr, econtext, isNull))
/* Hook for plugins to get control in ExecutorStart() */
@@ -254,18 +254,17 @@ extern Tuplestorestate *ExecMakeTableFunctionResult(ExprState *funcexpr,
TupleDesc expectedDesc,
bool randomAccess);
extern Datum ExecMakeFunctionResultSet(FuncExprState *fcache,
- ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ ExprContext *econtext,
+ bool *isNull,
+ ExprDoneCond *isDone);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
-extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo,
- ExprDoneCond *isDone);
+extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo);
/*
* prototypes from functions in execScan.c
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 69de3ebbd9..e602f426f6 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -245,7 +245,6 @@ typedef struct ProjectionInfo
List *pi_targetlist;
ExprContext *pi_exprContext;
TupleTableSlot *pi_slot;
- ExprDoneCond *pi_itemIsDone;
bool pi_directMap;
int pi_numSimpleVars;
int *pi_varSlotOffsets;
@@ -586,8 +585,7 @@ typedef struct ExprState ExprState;
typedef Datum (*ExprStateEvalFunc) (ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
struct ExprState
{
@@ -726,13 +724,6 @@ typedef struct FuncExprState
bool setArgsValid;
/*
- * Flag to remember whether we found a set-valued argument to the
- * function. This causes the function result to be a set as well. Valid
- * only when setArgsValid is true or funcResultStore isn't NULL.
- */
- bool setHasSetArg; /* some argument returns a set */
-
- /*
* Flag to remember whether we have registered a shutdown callback for
* this FuncExprState. We do so only if funcResultStore or setArgsValid
* has been set at least once (since all the callback is for is to release
@@ -1075,8 +1066,6 @@ typedef struct PlanState
TupleTableSlot *ps_ResultTupleSlot; /* slot for my result tuples */
ExprContext *ps_ExprContext; /* node's expression-evaluation context */
ProjectionInfo *ps_ProjInfo; /* info for doing tuple projection */
- bool ps_TupFromTlist;/* state flag for processing set-valued
- * functions in targetlist */
} PlanState;
/* ----------------
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index bc7b00199e..b48146a362 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -5606,8 +5606,7 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate,
*/
*result = ExecEvalExpr(expr->expr_simple_state,
econtext,
- isNull,
- NULL);
+ isNull);
/* Assorted cleanup */
expr->expr_simple_in_use = false;
@@ -6272,7 +6271,7 @@ exec_cast_value(PLpgSQL_execstate *estate,
cast_entry->cast_in_use = true;
value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
- isnull, NULL);
+ isnull);
cast_entry->cast_in_use = false;
--
2.11.0.22.g8d7a455.dirty
On Mon, Jan 16, 2017 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
That worked quite well. So we have a few questions, before I clean this
up:
- For now the node is named 'Srf' both internally and in explain - not
sure if we want to make that something longer/easier to understand for
others? Proposals? TargetFunctionScan? SetResult?
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
The operation we're performing here, IIUC, is projection. SetResult
lacks a verb, although Set could be confused with one; someone might
think this is the node that sets a result, whatever that means.
Anyway, I suggest working Project in there somehow. If Project by
itself seems like it's too generic, perhaps ProjectSet or
ProjectSetResult would be suitable.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Jan 16, 2017 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
The operation we're performing here, IIUC, is projection. SetResult
lacks a verb, although Set could be confused with one; someone might
think this is the node that sets a result, whatever that means.
Anyway, I suggest working Project in there somehow. If Project by
itself seems like it's too generic, perhaps ProjectSet or
ProjectSetResult would be suitable.
Andres' patch is already using "SetProjectionPath" for the path struct
type. Maybe make that "ProjectSetPath", giving rise to a "ProjectSet"
plan node?
I'm happy to do a global-search-and-replace while I'm reviewing the
patch, but let's decide on names PDQ.
regards, tom lane
On Tue, Jan 17, 2017 at 12:52 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Jan 16, 2017 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
The operation we're performing here, IIUC, is projection. SetResult
lacks a verb, although Set could be confused with one; someone might
think this is the node that sets a result, whatever that means.
Anyway, I suggest working Project in there somehow. If Project by
itself seems like it's too generic, perhaps ProjectSet or
ProjectSetResult would be suitable.
Andres' patch is already using "SetProjectionPath" for the path struct
type. Maybe make that "ProjectSetPath", giving rise to a "ProjectSet"
plan node?
+1.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
On 2017-01-17 12:52:20 -0500, Tom Lane wrote:
Robert Haas <robertmhaas@gmail.com> writes:
On Mon, Jan 16, 2017 at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Srf" is ugly as can be, and unintelligible. SetResult might be OK.
The operation we're performing here, IIUC, is projection. SetResult
lacks a verb, although Set could be confused with one; someone might
think this is the node that sets a result, whatever that means.
Anyway, I suggest working Project in there somehow. If Project by
itself seems like it's too generic, perhaps ProjectSet or
ProjectSetResult would be suitable.
I'd not have gone for SetResult if we didn't already have Result. I'm
not super happy ending up having Project in ProjectSet but not in the
Result that ends up doing the majority of the projection. But eh, we can
live with it.
Andres' patch is already using "SetProjectionPath" for the path struct
type. Maybe make that "ProjectSetPath", giving rise to a "ProjectSet"
plan node?
WFM.
I'm happy to do a global-search-and-replace while I'm reviewing the
patch, but let's decide on names PDQ.
Yes, let's decide soon please.
Greetings,
Andres
Andres Freund <andres@anarazel.de> writes:
I'd not have gone for SetResult if we didn't already have Result. I'm
not super happy ending up having Project in ProjectSet but not in the
Result that ends up doing the majority of the projection. But eh, we can
live with it.
Using Result for two completely different things is a wart though. If we
had it to do over I think we'd define Result as a scan node that produces
rows from no input, and create a separate Project node for the case of
projecting from input tuples. People are used to seeing Result in EXPLAIN
output, so it's not worth the trouble of changing that IMO, but we don't
have to use it as a model for more node types.
regards, tom lane
On Tue, Jan 17, 2017 at 1:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
I'd not have gone for SetResult if we didn't already have Result. I'm
not super happy ending up having Project in ProjectSet but not in the
Result that ends up doing the majority of the projection. But eh, we can
live with it.
Using Result for two completely different things is a wart though. If we
had it to do over I think we'd define Result as a scan node that produces
rows from no input, and create a separate Project node for the case of
projecting from input tuples. People are used to seeing Result in EXPLAIN
output, so it's not worth the trouble of changing that IMO, but we don't
have to use it as a model for more node types.
+1, although I think changing the existing node would be fine too if
somebody wanted to do the work. It's not worth having that wart
forever just to avoid whatever minor pain-of-adjustment might be
involved.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
On Tue, Jan 17, 2017 at 1:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Using Result for two completely different things is a wart though. If we
had it to do over I think we'd define Result as a scan node that produces
rows from no input, and create a separate Project node for the case of
projecting from input tuples. People are used to seeing Result in EXPLAIN
output, so it's not worth the trouble of changing that IMO, but we don't
have to use it as a model for more node types.
+1, although I think changing the existing node would be fine too if
somebody wanted to do the work. It's not worth having that wart
forever just to avoid whatever minor pain-of-adjustment might be
involved.
Although ... looking closer at Andres' patch, the new node type *is*
channeling Result, in the sense that it might or might not have any input
plan. This probably traces to what I wrote in September:
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a Result with no input. This avoids an
+ * extra Result plan node when doing "SELECT srf()". Depending on what we
+ * decide about the desired plan structure for SRF-expanding nodes, this
+ * optimization might have to go away, and in any case it'll probably look
+ * a good bit different.
I'm not convinced that that optimization is worth preserving, but if we
keep it then ProjectSet isn't le mot juste here, any more than you'd want
to rename Result to Project without changing its existing functionality.
regards, tom lane
On 2017-01-17 13:43:38 -0500, Tom Lane wrote:
Although ... looking closer at Andres' patch, the new node type *is*
channeling Result, in the sense that it might or might not have any input
plan. This probably traces to what I wrote in September:
+ * XXX Possibly-temporary hack: if the subpath is a dummy ResultPath,
+ * don't bother with it, just make a Result with no input. This avoids an
+ * extra Result plan node when doing "SELECT srf()". Depending on what we
+ * decide about the desired plan structure for SRF-expanding nodes, this
+ * optimization might have to go away, and in any case it'll probably look
+ * a good bit different.
I'm not convinced that that optimization is worth preserving, but if we
keep it then ProjectSet isn't le mot juste here, any more than you'd want
to rename Result to Project without changing its existing
functionality.
Right. I'd removed that, and re-added it; primarily because the plans
looked more complex without it. After all, you'd thought it worth adding
that hack ;) I'm happy with removing it again too.
Andres
Andres Freund <andres@anarazel.de> writes:
On 2017-01-17 13:43:38 -0500, Tom Lane wrote:
I'm not convinced that that optimization is worth preserving, but if we
keep it then ProjectSet isn't le mot juste here, any more than you'd want
to rename Result to Project without changing its existing
functionality.
Right. I'd removed that, and re-added it; primarily because the plans
looked more complex without it. After all, you'd thought it worth adding
that hack ;) I'm happy with removing it again too.
Well, it seemed reasonable to do that as long as the only cost was ten or
so lines in create_projection_plan. But if we're contorting not only the
behavior but the very name of the SRF-evaluation plan node type, that's
not negligible cost anymore. So now I'm inclined to take it out.
regards, tom lane
I did a review pass over 0001 and 0002. I think the attached updated
version is committable ... except for one thing. The more I look at it,
the more disturbed I am by the behavioral change shown in rangefuncs.out
--- that's the SRF-in-one-arm-of-CASE issue. (The changes in tsrf.out
are fine and as per agreement.) We touched lightly on that point far
upthread, but didn't really resolve it. What's bothering me is that
we're changing, silently, from a reasonably-intuitive behavior to a
completely-not-intuitive one. Since we got a bug report for the previous
less-than-intuitive behavior for such cases, it's inevitable that we'll
get bug reports for this. I think it'd be far better to throw error for
SRF-inside-a-CASE. If we don't, we certainly need to document this,
and I'm not very sure how to explain it clearly.
Upthread we had put COALESCE in the same bucket, but I think that's less
of a problem, because in typical usages the SRF would be in the first
argument and so users wouldn't be expecting conditional evaluation.
Anyway, I've not done anything about that in the attached. What I did do:
* Merge 0001 and 0002. I appreciate you having separated that for my
review, but it doesn't make any sense to commit the parts of 0001 that
you undid in 0002.
* Rename the node types as per yesterday's discussion.
* Require Project to always have an input plan node.
* Obviously, ExecMakeFunctionResultSet can be greatly simplified now
that it need not deal with hasSetArg cases. I saw you'd left that
for later, which is mostly fine, but I did lobotomize it just enough
to throw an error if it gets a set result from an argument. Without
that, we wouldn't really be testing that the planner splits nested
SRFs correctly.
* This bit in ExecProjectSRF was no good:
+ else if (IsA(gstate->arg, FuncExprState) &&
+ ((FuncExpr *) gstate->arg->expr)->funcretset)
because FuncExprState is used for more node types than just FuncExpr;
in particular this would fail (except perhaps by accident) for a
set-returning OpExpr. I chose to fix it by adding a funcReturnsSet
field to FuncExprState and insisting that ExecInitExpr fill that in
immediately, which it can do easily.
* Minor style and comment improvements; fix a couple small oversights
such as missing outfuncs.c support.
* Update the user documentation (didn't address the CASE issue, though).
regards, tom lane
Attachments:
use-project-set-for-tlist-srfs.patch.gz (application/x-gzip)
V �v��HtW�@J�S��o+:u�O����>
�����������{*�rE%'�T��M;����G_�:�Z5��2_�T�7(��>$�a�|sv|z�;?����F������d����8P�{/��6�j�>dD!j$ ���� ����+jL)|P�����PJ�ul�W�U��/��W���]\�g+=jWz���>�E��d���4�r����cX�������%�O�5���~���sB�c8��~��������� ����l$�E�ny!~�q��a�o����4����g��^+��L6R�f�[EE��9��?q�L���o����� +D��4���jC����*���$���Q�������j����j���91t[����c��m������M&�N�1n������=���vQp-���-p��}���n}g=8
�������!��PK�����A��Ye��[���Y�GQpt�J"p��G�'�d=!�R�*i���)����rF�{���T���p Hi,
On 2017-01-18 08:43:24 -0500, Tom Lane wrote:
I did a review pass over 0001 and 0002. I think the attached updated
version is committable
Cool.
... except for one thing. The more I look at it, the more disturbed I am by the behavioral change shown in rangefuncs.out --- that's the SRF-in-one-arm-of-CASE issue. (The changes in tsrf.out are fine and as per agreement.)
We touched lightly on that point far upthread, but didn't really resolve it. What's bothering me is that we're changing, silently, from a reasonably-intuitive behavior to a completely-not-intuitive one. Since we got a bug report for the previous less-than-intuitive behavior for such cases, it's inevitable that we'll get bug reports for this.
I think it'd be far better to throw error for SRF-inside-a-CASE. If we don't, we certainly need to document this, and I'm not very sure how to explain it clearly.
I'm fine with leaving it as is in the patch, but I'm also fine with
changing things to ERROR. Personally I don't think it matters much, and
we can whack it back and forth as we want later. Thus I'm inclined to
commit it without erroring out; since presumably we'll take some time
deciding on what exactly we want to prohibit.
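To make the question concrete, here's a minimal sketch of the kind of query at issue (hypothetical example; the exact outputs under the old and new evaluation schemes are paraphrased from the discussion above, and the variance between them is exactly the point):

```sql
-- SRF in one arm of a CASE. Historically the SRF was only expanded
-- when its arm was actually taken (so the x = -1 row emitted a plain 0);
-- under the new ProjectSet-based evaluation the SRF is computed
-- separately from the scalar CASE, which can change how many rows
-- each input row produces.
SELECT x,
       CASE WHEN x > 0 THEN generate_series(1, x) ELSE 0 END
FROM (VALUES (2), (-1)) AS t(x);
```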
Anyway, I've not done anything about that in the attached. What I did do:
* Merge 0001 and 0002. I appreciate you having separated that for my
review, but it doesn't make any sense to commit the parts of 0001 that
you undid in 0002.
Right. I was suggesting upthread that we'd merge them before committing.
* Obviously, ExecMakeFunctionResultSet can be greatly simplified now
that it need not deal with hasSetArg cases.
Yea, I've cleaned it up in my 0003, where it would have started to error
out too (without an explicit check), because there's no set-evaluating
function left besides ExecMakeFunctionResultSet.
I saw you'd left that
for later, which is mostly fine, but I did lobotomize it just enough
to throw an error if it gets a set result from an argument. Without
that, we wouldn't really be testing that the planner splits nested
SRFs correctly.
Ok, that makes sense.
* This bit in ExecProjectSRF was no good:
+ else if (IsA(gstate->arg, FuncExprState) &&
+          ((FuncExpr *) gstate->arg->expr)->funcretset
because FuncExprState is used for more node types than just FuncExpr;
in particular this would fail (except perhaps by accident) for a
set-returning OpExpr.
Argh. That should have been FuncExprState->func->fn_retset. Anyway, your
approach works, too.
* Update the user documentation (didn't address the CASE issue, though).
Cool.
Greetings,
Andres
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 08:43:24 -0500, Tom Lane wrote:
... except for one thing. The more I look at it, the more disturbed I am by the behavioral change shown in rangefuncs.out --- that's the SRF-in-one-arm-of-CASE issue.
I'm fine with leaving it as is in the patch, but I'm also fine with
changing things to ERROR. Personally I don't think it matters much, and
we can whack it back and forth as we want later. Thus I'm inclined to
commit it without erroring out; since presumably we'll take some time
deciding on what exactly we want to prohibit.
I agree. If we do decide to throw an error, it would best be done in
parse analysis, and thus would be practically independent of this patch
anyway.
* This bit in ExecProjectSRF was no good:
+ else if (IsA(gstate->arg, FuncExprState) &&
+          ((FuncExpr *) gstate->arg->expr)->funcretset
Argh. That should have been FuncExprState->func->fn_retset.
Nope; that was my first thought as well, but fn_retset isn't valid if
init_fcache hasn't been run yet, which it won't have been the first time
through.
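The parse-time-flag vs. runtime-cache distinction is easy to trip over. A stand-alone sketch with stand-in structs (hypothetical field layout, trimmed to just the two flags in question; not the actual PostgreSQL definitions):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in shapes for the structs under discussion (hypothetical,
 * trimmed to the relevant flags). */
typedef struct FuncExpr
{
    bool        funcretset; /* set by the parser; always valid */
} FuncExpr;

typedef struct FmgrInfo
{
    bool        fn_retset;  /* only valid once init_fcache has run */
} FmgrInfo;

typedef struct FuncExprState
{
    FuncExpr   *expr;       /* underlying parse node */
    FmgrInfo   *func;       /* NULL until init_fcache has run */
} FuncExprState;

/*
 * Asking "does this expression return a set?" before the first
 * evaluation must rely on the parse-time flag: the runtime cache may
 * not have been initialized yet, so func->fn_retset would read an
 * uninitialized field (or, as here, dereference NULL).
 */
static bool
expr_returns_set(const FuncExprState *state)
{
    if (state->func != NULL)
        return state->func->fn_retset;  /* cache valid: use it */
    return state->expr->funcretset;     /* first time through */
}
```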
So I think we can push this patch now and get on with the downstream
patches. Do you want to do the honors, or shall I?
regards, tom lane
Hi,
On 2017-01-18 14:24:12 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 08:43:24 -0500, Tom Lane wrote:
... except for one thing. The more I look at it, the more disturbed I am by the behavioral change shown in rangefuncs.out --- that's the SRF-in-one-arm-of-CASE issue.
I'm fine with leaving it as is in the patch, but I'm also fine with
changing things to ERROR. Personally I don't think it matters much, and
we can whack it back and forth as we want later. Thus I'm inclined to
commit it without erroring out; since presumably we'll take some time
deciding on what exactly we want to prohibit.
I agree. If we do decide to throw an error, it would best be done in
parse analysis, and thus would be practically independent of this patch
anyway.
Cool, agreed then.
* This bit in ExecProjectSRF was no good:
+ else if (IsA(gstate->arg, FuncExprState) &&
+          ((FuncExpr *) gstate->arg->expr)->funcretset
Argh. That should have been FuncExprState->func->fn_retset.
Nope; that was my first thought as well, but fn_retset isn't valid if
init_fcache hasn't been run yet, which it won't have been the first time
through.
Righty-O :(
So I think we can push this patch now and get on with the downstream
patches. Do you want to do the honors, or shall I?
Whatever you prefer - either way, I'll go on to rebasing the cleanup
patch afterwards (whose existence should probably be mentioned in the
commit message).
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 14:24:12 -0500, Tom Lane wrote:
So I think we can push this patch now and get on with the downstream
patches. Do you want to do the honors, or shall I?
Whatever you prefer - either way, I'll go on to rebasing the cleanup
patch afterwards (whose existence should probably be mentioned in the
commit message).
OK, I can do it --- I have the revised patch already queued up in git
stash, so it's easy. Need to write a commit msg though. Did you have
a draft for that?
regards, tom lane
Hi,
On January 18, 2017 12:00:12 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 14:24:12 -0500, Tom Lane wrote:
So I think we can push this patch now and get on with the downstream
patches. Do you want to do the honors, or shall I?
Whatever you prefer - either way, I'll go on to rebasing the cleanup
patch afterwards (whose existance should probably be mentioned in the
commit message).
OK, I can do it --- I have the revised patch already queued up in git
stash, so it's easy. Need to write a commit msg though. Did you have
a draft for that?
Yea, have something lying around. Let me push it then when I get back from lunch?
Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Andres Freund <andres@anarazel.de> writes:
Yea, have something lying around. Let me push it then when I get back from lunch?
Sure, no sweat.
regards, tom lane
Hi,
On 2017-01-18 15:24:32 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
Yea, have something lying around. Let me push it then when I get back from lunch?
Sure, no sweat.
Pushed. Yay!
There's one sgml comment you'd added:
"Furthermore, nested set-returning functions did not work at all."
I'm not quite sure what you're referring to there - it was previously
allowed to have one set argument to an SRF:
postgres[28758][1]=# SELECT generate_series(1,generate_series(1,5));
┌─────────────────┐
│ generate_series │
├─────────────────┤
│ 1 │
│ 1 │
│ 2 │
│ 1 │
│ 2 │
│ 3 │
Am I misunderstanding what you meant? I left it in what I committed,
but we probably should clear up the language there.
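If I'm reading the planner's SRF-splitting correctly, the nested form now behaves as if the inner SRF feeds the outer one a row at a time, i.e. roughly this LATERAL spelling (a sketch for illustration, not the actual plan the planner builds):

```sql
-- Roughly equivalent to SELECT generate_series(1, generate_series(1,5)):
-- for each inner value g1 in 1..5, expand generate_series(1, g1),
-- yielding 1; 1,2; 1,2,3; and so on.
SELECT g2 AS generate_series
FROM generate_series(1, 5) AS g1,
     LATERAL generate_series(1, g1) AS g2;
```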
Working on rebasing the cleanup patch now. Interested in reviewing
that? Otherwise I think I'll just push the rebased version of what I'd
posted before, after making another pass through it.
- Andres
Andres Freund <andres@anarazel.de> writes:
There's one sgml comment you'd added:
"Furthermore, nested set-returning functions did not work at all."
I'm not quite sure what you're referring to there - it was previously
allowed to have one set argument to an SRF:
Ooops ... that was composed too hastily, evidently. Will fix.
I'll try to write something about the SRF-in-CASE issue too. Seeing
whether we can document that adequately seems like an important part
of making the decision about whether we need to block it.
Working on rebasing the cleanup patch now. Interested in reviewing
that? Otherwise I think I'll just push the rebased version of what I'd
posted before, after making another pass through it.
I have not actually looked at 0003 at all yet. So yeah, please post
for review after you're done rebasing.
regards, tom lane
On 2017-01-18 16:56:46 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I have not actually looked at 0003 at all yet. So yeah, please post
for review after you're done rebasing.
Here's a rebased and lightly massaged version. I'm vanishing in a
meeting for a bit, thought it'd be more useful to get it now rather than
later.
(I also noticed the previous patch should have had a catversion bump :(,
will do after the meeting).
- Andres
Attachments:
0001-Remove-obsoleted-code-relating-to-targetlist-SRF-eva.patch (text/x-patch; charset=us-ascii)
From 5a0bdc9543291c051c2dbab26492f6e0320e8f82 Mon Sep 17 00:00:00 2001
From: Andres Freund <andres@anarazel.de>
Date: Wed, 18 Jan 2017 13:51:47 -0800
Subject: [PATCH] Remove obsoleted code relating to targetlist SRF evaluation.
Author: Andres Freund
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
---
src/backend/catalog/index.c | 3 +-
src/backend/catalog/partition.c | 5 +-
src/backend/commands/copy.c | 2 +-
src/backend/commands/prepare.c | 3 +-
src/backend/commands/tablecmds.c | 3 +-
src/backend/commands/typecmds.c | 2 +-
src/backend/executor/execAmi.c | 44 +-
src/backend/executor/execQual.c | 919 ++++++++----------------------
src/backend/executor/execScan.c | 30 +-
src/backend/executor/execUtils.c | 6 -
src/backend/executor/nodeAgg.c | 52 +-
src/backend/executor/nodeBitmapHeapscan.c | 2 -
src/backend/executor/nodeCtescan.c | 2 -
src/backend/executor/nodeCustom.c | 2 -
src/backend/executor/nodeForeignscan.c | 2 -
src/backend/executor/nodeFunctionscan.c | 2 -
src/backend/executor/nodeGather.c | 25 +-
src/backend/executor/nodeGroup.c | 42 +-
src/backend/executor/nodeHash.c | 2 +-
src/backend/executor/nodeHashjoin.c | 58 +-
src/backend/executor/nodeIndexonlyscan.c | 2 -
src/backend/executor/nodeIndexscan.c | 11 +-
src/backend/executor/nodeLimit.c | 19 +-
src/backend/executor/nodeMergejoin.c | 59 +-
src/backend/executor/nodeModifyTable.c | 4 +-
src/backend/executor/nodeNestloop.c | 41 +-
src/backend/executor/nodeProjectSet.c | 2 +-
src/backend/executor/nodeResult.c | 33 +-
src/backend/executor/nodeSamplescan.c | 8 +-
src/backend/executor/nodeSeqscan.c | 2 -
src/backend/executor/nodeSubplan.c | 31 +-
src/backend/executor/nodeSubqueryscan.c | 2 -
src/backend/executor/nodeTidscan.c | 8 +-
src/backend/executor/nodeValuesscan.c | 5 +-
src/backend/executor/nodeWindowAgg.c | 58 +-
src/backend/executor/nodeWorktablescan.c | 2 -
src/backend/optimizer/util/clauses.c | 4 +-
src/backend/optimizer/util/predtest.c | 2 +-
src/backend/utils/adt/domains.c | 2 +-
src/backend/utils/adt/xml.c | 4 +-
src/include/executor/executor.h | 9 +-
src/include/nodes/execnodes.h | 16 +-
src/pl/plpgsql/src/pl_exec.c | 5 +-
43 files changed, 346 insertions(+), 1189 deletions(-)
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index cac0cbf7d4..26cbc0e06a 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1805,8 +1805,7 @@ FormIndexDatum(IndexInfo *indexInfo,
elog(ERROR, "wrong number of index expressions");
iDatum = ExecEvalExprSwitchContext((ExprState *) lfirst(indexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
indexpr_item = lnext(indexpr_item);
}
values[i] = iDatum;
diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index 874e69d8d6..6dec75b59e 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -1339,7 +1339,7 @@ get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec)
test_exprstate = ExecInitExpr(test_expr, NULL);
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
MemoryContextSwitchTo(oldcxt);
FreeExecutorState(estate);
@@ -1610,8 +1610,7 @@ FormPartitionKeyDatum(PartitionDispatch pd,
elog(ERROR, "wrong number of partition key expressions");
datum = ExecEvalExprSwitchContext((ExprState *) lfirst(partexpr_item),
GetPerTupleExprContext(estate),
- &isNull,
- NULL);
+ &isNull);
partexpr_item = lnext(partexpr_item);
}
values[i] = datum;
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 1fd2162794..ab666b9bdd 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -3395,7 +3395,7 @@ NextCopyFrom(CopyState cstate, ExprContext *econtext,
Assert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);
values[defmap[i]] = ExecEvalExpr(defexprs[i], econtext,
- &nulls[defmap[i]], NULL);
+ &nulls[defmap[i]]);
}
return true;
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 1ff41661a5..7d7e3daf1e 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -413,8 +413,7 @@ EvaluateParams(PreparedStatement *pstmt, List *params,
prm->pflags = PARAM_FLAG_CONST;
prm->value = ExecEvalExprSwitchContext(n,
GetPerTupleExprContext(estate),
- &prm->isnull,
- NULL);
+ &prm->isnull);
i++;
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index e633a50dd2..ae92b2c1b7 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4461,8 +4461,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
values[ex->attnum - 1] = ExecEvalExpr(ex->exprstate,
econtext,
- &isnull[ex->attnum - 1],
- NULL);
+ &isnull[ex->attnum - 1]);
}
/*
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 3ff6cbca56..4c33d55484 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2735,7 +2735,7 @@ validateDomainConstraint(Oid domainoid, char *ccbin)
conResult = ExecEvalExprSwitchContext(exprstate,
econtext,
- &isNull, NULL);
+ &isNull);
if (!isNull && !DatumGetBool(conResult))
{
diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c
index b52cfaa41f..1ca4bcb117 100644
--- a/src/backend/executor/execAmi.c
+++ b/src/backend/executor/execAmi.c
@@ -59,7 +59,6 @@
#include "utils/syscache.h"
-static bool TargetListSupportsBackwardScan(List *targetlist);
static bool IndexSupportsBackwardScan(Oid indexid);
@@ -120,7 +119,7 @@ ExecReScan(PlanState *node)
UpdateChangedParamSet(node->righttree, node->chgParam);
}
- /* Shut down any SRFs in the plan node's targetlist */
+ /* Call expression callbacks */
if (node->ps_ExprContext)
ReScanExprContext(node->ps_ExprContext);
@@ -460,8 +459,7 @@ ExecSupportsBackwardScan(Plan *node)
{
case T_Result:
if (outerPlan(node) != NULL)
- return ExecSupportsBackwardScan(outerPlan(node)) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(outerPlan(node));
else
return false;
@@ -478,13 +476,6 @@ ExecSupportsBackwardScan(Plan *node)
return true;
}
- case T_SeqScan:
- case T_TidScan:
- case T_FunctionScan:
- case T_ValuesScan:
- case T_CteScan:
- return TargetListSupportsBackwardScan(node->targetlist);
-
case T_SampleScan:
/* Simplify life for tablesample methods by disallowing this */
return false;
@@ -493,35 +484,34 @@ ExecSupportsBackwardScan(Plan *node)
return false;
case T_IndexScan:
- return IndexSupportsBackwardScan(((IndexScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexScan *) node)->indexid);
case T_IndexOnlyScan:
- return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return IndexSupportsBackwardScan(((IndexOnlyScan *) node)->indexid);
case T_SubqueryScan:
- return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan) &&
- TargetListSupportsBackwardScan(node->targetlist);
+ return ExecSupportsBackwardScan(((SubqueryScan *) node)->subplan);
case T_CustomScan:
{
uint32 flags = ((CustomScan *) node)->flags;
- if ((flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN) &&
- TargetListSupportsBackwardScan(node->targetlist))
+ if (flags & CUSTOMPATH_SUPPORT_BACKWARD_SCAN)
return true;
}
return false;
+ case T_SeqScan:
+ case T_TidScan:
+ case T_FunctionScan:
+ case T_ValuesScan:
+ case T_CteScan:
case T_Material:
case T_Sort:
- /* these don't evaluate tlist */
return true;
case T_LockRows:
case T_Limit:
- /* these don't evaluate tlist */
return ExecSupportsBackwardScan(outerPlan(node));
default:
@@ -530,18 +520,6 @@ ExecSupportsBackwardScan(Plan *node)
}
/*
- * If the tlist contains set-returning functions, we can't support backward
- * scan, because the TupFromTlist code is direction-ignorant.
- */
-static bool
-TargetListSupportsBackwardScan(List *targetlist)
-{
- if (expression_returns_set((Node *) targetlist))
- return false;
- return true;
-}
-
-/*
* An IndexScan or IndexOnlyScan node supports backward scan only if the
* index's AM does.
*/
diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index eed7e95c75..88abf596e1 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -64,40 +64,40 @@
/* static function decls */
static Datum ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static bool isAssignmentIndirectionExpr(ExprState *exprstate);
static Datum ExecEvalAggref(AggrefExprState *aggref,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWindowFunc(WindowFuncExprState *wfunc,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static void init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
MemoryContext fcacheCxt, bool allowSRF, bool needDescForSRF);
static void ShutdownFuncExpr(Datum arg);
static TupleDesc get_cached_rowtype(Oid type_id, int32 typmod,
TupleDesc *cache_field, ExprContext *econtext);
static void ShutdownTupleDescRef(Datum arg);
-static ExprDoneCond ExecEvalFuncArgs(FunctionCallInfo fcinfo,
+static void ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList, ExprContext *econtext);
static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
ExprContext *econtext,
@@ -106,85 +106,85 @@ static void ExecPrepareTuplestoreResult(FuncExprState *fcache,
static void tupledesc_match(TupleDesc dst_tupdesc, TupleDesc src_tupdesc);
static Datum ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFunc(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOper(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalDistinct(FuncExprState *fcache, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArray(ArrayExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoalesce(CoalesceExprState *coalesceExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalMinMax(MinMaxExprState *minmaxExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomain(CoerceToDomainState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
/* ----------------------------------------------------------------
@@ -195,8 +195,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* Each of the following routines having the signature
* Datum ExecEvalFoo(ExprState *expression,
* ExprContext *econtext,
- * bool *isNull,
- * ExprDoneCond *isDone);
+ * bool *isNull);
* is responsible for evaluating one type or subtype of ExprState node.
* They are normally called via the ExecEvalExpr macro, which makes use of
* the function pointer set up when the ExprState node was built by
@@ -220,22 +219,6 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
* return value: Datum value of result
* *isNull: set to TRUE if result is NULL (actual return value is
* meaningless if so); set to FALSE if non-null result
- * *isDone: set to indicator of set-result status
- *
- * A caller that can only accept a singleton (non-set) result should pass
- * NULL for isDone; if the expression computes a set result then an error
- * will be reported via ereport. If the caller does pass an isDone pointer
- * then *isDone is set to one of these three states:
- * ExprSingleResult singleton result (not a set)
- * ExprMultipleResult return value is one element of a set
- * ExprEndResult there are no more elements in the set
- * When ExprMultipleResult is returned, the caller should invoke
- * ExecEvalExpr() repeatedly until ExprEndResult is returned. ExprEndResult
- * is returned after the last real set element. For convenience isNull will
- * always be set TRUE when ExprEndResult is returned, but this should not be
- * taken as indicating a NULL element of the set. Note that these return
- * conventions allow us to distinguish among a singleton NULL, a NULL element
- * of a set, and an empty set.
*
* The caller should already have switched into the temporary memory
* context econtext->ecxt_per_tuple_memory. The convenience entry point
@@ -260,8 +243,7 @@ static Datum ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
static Datum
ExecEvalArrayRef(ArrayRefExprState *astate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayRef *arrayRef = (ArrayRef *) astate->xprstate.expr;
Datum array_source;
@@ -278,8 +260,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
array_source = ExecEvalExpr(astate->refexpr,
econtext,
- isNull,
- isDone);
+ isNull);
/*
* If refexpr yields NULL, and it's a fetch, then result is NULL. In the
@@ -287,8 +268,6 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
if (*isNull)
{
- if (isDone && *isDone == ExprEndResult)
- return (Datum) NULL; /* end of set result */
if (!isAssignment)
return (Datum) NULL;
}
@@ -314,8 +293,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -350,8 +328,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate,
econtext,
- &eisnull,
- NULL));
+ &eisnull));
/* If any index expr yields NULL, result is NULL or error */
if (eisnull)
{
@@ -438,8 +415,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate,
*/
sourceData = ExecEvalExpr(astate->refassgnexpr,
econtext,
- &eisnull,
- NULL);
+ &eisnull);
econtext->caseValue_datum = save_datum;
econtext->caseValue_isNull = save_isNull;
@@ -542,11 +518,8 @@ isAssignmentIndirectionExpr(ExprState *exprstate)
*/
static Datum
ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no aggregates in this expression context");
@@ -563,11 +536,8 @@ ExecEvalAggref(AggrefExprState *aggref, ExprContext *econtext,
*/
static Datum
ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
-
if (econtext->ecxt_aggvalues == NULL) /* safety check */
elog(ERROR, "no window functions in this expression context");
@@ -588,15 +558,12 @@ ExecEvalWindowFunc(WindowFuncExprState *wfunc, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -677,15 +644,12 @@ ExecEvalScalarVar(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) exprstate->expr;
TupleTableSlot *slot;
AttrNumber attnum;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Get the input slot and attribute number we want */
switch (variable->varno)
{
@@ -725,7 +689,7 @@ ExecEvalScalarVarFast(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -733,9 +697,6 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
MemoryContext oldcontext;
bool needslow = false;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* This was checked by ExecInitExpr */
Assert(variable->varattno == InvalidAttrNumber);
@@ -941,7 +902,7 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
/* Fetch the value */
return (*wrvstate->xprstate.evalfunc) ((ExprState *) wrvstate, econtext,
- isNull, isDone);
+ isNull);
}
/* ----------------------------------------------------------------
@@ -952,14 +913,12 @@ ExecEvalWholeRowVar(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
HeapTupleHeader dtuple;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1008,7 +967,7 @@ ExecEvalWholeRowFast(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Var *variable = (Var *) wrvstate->xprstate.expr;
TupleTableSlot *slot;
@@ -1018,8 +977,6 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
HeapTupleHeader dtuple;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/* Get the input slot we want */
@@ -1097,13 +1054,10 @@ ExecEvalWholeRowSlow(WholeRowVarExprState *wrvstate, ExprContext *econtext,
*/
static Datum
ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Const *con = (Const *) exprstate->expr;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = con->constisnull;
return con->constvalue;
}
@@ -1116,15 +1070,12 @@ ExecEvalConst(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamExecData *prm;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXEC params (internal executor parameters) are stored in the
* ecxt_param_exec_vals array, and can be accessed by array index.
@@ -1149,15 +1100,12 @@ ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
*/
static Datum
ExecEvalParamExtern(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Param *expression = (Param *) exprstate->expr;
int thisParamId = expression->paramid;
ParamListInfo paramInfo = econtext->ecxt_param_list_info;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* PARAM_EXTERN parameters must be sought in ecxt_param_list_info.
*/
@@ -1421,7 +1369,6 @@ init_fcache(Oid foid, Oid input_collation, FuncExprState *fcache,
/* Initialize additional state */
fcache->funcResultStore = NULL;
fcache->funcResultSlot = NULL;
- fcache->setArgsValid = false;
fcache->shutdown_reg = false;
}
@@ -1508,47 +1455,26 @@ ShutdownTupleDescRef(Datum arg)
/*
* Evaluate arguments for a function.
*/
-static ExprDoneCond
+static void
ExecEvalFuncArgs(FunctionCallInfo fcinfo,
List *argList,
ExprContext *econtext)
{
- ExprDoneCond argIsDone;
int i;
ListCell *arg;
- argIsDone = ExprSingleResult; /* default assumption */
-
i = 0;
foreach(arg, argList)
{
ExprState *argstate = (ExprState *) lfirst(arg);
- ExprDoneCond thisArgIsDone;
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- &thisArgIsDone);
-
- if (thisArgIsDone != ExprSingleResult)
- {
- /*
- * We allow only one argument to have a set value; we'd need much
- * more complexity to keep track of multiple set arguments (cf.
- * ExecTargetList) and it doesn't seem worth it.
- */
- if (argIsDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("functions and operators can take at most one set argument")));
- argIsDone = thisArgIsDone;
- }
+ &fcinfo->argnull[i]);
i++;
}
Assert(i == fcinfo->nargs);
-
- return argIsDone;
}
/*
@@ -1695,9 +1621,10 @@ ExecMakeFunctionResultSet(FuncExprState *fcache,
FunctionCallInfo fcinfo;
PgStat_FunctionCallUsage fcusage;
ReturnSetInfo rsinfo; /* for functions returning sets */
- ExprDoneCond argDone;
- bool hasSetArg;
int i;
+ bool callit;
+
+ Assert(isDone);
restart:
@@ -1736,7 +1663,6 @@ restart:
*/
if (fcache->funcResultStore)
{
- Assert(isDone); /* it was provided before ... */
if (tuplestore_gettupleslot(fcache->funcResultStore, true, false,
fcache->funcResultSlot))
{
@@ -1756,15 +1682,9 @@ restart:
/* Exhausted the tuplestore, so clean up */
tuplestore_end(fcache->funcResultStore);
fcache->funcResultStore = NULL;
- /* We are done unless there was a set-valued argument */
- if (!fcache->setHasSetArg)
- {
- *isDone = ExprEndResult;
- *isNull = true;
- return (Datum) 0;
- }
- /* If there was, continue evaluating the argument values */
- Assert(!fcache->setArgsValid);
+ *isDone = ExprEndResult;
+ *isNull = true;
+ return (Datum) 0;
}
/*
@@ -1776,233 +1696,119 @@ restart:
fcinfo = &fcache->fcinfo_data;
arguments = fcache->args;
if (!fcache->setArgsValid)
- {
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- hasSetArg = false;
- }
+ ExecEvalFuncArgs(fcinfo, arguments, econtext);
else
- {
- /* Re-use callinfo from previous evaluation */
- hasSetArg = fcache->setHasSetArg;
/* Reset flag (we may set it again below) */
fcache->setArgsValid = false;
- }
+
+ /* This code path is used only for set-returning functions. */
+ Assert(fcache->func.fn_retset);
/*
* Now call the function, passing the evaluated parameter values.
*/
- if (fcache->func.fn_retset || hasSetArg)
+
+ /* Prepare a resultinfo node for communication. */
+ if (fcache->func.fn_retset)
+ fcinfo->resultinfo = (Node *) &rsinfo;
+ rsinfo.type = T_ReturnSetInfo;
+ rsinfo.econtext = econtext;
+ rsinfo.expectedDesc = fcache->funcResultDesc;
+ rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
+ /* note we do not set SFRM_Materialize_Random or _Preferred */
+ rsinfo.returnMode = SFRM_ValuePerCall;
+ /* isDone is filled below */
+ rsinfo.setResult = NULL;
+ rsinfo.setDesc = NULL;
+
+ /*
+ * If function is strict, and there are any NULL arguments, skip
+ * calling the function.
+ */
+ callit = true;
+ if (fcache->func.fn_strict)
{
- /*
- * We need to return a set result. Complain if caller not ready to
- * accept one.
- */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
- /*
- * Prepare a resultinfo node for communication. If the function
- * doesn't itself return set, we don't pass the resultinfo to the
- * function, but we need to fill it in anyway for internal use.
- */
- if (fcache->func.fn_retset)
- fcinfo->resultinfo = (Node *) &rsinfo;
- rsinfo.type = T_ReturnSetInfo;
- rsinfo.econtext = econtext;
- rsinfo.expectedDesc = fcache->funcResultDesc;
- rsinfo.allowedModes = (int) (SFRM_ValuePerCall | SFRM_Materialize);
- /* note we do not set SFRM_Materialize_Random or _Preferred */
- rsinfo.returnMode = SFRM_ValuePerCall;
- /* isDone is filled below */
- rsinfo.setResult = NULL;
- rsinfo.setDesc = NULL;
-
- /*
- * This loop handles the situation where we have both a set argument
- * and a set-valued function. Once we have exhausted the function's
- * value(s) for a particular argument value, we have to get the next
- * argument value and start the function over again. We might have to
- * do it more than once, if the function produces an empty result set
- * for a particular input value.
- */
- for (;;)
+ for (i = 0; i < fcinfo->nargs; i++)
{
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function (at least for this set of args).
- */
- bool callit = true;
-
- if (fcache->func.fn_strict)
+ if (fcinfo->argnull[i])
{
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- callit = false;
- break;
- }
- }
- }
-
- if (callit)
- {
- pgstat_init_function_usage(fcinfo, &fcusage);
-
- fcinfo->isnull = false;
- rsinfo.isDone = ExprSingleResult;
- result = FunctionCallInvoke(fcinfo);
- *isNull = fcinfo->isnull;
- *isDone = rsinfo.isDone;
-
- pgstat_end_function_usage(&fcusage,
- rsinfo.isDone != ExprMultipleResult);
- }
- else if (fcache->func.fn_retset)
- {
- /* for a strict SRF, result for NULL is an empty set */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprEndResult;
- }
- else
- {
- /* for a strict non-SRF, result for NULL is a NULL */
- result = (Datum) 0;
- *isNull = true;
- *isDone = ExprSingleResult;
- }
-
- /* Which protocol does function want to use? */
- if (rsinfo.returnMode == SFRM_ValuePerCall)
- {
- if (*isDone != ExprEndResult)
- {
- /*
- * Got a result from current argument. If function itself
- * returns set, save the current argument values to re-use
- * on the next call.
- */
- if (fcache->func.fn_retset &&
- *isDone == ExprMultipleResult)
- {
- fcache->setHasSetArg = hasSetArg;
- fcache->setArgsValid = true;
- /* Register cleanup callback if we didn't already */
- if (!fcache->shutdown_reg)
- {
- RegisterExprContextCallback(econtext,
- ShutdownFuncExpr,
- PointerGetDatum(fcache));
- fcache->shutdown_reg = true;
- }
- }
-
- /*
- * Make sure we say we are returning a set, even if the
- * function itself doesn't return sets.
- */
- if (hasSetArg)
- *isDone = ExprMultipleResult;
- break;
- }
- }
- else if (rsinfo.returnMode == SFRM_Materialize)
- {
- /* check we're on the same page as the function author */
- if (rsinfo.isDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("table-function protocol for materialize mode was not followed")));
- if (rsinfo.setResult != NULL)
- {
- /* prepare to return values from the tuplestore */
- ExecPrepareTuplestoreResult(fcache, econtext,
- rsinfo.setResult,
- rsinfo.setDesc);
- /* remember whether we had set arguments */
- fcache->setHasSetArg = hasSetArg;
- /* loop back to top to start returning from tuplestore */
- goto restart;
- }
- /* if setResult was left null, treat it as empty set */
- *isDone = ExprEndResult;
- *isNull = true;
- result = (Datum) 0;
- }
- else
- ereport(ERROR,
- (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
- errmsg("unrecognized table-function returnMode: %d",
- (int) rsinfo.returnMode)));
-
- /* Else, done with this argument */
- if (!hasSetArg)
- break; /* input not a set, so done */
-
- /* Re-eval args to get the next element of the input set */
- argDone = ExecEvalFuncArgs(fcinfo, arguments, econtext);
-
- if (argDone != ExprMultipleResult)
- {
- /* End of argument set, so we're done. */
- *isNull = true;
- *isDone = ExprEndResult;
- result = (Datum) 0;
+ callit = false;
break;
}
-
- /*
- * If we reach here, loop around to run the function on the new
- * argument.
- */
}
}
- else
+
+ if (callit)
{
- /*
- * Non-set case: much easier.
- *
- * In common cases, this code path is unreachable because we'd have
- * selected ExecMakeFunctionResultNoSets instead. However, it's
- * possible to get here if an argument sometimes produces set results
- * and sometimes scalar results. For example, a CASE expression might
- * call a set-returning function in only some of its arms.
- */
- if (isDone)
- *isDone = ExprSingleResult;
-
- /*
- * If function is strict, and there are any NULL arguments, skip
- * calling the function and return NULL.
- */
- if (fcache->func.fn_strict)
- {
- for (i = 0; i < fcinfo->nargs; i++)
- {
- if (fcinfo->argnull[i])
- {
- *isNull = true;
- return (Datum) 0;
- }
- }
- }
-
pgstat_init_function_usage(fcinfo, &fcusage);
fcinfo->isnull = false;
+ rsinfo.isDone = ExprSingleResult;
result = FunctionCallInvoke(fcinfo);
*isNull = fcinfo->isnull;
+ *isDone = rsinfo.isDone;
- pgstat_end_function_usage(&fcusage, true);
+ pgstat_end_function_usage(&fcusage,
+ rsinfo.isDone != ExprMultipleResult);
+ }
+ else
+ {
+ /* for a strict SRF, result for NULL is an empty set */
+ result = (Datum) 0;
+ *isNull = true;
+ *isDone = ExprEndResult;
}
+ /* Which protocol does function want to use? */
+ if (rsinfo.returnMode == SFRM_ValuePerCall)
+ {
+ if (*isDone != ExprEndResult)
+ {
+ /*
+ * Got a result from current argument. Save the current
+ * argument values to re-use on the next call.
+ */
+ if (fcache->func.fn_retset &&
+ *isDone == ExprMultipleResult)
+ {
+ fcache->setArgsValid = true;
+ /* Register cleanup callback if we didn't already */
+ if (!fcache->shutdown_reg)
+ {
+ RegisterExprContextCallback(econtext,
+ ShutdownFuncExpr,
+ PointerGetDatum(fcache));
+ fcache->shutdown_reg = true;
+ }
+ }
+ }
+ }
+ else if (rsinfo.returnMode == SFRM_Materialize)
+ {
+ /* check we're on the same page as the function author */
+ if (rsinfo.isDone != ExprSingleResult)
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("table-function protocol for materialize mode was not followed")));
+ if (rsinfo.setResult != NULL)
+ {
+ /* prepare to return values from the tuplestore */
+ ExecPrepareTuplestoreResult(fcache, econtext,
+ rsinfo.setResult,
+ rsinfo.setDesc);
+ /* loop back to top to start returning from tuplestore */
+ goto restart;
+ }
+ /* if setResult was left null, treat it as empty set */
+ *isDone = ExprEndResult;
+ *isNull = true;
+ result = (Datum) 0;
+ }
+ else
+ ereport(ERROR,
+ (errcode(ERRCODE_E_R_I_E_SRF_PROTOCOL_VIOLATED),
+ errmsg("unrecognized table-function returnMode: %d",
+ (int) rsinfo.returnMode)));
return result;
}
@@ -2015,8 +1821,7 @@ restart:
static Datum
ExecMakeFunctionResultNoSets(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
Datum result;
@@ -2027,9 +1832,6 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
/* Guard against stack overflow due to overly complex expressions */
check_stack_depth();
- if (isDone)
- *isDone = ExprSingleResult;
-
/* inlined, simplified version of ExecEvalFuncArgs */
fcinfo = &fcache->fcinfo_data;
i = 0;
@@ -2039,8 +1841,7 @@ ExecMakeFunctionResultNoSets(FuncExprState *fcache,
fcinfo->arg[i] = ExecEvalExpr(argstate,
econtext,
- &fcinfo->argnull[i],
- NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -2137,7 +1938,6 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
IsA(funcexpr->expr, FuncExpr))
{
FuncExprState *fcache = (FuncExprState *) funcexpr;
- ExprDoneCond argDone;
/*
* This path is similar to ExecMakeFunctionResultSet.
@@ -2172,15 +1972,9 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
*/
MemoryContextReset(argContext);
oldcontext = MemoryContextSwitchTo(argContext);
- argDone = ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
+ ExecEvalFuncArgs(&fcinfo, fcache->args, econtext);
MemoryContextSwitchTo(oldcontext);
- /* We don't allow sets in the arguments of the table function */
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
-
/*
* If function is strict, and there are any NULL arguments, skip
* calling the function and act like it returned NULL (or an empty
@@ -2240,8 +2034,8 @@ ExecMakeTableFunctionResult(ExprState *funcexpr,
}
else
{
- result = ExecEvalExpr(funcexpr, econtext,
- &fcinfo.isnull, &rsinfo.isDone);
+ result = ExecEvalExpr(funcexpr, econtext, &fcinfo.isnull);
+ rsinfo.isDone = ExprSingleResult;
}
/* Which protocol does function want to use? */
@@ -2435,8 +2229,7 @@ no_function_result:
static Datum
ExecEvalFunc(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
FuncExpr *func = (FuncExpr *) fcache->xprstate.expr;
@@ -2447,7 +2240,7 @@ ExecEvalFunc(FuncExprState *fcache,
/* Change the evalfunc pointer to save a few cycles in additional calls */
fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2457,8 +2250,7 @@ ExecEvalFunc(FuncExprState *fcache,
static Datum
ExecEvalOper(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* This is called only the first time through */
OpExpr *op = (OpExpr *) fcache->xprstate.expr;
@@ -2469,7 +2261,7 @@ ExecEvalOper(FuncExprState *fcache,
/* Change the evalfunc pointer to save a few cycles in additional calls */
fcache->xprstate.evalfunc = (ExprStateEvalFunc) ExecMakeFunctionResultNoSets;
- return ExecMakeFunctionResultNoSets(fcache, econtext, isNull, isDone);
+ return ExecMakeFunctionResultNoSets(fcache, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -2486,17 +2278,13 @@ ExecEvalOper(FuncExprState *fcache,
static Datum
ExecEvalDistinct(FuncExprState *fcache,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2513,11 +2301,7 @@ ExecEvalDistinct(FuncExprState *fcache,
* Evaluate arguments
*/
fcinfo = &fcache->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("IS DISTINCT FROM does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, fcache->args, econtext);
Assert(fcinfo->nargs == 2);
if (fcinfo->argnull[0] && fcinfo->argnull[1])
@@ -2553,7 +2337,7 @@ ExecEvalDistinct(FuncExprState *fcache,
static Datum
ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ScalarArrayOpExpr *opexpr = (ScalarArrayOpExpr *) sstate->fxprstate.xprstate.expr;
bool useOr = opexpr->useOr;
@@ -2562,7 +2346,6 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
Datum result;
bool resultnull;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
int i;
int16 typlen;
bool typbyval;
@@ -2571,10 +2354,8 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
bits8 *bitmap;
int bitmask;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -2589,11 +2370,7 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
* Evaluate arguments
*/
fcinfo = &sstate->fxprstate.fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("op ANY/ALL (array) does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, sstate->fxprstate.args, econtext);
Assert(fcinfo->nargs == 2);
/*
@@ -2739,15 +2516,12 @@ ExecEvalScalarArrayOp(ScalarArrayOpExprState *sstate,
*/
static Datum
ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ExprState *clause = linitial(notclause->args);
Datum expr_value;
- if (isDone)
- *isDone = ExprSingleResult;
-
- expr_value = ExecEvalExpr(clause, econtext, isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, isNull);
/*
* if the expression evaluates to null, then we just cascade the null back
@@ -2769,15 +2543,12 @@ ExecEvalNot(BoolExprState *notclause, ExprContext *econtext,
*/
static Datum
ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = orExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2798,7 +2569,7 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null true result, then return it.
@@ -2820,15 +2591,12 @@ ExecEvalOr(BoolExprState *orExpr, ExprContext *econtext,
*/
static Datum
ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = andExpr->args;
ListCell *clause;
bool AnyNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
AnyNull = false;
/*
@@ -2845,7 +2613,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
ExprState *clausestate = (ExprState *) lfirst(clause);
Datum clause_value;
- clause_value = ExecEvalExpr(clausestate, econtext, isNull, NULL);
+ clause_value = ExecEvalExpr(clausestate, econtext, isNull);
/*
* if we have a non-null false result, then return it.
@@ -2871,7 +2639,7 @@ ExecEvalAnd(BoolExprState *andExpr, ExprContext *econtext,
static Datum
ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ConvertRowtypeExpr *convert = (ConvertRowtypeExpr *) cstate->xprstate.expr;
HeapTuple result;
@@ -2879,7 +2647,7 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
HeapTupleHeader tuple;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(cstate->arg, econtext, isNull);
/* this test covers the isDone exception too: */
if (*isNull)
@@ -2955,16 +2723,13 @@ ExecEvalConvertRowtype(ConvertRowtypeExprState *cstate,
*/
static Datum
ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
List *clauses = caseExpr->args;
ListCell *clause;
Datum save_datum;
bool save_isNull;
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* If there's a test expression, we have to evaluate it and save the value
* where the CaseTestExpr placeholders can find it. We must save and
@@ -2989,8 +2754,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
arg_value = ExecEvalExpr(caseExpr->arg,
econtext,
- &arg_isNull,
- NULL);
+ &arg_isNull);
/* Since caseValue_datum may be read multiple times, force to R/O */
econtext->caseValue_datum =
MakeExpandedObjectReadOnly(arg_value,
@@ -3012,8 +2776,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
clause_value = ExecEvalExpr(wclause->expr,
econtext,
- &clause_isNull,
- NULL);
+ &clause_isNull);
/*
* if we have a true test, then we return the result, since the case
@@ -3026,8 +2789,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
econtext->caseValue_isNull = save_isNull;
return ExecEvalExpr(wclause->result,
econtext,
- isNull,
- isDone);
+ isNull);
}
}
@@ -3038,8 +2800,7 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
{
return ExecEvalExpr(caseExpr->defresult,
econtext,
- isNull,
- isDone);
+ isNull);
}
*isNull = true;
@@ -3054,10 +2815,8 @@ ExecEvalCase(CaseExprState *caseExpr, ExprContext *econtext,
static Datum
ExecEvalCaseTestExpr(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->caseValue_isNull;
return econtext->caseValue_datum;
}
@@ -3074,17 +2833,13 @@ ExecEvalCaseTestExpr(ExprState *exprstate,
static Datum
ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
int result = 0;
int attnum = 0;
Bitmapset *grouped_cols = gstate->aggstate->grouped_cols;
ListCell *lc;
- if (isDone)
- *isDone = ExprSingleResult;
-
*isNull = false;
foreach(lc, (gstate->clauses))
@@ -3106,7 +2861,7 @@ ExecEvalGroupingFuncExpr(GroupingFuncExprState *gstate,
*/
static Datum
ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayExpr *arrayExpr = (ArrayExpr *) astate->xprstate.expr;
ArrayType *result;
@@ -3116,10 +2871,8 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
int dims[MAXDIM];
int lbs[MAXDIM];
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
if (!arrayExpr->multidims)
{
@@ -3144,7 +2897,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(element);
- dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i], NULL);
+ dvalues[i] = ExecEvalExpr(e, econtext, &dnulls[i]);
i++;
}
@@ -3194,7 +2947,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
ArrayType *array;
int this_ndims;
- arraydatum = ExecEvalExpr(e, econtext, &eisnull, NULL);
+ arraydatum = ExecEvalExpr(e, econtext, &eisnull);
/* temporarily ignore null subarrays */
if (eisnull)
{
@@ -3333,7 +3086,7 @@ ExecEvalArray(ArrayExprState *astate, ExprContext *econtext,
static Datum
ExecEvalRow(RowExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
HeapTuple tuple;
Datum *values;
@@ -3342,10 +3095,8 @@ ExecEvalRow(RowExprState *rstate,
ListCell *arg;
int i;
- /* Set default values for result flags: non-null, not a set result */
+ /* Set default value for result flag: non-null */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Allocate workspace */
natts = rstate->tupdesc->natts;
@@ -3361,7 +3112,7 @@ ExecEvalRow(RowExprState *rstate,
{
ExprState *e = (ExprState *) lfirst(arg);
- values[i] = ExecEvalExpr(e, econtext, &isnull[i], NULL);
+ values[i] = ExecEvalExpr(e, econtext, &isnull[i]);
i++;
}
@@ -3380,7 +3131,7 @@ ExecEvalRow(RowExprState *rstate,
static Datum
ExecEvalRowCompare(RowCompareExprState *rstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
bool result;
RowCompareType rctype = ((RowCompareExpr *) rstate->xprstate.expr)->rctype;
@@ -3389,8 +3140,6 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
ListCell *r;
int i;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
i = 0;
@@ -3404,9 +3153,9 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
rstate->collations[i],
NULL, NULL);
locfcinfo.arg[0] = ExecEvalExpr(le, econtext,
- &locfcinfo.argnull[0], NULL);
+ &locfcinfo.argnull[0]);
locfcinfo.arg[1] = ExecEvalExpr(re, econtext,
- &locfcinfo.argnull[1], NULL);
+ &locfcinfo.argnull[1]);
if (rstate->funcs[i].fn_strict &&
(locfcinfo.argnull[0] || locfcinfo.argnull[1]))
return (Datum) 0; /* force NULL result */
@@ -3450,20 +3199,17 @@ ExecEvalRowCompare(RowCompareExprState *rstate,
*/
static Datum
ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
-
/* Simply loop through until something NOT NULL is found */
foreach(arg, coalesceExpr->args)
{
ExprState *e = (ExprState *) lfirst(arg);
Datum value;
- value = ExecEvalExpr(e, econtext, isNull, NULL);
+ value = ExecEvalExpr(e, econtext, isNull);
if (!*isNull)
return value;
}
@@ -3479,7 +3225,7 @@ ExecEvalCoalesce(CoalesceExprState *coalesceExpr, ExprContext *econtext,
*/
static Datum
ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
MinMaxExpr *minmax = (MinMaxExpr *) minmaxExpr->xprstate.expr;
@@ -3488,8 +3234,6 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
FunctionCallInfoData locfcinfo;
ListCell *arg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
InitFunctionCallInfoData(locfcinfo, &minmaxExpr->cfunc, 2,
@@ -3504,7 +3248,7 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
bool valueIsNull;
int32 cmpresult;
- value = ExecEvalExpr(e, econtext, &valueIsNull, NULL);
+ value = ExecEvalExpr(e, econtext, &valueIsNull);
if (valueIsNull)
continue; /* ignore NULL inputs */
@@ -3540,14 +3284,12 @@ ExecEvalMinMax(MinMaxExprState *minmaxExpr, ExprContext *econtext,
static Datum
ExecEvalSQLValueFunction(ExprState *svfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result = (Datum) 0;
SQLValueFunction *svf = (SQLValueFunction *) svfExpr->expr;
FunctionCallInfoData fcinfo;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = false;
/*
@@ -3608,7 +3350,7 @@ ExecEvalSQLValueFunction(ExprState *svfExpr,
*/
static Datum
ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
XmlExpr *xexpr = (XmlExpr *) xmlExpr->xprstate.expr;
Datum value;
@@ -3616,8 +3358,6 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ListCell *arg;
ListCell *narg;
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = true; /* until we get a result */
switch (xexpr->op)
@@ -3630,7 +3370,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
{
ExprState *e = (ExprState *) lfirst(arg);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
values = lappend(values, DatumGetPointer(value));
}
@@ -3655,7 +3395,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
ExprState *e = (ExprState *) lfirst(arg);
char *argname = strVal(lfirst(narg));
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (!isnull)
{
appendStringInfo(&buf, "<%s>%s</%s>",
@@ -3698,13 +3438,13 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 2);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetTextP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull) /* probably can't happen */
return (Datum) 0;
preserve_whitespace = DatumGetBool(value);
@@ -3728,7 +3468,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
if (xmlExpr->args)
{
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
arg = NULL;
else
@@ -3755,20 +3495,20 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 3);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
data = DatumGetXmlP(value);
e = (ExprState *) lsecond(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
version = NULL;
else
version = DatumGetTextP(value);
e = (ExprState *) lthird(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
standalone = DatumGetInt32(value);
*isNull = false;
@@ -3787,7 +3527,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
@@ -3805,7 +3545,7 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
Assert(list_length(xmlExpr->args) == 1);
e = (ExprState *) linitial(xmlExpr->args);
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
return (Datum) 0;
else
@@ -3832,14 +3572,10 @@ ExecEvalXml(XmlExprState *xmlExpr, ExprContext *econtext,
static Datum
ExecEvalNullIf(FuncExprState *nullIfExpr,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
FunctionCallInfo fcinfo;
- ExprDoneCond argDone;
-
- if (isDone)
- *isDone = ExprSingleResult;
/*
* Initialize function cache if first time through
@@ -3856,11 +3592,7 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
* Evaluate arguments
*/
fcinfo = &nullIfExpr->fcinfo_data;
- argDone = ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
- if (argDone != ExprSingleResult)
- ereport(ERROR,
- (errcode(ERRCODE_DATATYPE_MISMATCH),
- errmsg("NULLIF does not support set arguments")));
+ ExecEvalFuncArgs(fcinfo, nullIfExpr->args, econtext);
Assert(fcinfo->nargs == 2);
/* if either argument is NULL they can't be equal */
@@ -3890,16 +3622,12 @@ ExecEvalNullIf(FuncExprState *nullIfExpr,
static Datum
ExecEvalNullTest(NullTestState *nstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
NullTest *ntest = (NullTest *) nstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(nstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(nstate->arg, econtext, isNull);
if (ntest->argisrow && !(*isNull))
{
@@ -3999,16 +3727,12 @@ ExecEvalNullTest(NullTestState *nstate,
static Datum
ExecEvalBooleanTest(GenericExprState *bstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
BooleanTest *btest = (BooleanTest *) bstate->xprstate.expr;
Datum result;
- result = ExecEvalExpr(bstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(bstate->arg, econtext, isNull);
switch (btest->booltesttype)
{
@@ -4084,16 +3808,13 @@ ExecEvalBooleanTest(GenericExprState *bstate,
*/
static Datum
ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
CoerceToDomain *ctest = (CoerceToDomain *) cstate->xprstate.expr;
Datum result;
ListCell *l;
- result = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to check */
+ result = ExecEvalExpr(cstate->arg, econtext, isNull);
/* Make sure we have up-to-date constraints */
UpdateDomainConstraintRef(cstate->constraint_ref);
@@ -4138,8 +3859,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
cstate->constraint_ref->tcache->typlen);
econtext->domainValue_isNull = *isNull;
- conResult = ExecEvalExpr(con->check_expr,
- econtext, &conIsNull, NULL);
+ conResult = ExecEvalExpr(con->check_expr, econtext,
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
@@ -4174,10 +3895,8 @@ ExecEvalCoerceToDomain(CoerceToDomainState *cstate, ExprContext *econtext,
static Datum
ExecEvalCoerceToDomainValue(ExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- if (isDone)
- *isDone = ExprSingleResult;
*isNull = econtext->domainValue_isNull;
return econtext->domainValue_datum;
}
@@ -4191,8 +3910,7 @@ ExecEvalCoerceToDomainValue(ExprState *exprstate,
static Datum
ExecEvalFieldSelect(FieldSelectState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldSelect *fselect = (FieldSelect *) fstate->xprstate.expr;
AttrNumber fieldnum = fselect->fieldnum;
@@ -4205,9 +3923,8 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
Form_pg_attribute attr;
HeapTupleData tmptup;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
- /* this test covers the isDone exception too: */
if (*isNull)
return tupDatum;
@@ -4270,8 +3987,7 @@ ExecEvalFieldSelect(FieldSelectState *fstate,
static Datum
ExecEvalFieldStore(FieldStoreState *fstate,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
FieldStore *fstore = (FieldStore *) fstate->xprstate.expr;
HeapTuple tuple;
@@ -4284,10 +4000,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
ListCell *l1,
*l2;
- tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return tupDatum;
+ tupDatum = ExecEvalExpr(fstate->arg, econtext, isNull);
/* Lookup tupdesc if first time through or after rescan */
tupDesc = get_cached_rowtype(fstore->resulttype, -1,
@@ -4347,8 +4060,7 @@ ExecEvalFieldStore(FieldStoreState *fstate,
values[fieldnum - 1] = ExecEvalExpr(newval,
econtext,
- &isnull[fieldnum - 1],
- NULL);
+ &isnull[fieldnum - 1]);
}
econtext->caseValue_datum = save_datum;
@@ -4371,9 +4083,9 @@ ExecEvalFieldStore(FieldStoreState *fstate,
static Datum
ExecEvalRelabelType(GenericExprState *exprstate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
- return ExecEvalExpr(exprstate->arg, econtext, isNull, isDone);
+ return ExecEvalExpr(exprstate->arg, econtext, isNull);
}
/* ----------------------------------------------------------------
@@ -4385,16 +4097,13 @@ ExecEvalRelabelType(GenericExprState *exprstate,
static Datum
ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
Datum result;
Datum inputval;
char *string;
- inputval = ExecEvalExpr(iostate->arg, econtext, isNull, isDone);
-
- if (isDone && *isDone == ExprEndResult)
- return inputval; /* nothing to do */
+ inputval = ExecEvalExpr(iostate->arg, econtext, isNull);
if (*isNull)
string = NULL; /* output functions are not called on nulls */
@@ -4419,16 +4128,14 @@ ExecEvalCoerceViaIO(CoerceViaIOState *iostate,
static Datum
ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ArrayCoerceExpr *acoerce = (ArrayCoerceExpr *) astate->xprstate.expr;
Datum result;
FunctionCallInfoData locfcinfo;
- result = ExecEvalExpr(astate->arg, econtext, isNull, isDone);
+ result = ExecEvalExpr(astate->arg, econtext, isNull);
- if (isDone && *isDone == ExprEndResult)
- return result; /* nothing to do */
if (*isNull)
return result; /* nothing to do */
@@ -4496,7 +4203,7 @@ ExecEvalArrayCoerceExpr(ArrayCoerceExprState *astate,
*/
static Datum
ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone)
+ bool *isNull)
{
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@@ -4513,14 +4220,13 @@ ExecEvalCurrentOfExpr(ExprState *exprstate, ExprContext *econtext,
Datum
ExecEvalExprSwitchContext(ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
Datum retDatum;
MemoryContext oldContext;
oldContext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
- retDatum = ExecEvalExpr(expression, econtext, isNull, isDone);
+ retDatum = ExecEvalExpr(expression, econtext, isNull);
MemoryContextSwitchTo(oldContext);
return retDatum;
}
@@ -5387,7 +5093,7 @@ ExecQual(List *qual, ExprContext *econtext, bool resultForNull)
Datum expr_value;
bool isNull;
- expr_value = ExecEvalExpr(clause, econtext, &isNull, NULL);
+ expr_value = ExecEvalExpr(clause, econtext, &isNull);
if (isNull)
{
@@ -5445,17 +5151,9 @@ ExecCleanTargetListLength(List *targetlist)
/*
* ExecTargetList
* Evaluates a targetlist with respect to the given
- * expression context. Returns TRUE if we were able to create
- * a result, FALSE if we have exhausted a set-valued expression.
+ * expression context.
*
* Results are stored into the passed values and isnull arrays.
- * The caller must provide an itemIsDone array that persists across calls.
- *
- * As with ExecEvalExpr, the caller should pass isDone = NULL if not
- * prepared to deal with sets of result tuples. Otherwise, a return
- * of *isDone = ExprMultipleResult signifies a set element, and a return
- * of *isDone = ExprEndResult signifies end of the set of tuple.
- * We assume that *isDone has been initialized to ExprSingleResult by caller.
*
* Since fields of the result tuple might be multiply referenced in higher
* plan nodes, we have to force any read/write expanded values to read-only
@@ -5464,19 +5162,16 @@ ExecCleanTargetListLength(List *targetlist)
* actually-multiply-referenced Vars and insert an expression node that
* would do that only where really required.
*/
-static bool
+static void
ExecTargetList(List *targetlist,
TupleDesc tupdesc,
ExprContext *econtext,
Datum *values,
- bool *isnull,
- ExprDoneCond *itemIsDone,
- ExprDoneCond *isDone)
+ bool *isnull)
{
Form_pg_attribute *att = tupdesc->attrs;
MemoryContext oldContext;
ListCell *tl;
- bool haveDoneSets;
/*
* Run in short-lived per-tuple context while computing expressions.
@@ -5486,8 +5181,6 @@ ExecTargetList(List *targetlist,
/*
* evaluate all the expressions in the target list
*/
- haveDoneSets = false; /* any exhausted set exprs in tlist? */
-
foreach(tl, targetlist)
{
GenericExprState *gstate = (GenericExprState *) lfirst(tl);
@@ -5496,117 +5189,15 @@ ExecTargetList(List *targetlist,
values[resind] = ExecEvalExpr(gstate->arg,
econtext,
- &isnull[resind],
- &itemIsDone[resind]);
+ &isnull[resind]);
values[resind] = MakeExpandedObjectReadOnly(values[resind],
isnull[resind],
att[resind]->attlen);
-
- if (itemIsDone[resind] != ExprSingleResult)
- {
- /* We have a set-valued expression in the tlist */
- if (isDone == NULL)
- ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("set-valued function called in context that cannot accept a set")));
- if (itemIsDone[resind] == ExprMultipleResult)
- {
- /* we have undone sets in the tlist, set flag */
- *isDone = ExprMultipleResult;
- }
- else
- {
- /* we have done sets in the tlist, set flag for that */
- haveDoneSets = true;
- }
- }
- }
-
- if (haveDoneSets)
- {
- /*
- * note: can't get here unless we verified isDone != NULL
- */
- if (*isDone == ExprSingleResult)
- {
- /*
- * all sets are done, so report that tlist expansion is complete.
- */
- *isDone = ExprEndResult;
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- else
- {
- /*
- * We have some done and some undone sets. Restart the done ones
- * so that we can deliver a tuple (if possible).
- */
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
-
- values[resind] = MakeExpandedObjectReadOnly(values[resind],
- isnull[resind],
- att[resind]->attlen);
-
- if (itemIsDone[resind] == ExprEndResult)
- {
- /*
- * Oh dear, this item is returning an empty set. Guess
- * we can't make a tuple after all.
- */
- *isDone = ExprEndResult;
- break;
- }
- }
- }
-
- /*
- * If we cannot make a tuple because some sets are empty, we still
- * have to cycle the nonempty sets to completion, else resources
- * will not be released from subplans etc.
- *
- * XXX is that still necessary?
- */
- if (*isDone == ExprEndResult)
- {
- foreach(tl, targetlist)
- {
- GenericExprState *gstate = (GenericExprState *) lfirst(tl);
- TargetEntry *tle = (TargetEntry *) gstate->xprstate.expr;
- AttrNumber resind = tle->resno - 1;
-
- while (itemIsDone[resind] == ExprMultipleResult)
- {
- values[resind] = ExecEvalExpr(gstate->arg,
- econtext,
- &isnull[resind],
- &itemIsDone[resind]);
- /* no need for MakeExpandedObjectReadOnly */
- }
- }
-
- MemoryContextSwitchTo(oldContext);
- return false;
- }
- }
}
/* Report success */
MemoryContextSwitchTo(oldContext);
-
- return true;
}
/*
@@ -5623,7 +5214,7 @@ ExecTargetList(List *targetlist,
* result slot.
*/
TupleTableSlot *
-ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
+ExecProject(ProjectionInfo *projInfo)
{
TupleTableSlot *slot;
ExprContext *econtext;
@@ -5640,10 +5231,6 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
slot = projInfo->pi_slot;
econtext = projInfo->pi_exprContext;
- /* Assume single result row until proven otherwise */
- if (isDone)
- *isDone = ExprSingleResult;
-
/*
* Clear any former contents of the result slot. This makes it safe for
* us to use the slot's Datum/isnull arrays as workspace. (Also, we can
@@ -5711,21 +5298,15 @@ ExecProject(ProjectionInfo *projInfo, ExprDoneCond *isDone)
}
/*
- * If there are any generic expressions, evaluate them. It's possible
- * that there are set-returning functions in such expressions; if so and
- * we have reached the end of the set, we return the result slot, which we
- * already marked empty.
+ * If there are any generic expressions, evaluate them.
*/
if (projInfo->pi_targetlist)
{
- if (!ExecTargetList(projInfo->pi_targetlist,
- slot->tts_tupleDescriptor,
- econtext,
- slot->tts_values,
- slot->tts_isnull,
- projInfo->pi_itemIsDone,
- isDone))
- return slot; /* no more result rows, return empty slot */
+ ExecTargetList(projInfo->pi_targetlist,
+ slot->tts_tupleDescriptor,
+ econtext,
+ slot->tts_values,
+ slot->tts_isnull);
}
/*
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index f97db9c211..c0e4641750 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -125,8 +125,6 @@ ExecScan(ScanState *node,
ExprContext *econtext;
List *qual;
ProjectionInfo *projInfo;
- ExprDoneCond isDone;
- TupleTableSlot *resultSlot;
/*
* Fetch data from node
@@ -146,21 +144,6 @@ ExecScan(ScanState *node,
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- Assert(projInfo); /* can't get here if not projecting */
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -214,15 +197,9 @@ ExecScan(ScanState *node,
{
/*
* Form a projection tuple, store it in the result tuple slot
- * and return it --- unless we find we can project no tuples
- * from this scan tuple, in which case continue scan.
+ * and return it.
*/
- resultSlot = ExecProject(projInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(projInfo);
}
else
{
@@ -352,9 +329,6 @@ ExecScanReScan(ScanState *node)
{
EState *estate = node->ps.state;
- /* Stop projecting any tuples from SRFs in the targetlist */
- node->ps.ps_TupFromTlist = false;
-
/* Rescan EvalPlanQual tuple if we're inside an EvalPlanQual recheck */
if (estate->es_epqScanDone != NULL)
{
diff --git a/src/backend/executor/execUtils.c b/src/backend/executor/execUtils.c
index 70646fd15a..e49feff6c0 100644
--- a/src/backend/executor/execUtils.c
+++ b/src/backend/executor/execUtils.c
@@ -586,12 +586,6 @@ ExecBuildProjectionInfo(List *targetList,
projInfo->pi_numSimpleVars = numSimpleVars;
projInfo->pi_directMap = directMap;
- if (exprlist == NIL)
- projInfo->pi_itemIsDone = NULL; /* not needed */
- else
- projInfo->pi_itemIsDone = (ExprDoneCond *)
- palloc(len * sizeof(ExprDoneCond));
-
return projInfo;
}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index dc64b3262a..e4992134bd 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -854,7 +854,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
/* compute input for all aggregates */
if (aggstate->evalproj)
- aggstate->evalslot = ExecProject(aggstate->evalproj, NULL);
+ aggstate->evalslot = ExecProject(aggstate->evalproj);
for (transno = 0; transno < numTrans; transno++)
{
@@ -871,7 +871,7 @@ advance_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
bool isnull;
res = ExecEvalExprSwitchContext(filter, aggstate->tmpcontext,
- &isnull, NULL);
+ &isnull);
if (isnull || !DatumGetBool(res))
continue;
}
@@ -970,7 +970,7 @@ combine_aggregates(AggState *aggstate, AggStatePerGroup pergroup)
Assert(aggstate->phase->numsets == 0);
/* compute input for all aggregates */
- slot = ExecProject(aggstate->evalproj, NULL);
+ slot = ExecProject(aggstate->evalproj);
for (transno = 0; transno < numTrans; transno++)
{
@@ -1368,8 +1368,7 @@ finalize_aggregate(AggState *aggstate,
fcinfo.arg[i] = ExecEvalExpr(expr,
aggstate->ss.ps.ps_ExprContext,
- &fcinfo.argnull[i],
- NULL);
+ &fcinfo.argnull[i]);
anynull |= fcinfo.argnull[i];
i++;
}
@@ -1630,7 +1629,7 @@ finalize_aggregates(AggState *aggstate,
/*
* Project the result of a group (whose aggs have already been calculated by
* finalize_aggregates). Returns the result slot, or NULL if no row is
- * projected (suppressed by qual or by an empty SRF).
+ * projected (suppressed by qual).
*/
static TupleTableSlot *
project_aggregates(AggState *aggstate)
@@ -1643,20 +1642,10 @@ project_aggregates(AggState *aggstate)
if (ExecQual(aggstate->ss.ps.qual, econtext, false))
{
/*
- * Form and return or store a projection tuple using the aggregate
- * results and the representative input tuple.
+ * Form and return projection tuple using the aggregate results and
+ * the representative input tuple.
*/
- ExprDoneCond isDone;
- TupleTableSlot *result;
-
- result = ExecProject(aggstate->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- aggstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(aggstate->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(aggstate, 1);
@@ -1911,27 +1900,6 @@ ExecAgg(AggState *node)
{
TupleTableSlot *result;
- /*
- * Check to see if we're still projecting out tuples from a previous agg
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
- * (We must do the ps_TupFromTlist check first, because in some cases
- * agg_done gets set before we emit the final aggregate tuple, and we have
- * to finish running SRFs for it.)
- */
if (!node->agg_done)
{
/* Dispatch based on strategy */
@@ -2571,8 +2539,6 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&aggstate->ss.ps);
ExecAssignProjectionInfo(&aggstate->ss.ps, NULL);
- aggstate->ss.ps.ps_TupFromTlist = false;
-
/*
* get the count of aggregates in targetlist and quals
*/
@@ -3575,8 +3541,6 @@ ExecReScanAgg(AggState *node)
node->agg_done = false;
- node->ss.ps.ps_TupFromTlist = false;
-
if (aggnode->aggstrategy == AGG_HASHED)
{
/*
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index d5fd57ae4b..f18827de0b 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -575,8 +575,6 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeCtescan.c b/src/backend/executor/nodeCtescan.c
index 2f9c007409..610797b36b 100644
--- a/src/backend/executor/nodeCtescan.c
+++ b/src/backend/executor/nodeCtescan.c
@@ -269,8 +269,6 @@ ExecInitCteScan(CteScan *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
ExecAssignScanProjectionInfo(&scanstate->ss);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c
index b01e65f362..a27430242a 100644
--- a/src/backend/executor/nodeCustom.c
+++ b/src/backend/executor/nodeCustom.c
@@ -48,8 +48,6 @@ ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags)
/* create expression context for node */
ExecAssignExprContext(estate, &css->ss.ps);
- css->ss.ps.ps_TupFromTlist = false;
-
/* initialize child expressions */
css->ss.ps.targetlist = (List *)
ExecInitExpr((Expr *) cscan->scan.plan.targetlist,
diff --git a/src/backend/executor/nodeForeignscan.c b/src/backend/executor/nodeForeignscan.c
index 8f21c17f24..86a77e356c 100644
--- a/src/backend/executor/nodeForeignscan.c
+++ b/src/backend/executor/nodeForeignscan.c
@@ -152,8 +152,6 @@ ExecInitForeignScan(ForeignScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeFunctionscan.c b/src/backend/executor/nodeFunctionscan.c
index 1b593dcd71..972022784d 100644
--- a/src/backend/executor/nodeFunctionscan.c
+++ b/src/backend/executor/nodeFunctionscan.c
@@ -331,8 +331,6 @@ ExecInitFunctionScan(FunctionScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index f95c3d1b19..92b361ebb3 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -100,8 +100,6 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
outerNode = outerPlan(node);
outerPlanState(gatherstate) = ExecInitNode(outerNode, estate, eflags);
- gatherstate->ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -132,8 +130,6 @@ ExecGather(GatherState *node)
TupleTableSlot *fslot = node->funnel_slot;
int i;
TupleTableSlot *slot;
- TupleTableSlot *resultSlot;
- ExprDoneCond isDone;
ExprContext *econtext;
/*
@@ -200,20 +196,6 @@ ExecGather(GatherState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note we can't do this
* until we're done projecting. This will also clear any previous tuple
@@ -241,13 +223,8 @@ ExecGather(GatherState *node)
* back around for another tuple
*/
econtext->ecxt_outertuple = slot;
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ return ExecProject(node->ps.ps_ProjInfo);
}
return slot;
diff --git a/src/backend/executor/nodeGroup.c b/src/backend/executor/nodeGroup.c
index 6a05023e50..66c095bc72 100644
--- a/src/backend/executor/nodeGroup.c
+++ b/src/backend/executor/nodeGroup.c
@@ -50,23 +50,6 @@ ExecGroup(GroupState *node)
grpColIdx = ((Group *) node->ss.ps.plan)->grpColIdx;
/*
- * Check to see if we're still projecting out tuples from a previous group
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* The ScanTupleSlot holds the (copied) first tuple of each group.
*/
firsttupleslot = node->ss.ss_ScanTupleSlot;
@@ -107,16 +90,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -170,16 +144,7 @@ ExecGroup(GroupState *node)
/*
* Form and return a projection tuple using the first input tuple.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->ss.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ss.ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->ss.ps.ps_ProjInfo);
}
else
InstrCountFiltered1(node, 1);
@@ -246,8 +211,6 @@ ExecInitGroup(Group *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&grpstate->ss.ps);
ExecAssignProjectionInfo(&grpstate->ss.ps, NULL);
- grpstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Precompute fmgr lookup data for inner loop
*/
@@ -283,7 +246,6 @@ ExecReScanGroup(GroupState *node)
PlanState *outerPlan = outerPlanState(node);
node->grp_done = FALSE;
- node->ss.ps.ps_TupFromTlist = false;
/* must clear first tuple */
ExecClearTuple(node->ss.ss_ScanTupleSlot);
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 11db08f5fa..af5934d2bc 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -959,7 +959,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
/*
* Get the join attribute value of the tuple
*/
- keyval = ExecEvalExpr(keyexpr, econtext, &isNull, NULL);
+ keyval = ExecEvalExpr(keyexpr, econtext, &isNull);
/*
* If the attribute is NULL, and the join operator is strict, then
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index b41e4e2f98..f34e476bad 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -66,7 +66,6 @@ ExecHashJoin(HashJoinState *node)
List *joinqual;
List *otherqual;
ExprContext *econtext;
- ExprDoneCond isDone;
HashJoinTable hashtable;
TupleTableSlot *outerTupleSlot;
uint32 hashvalue;
@@ -83,22 +82,6 @@ ExecHashJoin(HashJoinState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -314,18 +297,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
}
@@ -353,18 +325,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
}
@@ -392,18 +353,7 @@ ExecHashJoin(HashJoinState *node)
if (otherqual == NIL ||
ExecQual(otherqual, econtext, false))
- {
- TupleTableSlot *result;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
else
InstrCountFiltered2(node, 1);
break;
@@ -586,7 +536,6 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
/* child Hash node needs to evaluate inner hash keys, too */
((HashState *) innerPlanState(hjstate))->hashkeys = rclauses;
- hjstate->js.ps.ps_TupFromTlist = false;
hjstate->hj_JoinState = HJ_BUILD_HASHTABLE;
hjstate->hj_MatchedOuter = false;
hjstate->hj_OuterNotEmpty = false;
@@ -1000,7 +949,6 @@ ExecReScanHashJoin(HashJoinState *node)
node->hj_CurSkewBucketNo = INVALID_SKEW_BUCKET_NO;
node->hj_CurTuple = NULL;
- node->js.ps.ps_TupFromTlist = false;
node->hj_MatchedOuter = false;
node->hj_FirstOuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index ddef3a42bf..d5b19b7c11 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -412,8 +412,6 @@ ExecInitIndexOnlyScan(IndexOnlyScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 97a6fac34d..5734550d2c 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -336,8 +336,7 @@ EvalOrderByExpressions(IndexScanState *node, ExprContext *econtext)
node->iss_OrderByValues[i] = ExecEvalExpr(orderby,
econtext,
- &node->iss_OrderByNulls[i],
- NULL);
+ &node->iss_OrderByNulls[i]);
i++;
}
@@ -590,8 +589,7 @@ ExecIndexEvalRuntimeKeys(ExprContext *econtext,
*/
scanvalue = ExecEvalExpr(key_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
scan_key->sk_argument = scanvalue;
@@ -648,8 +646,7 @@ ExecIndexEvalArrayKeys(ExprContext *econtext,
*/
arraydatum = ExecEvalExpr(array_expr,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
{
result = false;
@@ -837,8 +834,6 @@ ExecInitIndexScan(IndexScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &indexstate->ss.ps);
- indexstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*
diff --git a/src/backend/executor/nodeLimit.c b/src/backend/executor/nodeLimit.c
index 885931e594..aaec132218 100644
--- a/src/backend/executor/nodeLimit.c
+++ b/src/backend/executor/nodeLimit.c
@@ -239,8 +239,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitOffset,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL offset as no offset */
if (isNull)
node->offset = 0;
@@ -263,8 +262,7 @@ recompute_limits(LimitState *node)
{
val = ExecEvalExprSwitchContext(node->limitCount,
econtext,
- &isNull,
- NULL);
+ &isNull);
/* Interpret NULL count as no count (LIMIT ALL) */
if (isNull)
{
@@ -346,18 +344,11 @@ pass_down_bound(LimitState *node, PlanState *child_node)
else if (IsA(child_node, ResultState))
{
/*
- * An extra consideration here is that if the Result is projecting a
- * targetlist that contains any SRFs, we can't assume that every input
- * tuple generates an output tuple, so a Sort underneath might need to
- * return more than N tuples to satisfy LIMIT N. So we cannot use
- * bounded sort.
- *
* If Result supported qual checking, we'd have to punt on seeing a
- * qual, too. Note that having a resconstantqual is not a
- * showstopper: if that fails we're not getting any rows at all.
+ * qual. Note that having a resconstantqual is not a showstopper: if
+ * that fails we're not getting any rows at all.
*/
- if (outerPlanState(child_node) &&
- !expression_returns_set((Node *) child_node->plan->targetlist))
+ if (outerPlanState(child_node))
pass_down_bound(node, outerPlanState(child_node));
}
}
diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c
index 2fd1856603..5150776b00 100644
--- a/src/backend/executor/nodeMergejoin.c
+++ b/src/backend/executor/nodeMergejoin.c
@@ -313,7 +313,7 @@ MJEvalOuterValues(MergeJoinState *mergestate)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->ldatum = ExecEvalExpr(clause->lexpr, econtext,
- &clause->lisnull, NULL);
+ &clause->lisnull);
if (clause->lisnull)
{
/* match is impossible; can we end the join early? */
@@ -360,7 +360,7 @@ MJEvalInnerValues(MergeJoinState *mergestate, TupleTableSlot *innerslot)
MergeJoinClause clause = &mergestate->mj_Clauses[i];
clause->rdatum = ExecEvalExpr(clause->rexpr, econtext,
- &clause->risnull, NULL);
+ &clause->risnull);
if (clause->risnull)
{
/* match is impossible; can we end the join early? */
@@ -465,19 +465,10 @@ MJFillOuter(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
MJ_printf("ExecMergeJoin: returning outer fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -506,19 +497,9 @@ MJFillInner(MergeJoinState *node)
* qualification succeeded. now form the desired projection tuple and
* return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning inner fill tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -642,23 +623,6 @@ ExecMergeJoin(MergeJoinState *node)
doFillInner = node->mj_FillInner;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -856,20 +820,9 @@ ExecMergeJoin(MergeJoinState *node)
* qualification succeeded. now form the desired
* projection tuple and return the slot containing it.
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
MJ_printf("ExecMergeJoin: returning tuple\n");
- result = ExecProject(node->js.ps.ps_ProjInfo,
- &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -1629,7 +1582,6 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
* initialize join state
*/
mergestate->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- mergestate->js.ps.ps_TupFromTlist = false;
mergestate->mj_MatchedOuter = false;
mergestate->mj_MatchedInner = false;
mergestate->mj_OuterTupleSlot = NULL;
@@ -1684,7 +1636,6 @@ ExecReScanMergeJoin(MergeJoinState *node)
ExecClearTuple(node->mj_MarkedTupleSlot);
node->mj_JoinState = EXEC_MJ_INITIALIZE_OUTER;
- node->js.ps.ps_TupFromTlist = false;
node->mj_MatchedOuter = false;
node->mj_MatchedInner = false;
node->mj_OuterTupleSlot = NULL;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 4692427e60..dab9c4129a 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -175,7 +175,7 @@ ExecProcessReturning(ResultRelInfo *resultRelInfo,
econtext->ecxt_outertuple = planSlot;
/* Compute the RETURNING expressions */
- return ExecProject(projectReturning, NULL);
+ return ExecProject(projectReturning);
}
/*
@@ -1302,7 +1302,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
}
/* Project the new tuple version */
- ExecProject(resultRelInfo->ri_onConflictSetProj, NULL);
+ ExecProject(resultRelInfo->ri_onConflictSetProj);
/*
* Note that it is possible that the target tuple has been modified in
diff --git a/src/backend/executor/nodeNestloop.c b/src/backend/executor/nodeNestloop.c
index e05842768a..5af04fde04 100644
--- a/src/backend/executor/nodeNestloop.c
+++ b/src/backend/executor/nodeNestloop.c
@@ -82,23 +82,6 @@ ExecNestLoop(NestLoopState *node)
econtext = node->js.ps.ps_ExprContext;
/*
- * Check to see if we're still projecting out tuples from a previous join
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->js.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- node->js.ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a join tuple.
@@ -201,19 +184,10 @@ ExecNestLoop(NestLoopState *node)
* the slot containing the result tuple using
* ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -259,19 +233,10 @@ ExecNestLoop(NestLoopState *node)
* qualification was satisfied so we project and return the
* slot containing the result tuple using ExecProject().
*/
- TupleTableSlot *result;
- ExprDoneCond isDone;
ENL1_printf("qualification succeeded, projecting tuple");
- result = ExecProject(node->js.ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->js.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
- }
+ return ExecProject(node->js.ps.ps_ProjInfo);
}
else
InstrCountFiltered2(node, 1);
@@ -377,7 +342,6 @@ ExecInitNestLoop(NestLoop *node, EState *estate, int eflags)
/*
* finally, wipe the current outer tuple clean.
*/
- nlstate->js.ps.ps_TupFromTlist = false;
nlstate->nl_NeedNewOuter = true;
nlstate->nl_MatchedOuter = false;
@@ -441,7 +405,6 @@ ExecReScanNestLoop(NestLoopState *node)
* outer Vars are used as run-time keys...
*/
- node->js.ps.ps_TupFromTlist = false;
node->nl_NeedNewOuter = true;
node->nl_MatchedOuter = false;
}
diff --git a/src/backend/executor/nodeProjectSet.c b/src/backend/executor/nodeProjectSet.c
index 391e97ea6f..eae0f1dad9 100644
--- a/src/backend/executor/nodeProjectSet.c
+++ b/src/backend/executor/nodeProjectSet.c
@@ -169,7 +169,7 @@ ExecProjectSRF(ProjectSetState *node, bool continuing)
else
{
/* Non-SRF tlist expression, just evaluate normally. */
- *result = ExecEvalExpr(gstate->arg, econtext, isnull, NULL);
+ *result = ExecEvalExpr(gstate->arg, econtext, isnull);
*isdone = ExprSingleResult;
}
diff --git a/src/backend/executor/nodeResult.c b/src/backend/executor/nodeResult.c
index 59dacd33ef..759cbe6aec 100644
--- a/src/backend/executor/nodeResult.c
+++ b/src/backend/executor/nodeResult.c
@@ -67,10 +67,8 @@ TupleTableSlot *
ExecResult(ResultState *node)
{
TupleTableSlot *outerTupleSlot;
- TupleTableSlot *resultSlot;
PlanState *outerPlan;
ExprContext *econtext;
- ExprDoneCond isDone;
econtext = node->ps.ps_ExprContext;
@@ -92,20 +90,6 @@ ExecResult(ResultState *node)
}
/*
- * Check to see if we're still projecting out tuples from a previous scan
- * tuple (because there is a function-returning-set in the projection
- * expressions). If so, try to project another one.
- */
- if (node->ps.ps_TupFromTlist)
- {
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return resultSlot;
- /* Done with that source tuple... */
- node->ps.ps_TupFromTlist = false;
- }
-
- /*
* Reset per-tuple memory context to free any expression evaluation
* storage allocated in the previous tuple cycle. Note this can't happen
* until we're done projecting out tuples from a scan tuple.
@@ -147,18 +131,8 @@ ExecResult(ResultState *node)
node->rs_done = true;
}
- /*
- * form the result tuple using ExecProject(), and return it --- unless
- * the projection produces an empty set, in which case we must loop
- * back to see if there are more outerPlan tuples.
- */
- resultSlot = ExecProject(node->ps.ps_ProjInfo, &isDone);
-
- if (isDone != ExprEndResult)
- {
- node->ps.ps_TupFromTlist = (isDone == ExprMultipleResult);
- return resultSlot;
- }
+ /* form the result tuple using ExecProject(), and return it */
+ return ExecProject(node->ps.ps_ProjInfo);
}
return NULL;
@@ -228,8 +202,6 @@ ExecInitResult(Result *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &resstate->ps);
- resstate->ps.ps_TupFromTlist = false;
-
/*
* tuple table initialization
*/
@@ -295,7 +267,6 @@ void
ExecReScanResult(ResultState *node)
{
node->rs_done = false;
- node->ps.ps_TupFromTlist = false;
node->rs_checkqual = (node->resconstantqual == NULL) ? false : true;
/*
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 8db5469d5a..d38265e810 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -189,8 +189,6 @@ ExecInitSampleScan(SampleScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
@@ -300,8 +298,7 @@ tablesample_init(SampleScanState *scanstate)
params[i] = ExecEvalExprSwitchContext(argstate,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_ARGUMENT),
@@ -313,8 +310,7 @@ tablesample_init(SampleScanState *scanstate)
{
datum = ExecEvalExprSwitchContext(scanstate->repeatable,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_INVALID_TABLESAMPLE_REPEAT),
diff --git a/src/backend/executor/nodeSeqscan.c b/src/backend/executor/nodeSeqscan.c
index 439a94694b..e61895de0a 100644
--- a/src/backend/executor/nodeSeqscan.c
+++ b/src/backend/executor/nodeSeqscan.c
@@ -206,8 +206,6 @@ ExecInitSeqScan(SeqScan *node, EState *estate, int eflags)
*/
InitScanRelation(scanstate, estate, eflags);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index 68edcd4567..12115bc541 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -41,12 +41,10 @@
static Datum ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
static Datum ExecHashSubPlan(SubPlanState *node,
ExprContext *econtext,
bool *isNull);
@@ -69,15 +67,12 @@ static bool slotNoNulls(TupleTableSlot *slot);
static Datum
ExecSubPlan(SubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
SubPlan *subplan = (SubPlan *) node->xprstate.expr;
/* Set default values for result flags: non-null, not a set result */
*isNull = false;
- if (isDone)
- *isDone = ExprSingleResult;
/* Sanity checks */
if (subplan->subLinkType == CTE_SUBLINK)
@@ -128,7 +123,7 @@ ExecHashSubPlan(SubPlanState *node,
* have to set the econtext to use (hack alert!).
*/
node->projLeft->pi_exprContext = econtext;
- slot = ExecProject(node->projLeft, NULL);
+ slot = ExecProject(node->projLeft);
/*
* Note: because we are typically called in a per-tuple context, we have
@@ -285,8 +280,7 @@ ExecScanSubPlan(SubPlanState *node,
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -403,7 +397,7 @@ ExecScanSubPlan(SubPlanState *node,
}
rowresult = ExecEvalExprSwitchContext(node->testexpr, econtext,
- &rownull, NULL);
+ &rownull);
if (subLinkType == ANY_SUBLINK)
{
@@ -572,7 +566,7 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)
&(prmdata->isnull));
col++;
}
- slot = ExecProject(node->projRight, NULL);
+ slot = ExecProject(node->projRight);
/*
* If result contains any nulls, store separately or not at all.
@@ -985,8 +979,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
prm->value = ExecEvalExprSwitchContext((ExprState *) lfirst(pvar),
econtext,
- &(prm->isnull),
- NULL);
+ &(prm->isnull));
planstate->chgParam = bms_add_member(planstate->chgParam, paramid);
}
@@ -1222,8 +1215,7 @@ ExecInitAlternativeSubPlan(AlternativeSubPlan *asplan, PlanState *parent)
static Datum
ExecAlternativeSubPlan(AlternativeSubPlanState *node,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone)
+ bool *isNull)
{
/* Just pass control to the active subplan */
SubPlanState *activesp = (SubPlanState *) list_nth(node->subplans,
@@ -1231,8 +1223,5 @@ ExecAlternativeSubPlan(AlternativeSubPlanState *node,
Assert(IsA(activesp, SubPlanState));
- return ExecSubPlan(activesp,
- econtext,
- isNull,
- isDone);
+ return ExecSubPlan(activesp, econtext, isNull);
}
diff --git a/src/backend/executor/nodeSubqueryscan.c b/src/backend/executor/nodeSubqueryscan.c
index a4387da80a..230a96f9d2 100644
--- a/src/backend/executor/nodeSubqueryscan.c
+++ b/src/backend/executor/nodeSubqueryscan.c
@@ -138,8 +138,6 @@ ExecInitSubqueryScan(SubqueryScan *node, EState *estate, int eflags)
*/
subquerystate->subplan = ExecInitNode(node->subplan, estate, eflags);
- subquerystate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize scan tuple type (needed by ExecAssignScanProjectionInfo)
*/
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index e3d3fc3842..13ed886577 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -104,8 +104,7 @@ TidListCreate(TidScanState *tidstate)
itemptr = (ItemPointer)
DatumGetPointer(ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL));
+ &isNull));
if (!isNull &&
ItemPointerIsValid(itemptr) &&
ItemPointerGetBlockNumber(itemptr) < nblocks)
@@ -133,8 +132,7 @@ TidListCreate(TidScanState *tidstate)
exstate = (ExprState *) lsecond(saexstate->fxprstate.args);
arraydatum = ExecEvalExprSwitchContext(exstate,
econtext,
- &isNull,
- NULL);
+ &isNull);
if (isNull)
continue;
itemarray = DatumGetArrayTypeP(arraydatum);
@@ -469,8 +467,6 @@ ExecInitTidScan(TidScan *node, EState *estate, int eflags)
*/
ExecAssignExprContext(estate, &tidstate->ss.ps);
- tidstate->ss.ps.ps_TupFromTlist = false;
-
/*
* initialize child expressions
*/
diff --git a/src/backend/executor/nodeValuesscan.c b/src/backend/executor/nodeValuesscan.c
index 5b42ca93cf..9883a8b130 100644
--- a/src/backend/executor/nodeValuesscan.c
+++ b/src/backend/executor/nodeValuesscan.c
@@ -140,8 +140,7 @@ ValuesNext(ValuesScanState *node)
values[resind] = ExecEvalExpr(estate,
econtext,
- &isnull[resind],
- NULL);
+ &isnull[resind]);
/*
* We must force any R/W expanded datums to read-only state, in
@@ -272,8 +271,6 @@ ExecInitValuesScan(ValuesScan *node, EState *estate, int eflags)
scanstate->exprlists[i++] = (List *) lfirst(vtl);
}
- scanstate->ss.ps.ps_TupFromTlist = false;
-
/*
* Initialize result tuple type and projection info.
*/
diff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c
index 17884d2c44..6ac6b83cdd 100644
--- a/src/backend/executor/nodeWindowAgg.c
+++ b/src/backend/executor/nodeWindowAgg.c
@@ -256,7 +256,7 @@ advance_windowaggregate(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -272,7 +272,7 @@ advance_windowaggregate(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -433,7 +433,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
if (filter)
{
bool isnull;
- Datum res = ExecEvalExpr(filter, econtext, &isnull, NULL);
+ Datum res = ExecEvalExpr(filter, econtext, &isnull);
if (isnull || !DatumGetBool(res))
{
@@ -449,7 +449,7 @@ advance_windowaggregate_base(WindowAggState *winstate,
ExprState *argstate = (ExprState *) lfirst(arg);
fcinfo->arg[i] = ExecEvalExpr(argstate, econtext,
- &fcinfo->argnull[i], NULL);
+ &fcinfo->argnull[i]);
i++;
}
@@ -1584,15 +1584,12 @@ update_frametailpos(WindowObject winobj, TupleTableSlot *slot)
* ExecWindowAgg receives tuples from its outer subplan and
* stores them into a tuplestore, then processes window functions.
* This node doesn't reduce nor qualify any row so the number of
- * returned rows is exactly the same as its outer subplan's result
- * (ignoring the case of SRFs in the targetlist, that is).
+ * returned rows is exactly the same as its outer subplan's result.
* -----------------
*/
TupleTableSlot *
ExecWindowAgg(WindowAggState *winstate)
{
- TupleTableSlot *result;
- ExprDoneCond isDone;
ExprContext *econtext;
int i;
int numfuncs;
@@ -1601,23 +1598,6 @@ ExecWindowAgg(WindowAggState *winstate)
return NULL;
/*
- * Check to see if we're still projecting out tuples from a previous
- * output tuple (because there is a function-returning-set in the
- * projection expressions). If so, try to project another one.
- */
- if (winstate->ss.ps.ps_TupFromTlist)
- {
- TupleTableSlot *result;
- ExprDoneCond isDone;
-
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprMultipleResult)
- return result;
- /* Done with that source tuple... */
- winstate->ss.ps.ps_TupFromTlist = false;
- }
-
- /*
* Compute frame offset values, if any, during first call.
*/
if (winstate->all_first)
@@ -1634,8 +1614,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->startOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->startOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1660,8 +1639,7 @@ ExecWindowAgg(WindowAggState *winstate)
Assert(winstate->endOffset != NULL);
value = ExecEvalExprSwitchContext(winstate->endOffset,
econtext,
- &isnull,
- NULL);
+ &isnull);
if (isnull)
ereport(ERROR,
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
@@ -1684,7 +1662,6 @@ ExecWindowAgg(WindowAggState *winstate)
winstate->all_first = false;
}
-restart:
if (winstate->buffer == NULL)
{
/* Initialize for first partition and set current row = 0 */
@@ -1776,17 +1753,8 @@ restart:
* evaluated with respect to that row.
*/
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
- result = ExecProject(winstate->ss.ps.ps_ProjInfo, &isDone);
- if (isDone == ExprEndResult)
- {
- /* SRF in tlist returned no rows, so advance to next input tuple */
- goto restart;
- }
-
- winstate->ss.ps.ps_TupFromTlist =
- (isDone == ExprMultipleResult);
- return result;
+ return ExecProject(winstate->ss.ps.ps_ProjInfo);
}
/* -----------------
@@ -1896,8 +1864,6 @@ ExecInitWindowAgg(WindowAgg *node, EState *estate, int eflags)
ExecAssignResultTypeFromTL(&winstate->ss.ps);
ExecAssignProjectionInfo(&winstate->ss.ps, NULL);
- winstate->ss.ps.ps_TupFromTlist = false;
-
/* Set up data for comparing tuples */
if (node->partNumCols > 0)
winstate->partEqfunctions = execTuplesMatchPrepare(node->partNumCols,
@@ -2090,8 +2056,6 @@ ExecReScanWindowAgg(WindowAggState *node)
ExprContext *econtext = node->ss.ps.ps_ExprContext;
node->all_done = false;
-
- node->ss.ps.ps_TupFromTlist = false;
node->all_first = true;
/* release tuplestore et al */
@@ -2712,7 +2676,7 @@ WinGetFuncArgInPartition(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2811,7 +2775,7 @@ WinGetFuncArgInFrame(WindowObject winobj, int argno,
}
econtext->ecxt_outertuple = slot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
}
@@ -2841,5 +2805,5 @@ WinGetFuncArgCurrent(WindowObject winobj, int argno, bool *isnull)
econtext->ecxt_outertuple = winstate->ss.ss_ScanTupleSlot;
return ExecEvalExpr((ExprState *) list_nth(winobj->argstates, argno),
- econtext, isnull, NULL);
+ econtext, isnull);
}
diff --git a/src/backend/executor/nodeWorktablescan.c b/src/backend/executor/nodeWorktablescan.c
index 73a1a8238a..bdba9e0bfc 100644
--- a/src/backend/executor/nodeWorktablescan.c
+++ b/src/backend/executor/nodeWorktablescan.c
@@ -174,8 +174,6 @@ ExecInitWorkTableScan(WorkTableScan *node, EState *estate, int eflags)
*/
ExecAssignResultTypeFromTL(&scanstate->ss.ps);
- scanstate->ss.ps.ps_TupFromTlist = false;
-
return scanstate;
}
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 85ffa3afc7..83519fa140 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -4303,7 +4303,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* Forget it if the function is not SQL-language or has other showstopper
- * properties. (The nargs check is just paranoia.)
+ * properties. (The nargs and retset checks are just paranoia.)
*/
if (funcform->prolang != SQLlanguageId ||
funcform->prosecdef ||
@@ -4685,7 +4685,7 @@ evaluate_expr(Expr *expr, Oid result_type, int32 result_typmod,
*/
const_val = ExecEvalExprSwitchContext(exprstate,
GetPerTupleExprContext(estate),
- &const_is_null, NULL);
+ &const_is_null);
/* Get info needed about result datatype */
get_typlenbyval(result_type, &resultTypLen, &resultTypByVal);
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index fd009e135e..c4a04cfa95 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -1596,7 +1596,7 @@ operator_predicate_proof(Expr *predicate, Node *clause, bool refute_it)
/* And execute it. */
test_result = ExecEvalExprSwitchContext(test_exprstate,
GetPerTupleExprContext(estate),
- &isNull, NULL);
+ &isNull);
/* Get back to outer memory context */
MemoryContextSwitchTo(oldcontext);
diff --git a/src/backend/utils/adt/domains.c b/src/backend/utils/adt/domains.c
index 14fa119f07..c2ad440013 100644
--- a/src/backend/utils/adt/domains.c
+++ b/src/backend/utils/adt/domains.c
@@ -179,7 +179,7 @@ domain_check_input(Datum value, bool isnull, DomainIOData *my_extra)
conResult = ExecEvalExprSwitchContext(con->check_expr,
econtext,
- &conIsNull, NULL);
+ &conIsNull);
if (!conIsNull &&
!DatumGetBool(conResult))
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index dcc5d6287a..e8bce3b806 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -603,7 +603,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
if (isnull)
str = NULL;
else
@@ -620,7 +620,7 @@ xmlelement(XmlExprState *xmlExpr, ExprContext *econtext)
bool isnull;
char *str;
- value = ExecEvalExpr(e, econtext, &isnull, NULL);
+ value = ExecEvalExpr(e, econtext, &isnull);
/* here we can just forget NULL elements immediately */
if (!isnull)
{
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d424031676..d00014a191 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -70,8 +70,8 @@
* now it's just a macro invoking the function pointed to by an ExprState
* node. Beware of double evaluation of the ExprState argument!
*/
-#define ExecEvalExpr(expr, econtext, isNull, isDone) \
- ((*(expr)->evalfunc) (expr, econtext, isNull, isDone))
+#define ExecEvalExpr(expr, econtext, isNull) \
+ ((*(expr)->evalfunc) (expr, econtext, isNull))
/* Hook for plugins to get control in ExecutorStart() */
@@ -258,14 +258,13 @@ extern Datum ExecMakeFunctionResultSet(FuncExprState *fcache,
bool *isNull,
ExprDoneCond *isDone);
extern Datum ExecEvalExprSwitchContext(ExprState *expression, ExprContext *econtext,
- bool *isNull, ExprDoneCond *isDone);
+ bool *isNull);
extern ExprState *ExecInitExpr(Expr *node, PlanState *parent);
extern ExprState *ExecPrepareExpr(Expr *node, EState *estate);
extern bool ExecQual(List *qual, ExprContext *econtext, bool resultForNull);
extern int ExecTargetListLength(List *targetlist);
extern int ExecCleanTargetListLength(List *targetlist);
-extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo,
- ExprDoneCond *isDone);
+extern TupleTableSlot *ExecProject(ProjectionInfo *projInfo);
/*
* prototypes from functions in execScan.c
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 1da1e1f804..cf1e5ef1e8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -156,7 +156,8 @@ typedef struct ExprContext
} ExprContext;
/*
- * Set-result status returned by ExecEvalExpr()
+ * Set-result status used when evaluating functions potentially returning a
+ * set.
*/
typedef enum
{
@@ -245,7 +246,6 @@ typedef struct ProjectionInfo
List *pi_targetlist;
ExprContext *pi_exprContext;
TupleTableSlot *pi_slot;
- ExprDoneCond *pi_itemIsDone;
bool pi_directMap;
int pi_numSimpleVars;
int *pi_varSlotOffsets;
@@ -586,8 +586,7 @@ typedef struct ExprState ExprState;
typedef Datum (*ExprStateEvalFunc) (ExprState *expression,
ExprContext *econtext,
- bool *isNull,
- ExprDoneCond *isDone);
+ bool *isNull);
struct ExprState
{
@@ -732,13 +731,6 @@ typedef struct FuncExprState
bool setArgsValid;
/*
- * Flag to remember whether we found a set-valued argument to the
- * function. This causes the function result to be a set as well. Valid
- * only when setArgsValid is true or funcResultStore isn't NULL.
- */
- bool setHasSetArg; /* some argument returns a set */
-
- /*
* Flag to remember whether we have registered a shutdown callback for
* this FuncExprState. We do so only if funcResultStore or setArgsValid
* has been set at least once (since all the callback is for is to release
@@ -1081,8 +1073,6 @@ typedef struct PlanState
TupleTableSlot *ps_ResultTupleSlot; /* slot for my result tuples */
ExprContext *ps_ExprContext; /* node's expression-evaluation context */
ProjectionInfo *ps_ProjInfo; /* info for doing tuple projection */
- bool ps_TupFromTlist;/* state flag for processing set-valued
- * functions in targetlist */
} PlanState;
/* ----------------
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index bc7b00199e..b48146a362 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -5606,8 +5606,7 @@ exec_eval_simple_expr(PLpgSQL_execstate *estate,
*/
*result = ExecEvalExpr(expr->expr_simple_state,
econtext,
- isNull,
- NULL);
+ isNull);
/* Assorted cleanup */
expr->expr_simple_in_use = false;
@@ -6272,7 +6271,7 @@ exec_cast_value(PLpgSQL_execstate *estate,
cast_entry->cast_in_use = true;
value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
- isnull, NULL);
+ isnull);
cast_entry->cast_in_use = false;
--
2.11.0.22.g8d7a455.dirty
Andres Freund <andres@anarazel.de> writes:
(I also noticed the previous patch should have had a catversion bump :(,
will do after the meeting).
Uh, why? It isn't touching any on-disk data structure.
regards, tom lane
On 2017-01-18 17:34:56 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
(I also noticed the previous patch should have had a catversion bump :(,
will do after the meeting).

Uh, why? It isn't touching any on-disk data structure.
Forget what I said - I was rushing to a meeting and not thinking
entirely clearly. Was thinking about the new node types and that we now
(de)serialize plans for parallelism - but that's guaranteed to be the
same version.
Andres
I wrote:
I'll try to write something about the SRF-in-CASE issue too. Seeing
whether we can document that adequately seems like an important part
of making the decision about whether we need to block it.
Here's what I came up with:
This behavior also means that set-returning functions will be evaluated
even when it might appear that they should be skipped because of a
conditional-evaluation construct, such as CASE or COALESCE. For example,
consider
SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;
It might seem that this should produce five repetitions of input rows
that have x > 0, and a single repetition of those that do not; but
actually it will produce five repetitions of every input row. This is
because generate_series() is run first, and then the CASE expression is
applied to its result rows. The behavior is thus comparable to
SELECT x, CASE WHEN x > 0 THEN g ELSE 0 END
FROM tab, LATERAL generate_series(1,5) AS g;
It would be exactly the same, except that in this specific example, the
planner could choose to put g on the outside of the nestloop join, since
g has no actual lateral dependency on tab. That would result in a
different output row order. Set-returning functions in the select list
are always evaluated as though they are on the inside of a nestloop join
with the rest of the FROM clause, so that the function(s) are run to
completion before the next row from the FROM clause is considered.
So is this too ugly to live, or shall we put up with it?
regards, tom lane
On 2017-01-18 18:14:26 -0500, Tom Lane wrote:
I wrote:
I'll try to write something about the SRF-in-CASE issue too. Seeing
whether we can document that adequately seems like an important part
of making the decision about whether we need to block it.

Here's what I came up with:
This behavior also means that set-returning functions will be evaluated
even when it might appear that they should be skipped because of a
conditional-evaluation construct, such as CASE or COALESCE. For example,
consider

SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;
It might seem that this should produce five repetitions of input rows
that have x > 0, and a single repetition of those that do not; but
actually it will produce five repetitions of every input row. This is
because generate_series() is run first, and then the CASE expression is
applied to its result rows. The behavior is thus comparable to

SELECT x, CASE WHEN x > 0 THEN g ELSE 0 END
FROM tab, LATERAL generate_series(1,5) AS g;

It would be exactly the same, except that in this specific example, the
planner could choose to put g on the outside of the nestloop join, since
g has no actual lateral dependency on tab. That would result in a
different output row order. Set-returning functions in the select list
are always evaluated as though they are on the inside of a nestloop join
with the rest of the FROM clause, so that the function(s) are run to
completion before the next row from the FROM clause is considered.

So is this too ugly to live, or shall we put up with it?
I'm very tentatively in favor of living with it.
Greetings,
Andres Freund
On Wed, Jan 18, 2017 at 4:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
I'll try to write something about the SRF-in-CASE issue too. Seeing
whether we can document that adequately seems like an important part
of making the decision about whether we need to block it.

Here's what I came up with:
This behavior also means that set-returning functions will be evaluated
even when it might appear that they should be skipped because of a
conditional-evaluation construct, such as CASE or COALESCE. For example,
consider

SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;
It might seem that this should produce five repetitions of input rows
that have x > 0, and a single repetition of those that do not; but
actually it will produce five repetitions of every input row.

So is this too ugly to live, or shall we put up with it?
Disallowing such an unlikely, and unintuitive, corner case sits well with
my sensibilities.
I'd rather fail now and allow for the possibility of future implementation
of the "it might seem that..." behavior.
David J.
On 2017-01-18 16:27:53 -0700, David G. Johnston wrote:
I'd rather fail now and allow for the possibility of future implementation
of the "it might seem that..." behavior.
That's very unlikely to happen.
On Wed, Jan 18, 2017 at 6:19 PM, Andres Freund <andres@anarazel.de> wrote:
SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;
It might seem that this should produce five repetitions of input rows
that have x > 0, and a single repetition of those that do not; but
actually it will produce five repetitions of every input row. This is
because generate_series() is run first, and then the CASE expression is
applied to its result rows. The behavior is thus comparable to

SELECT x, CASE WHEN x > 0 THEN g ELSE 0 END
FROM tab, LATERAL generate_series(1,5) AS g;

It would be exactly the same, except that in this specific example, the
planner could choose to put g on the outside of the nestloop join, since
g has no actual lateral dependency on tab. That would result in a
different output row order. Set-returning functions in the select list
are always evaluated as though they are on the inside of a nestloop join
with the rest of the FROM clause, so that the function(s) are run to
completion before the next row from the FROM clause is considered.

So is this too ugly to live, or shall we put up with it?
I'm very tentatively in favor of living with it.
So, one of the big reasons I use CASE is to avoid evaluating
expressions in cases where they might throw an ERROR. Like, you know:
CASE WHEN d != 0 THEN n / d ELSE NULL END
I guess it's not the end of the world if that only works for
non-set-returning functions, but it's something to think about.
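Robert's guard pattern relies on CASE evaluating its THEN branch lazily. A hypothetical Python sketch (names are illustrative, not from any PostgreSQL API) of why eager evaluation, analogous to the SRF running before the CASE, defeats that guard:

```python
# SQL's CASE normally skips the THEN branch when the condition is false.
# Under the semantics discussed here, a set-returning function in the
# THEN branch behaves more like an eagerly evaluated function argument:
# it runs regardless of the condition.

def case_when(cond, then_value, else_value):
    # Eager: both branch values were already computed by the caller
    # before the condition is ever checked.
    return then_value if cond else else_value

n, d = 10, 0

try:
    case_when(d != 0, n / d, None)  # n / d is evaluated unconditionally
except ZeroDivisionError:
    print("eager evaluation raised")

# A lazy conditional, like SQL's CASE for ordinary expressions,
# never evaluates n / d when d == 0, so the guard works:
result = (n / d) if d != 0 else None
print(result)  # None
```

The eager variant raises before the condition is consulted, which mirrors the concern: wrapping an SRF in CASE would no longer suppress its evaluation.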
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On January 18, 2017 3:59:00 PM PST, Robert Haas <robertmhaas@gmail.com> wrote:
On Wed, Jan 18, 2017 at 6:19 PM, Andres Freund <andres@anarazel.de> wrote:

SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;

It might seem that this should produce five repetitions of input rows
that have x > 0, and a single repetition of those that do not; but
actually it will produce five repetitions of every input row. This is
because generate_series() is run first, and then the CASE expression is
applied to its result rows. The behavior is thus comparable to

SELECT x, CASE WHEN x > 0 THEN g ELSE 0 END
FROM tab, LATERAL generate_series(1,5) AS g;

It would be exactly the same, except that in this specific example, the
planner could choose to put g on the outside of the nestloop join, since
g has no actual lateral dependency on tab. That would result in a
different output row order. Set-returning functions in the select list
are always evaluated as though they are on the inside of a nestloop join
with the rest of the FROM clause, so that the function(s) are run to
completion before the next row from the FROM clause is considered.

So is this too ugly to live, or shall we put up with it?

I'm very tentatively in favor of living with it.

So, one of the big reasons I use CASE is to avoid evaluating
expressions in cases where they might throw an ERROR. Like, you know:

CASE WHEN d != 0 THEN n / d ELSE NULL END

I guess it's not the end of the world if that only works for
non-set-returning functions, but it's something to think about.
That's already not reliable in a bunch of cases, particularly evaluation during planning... Not saying that's good, but it is.
Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Wed, Jan 18, 2017 at 7:00 PM, Andres Freund <andres@anarazel.de> wrote:
So, one of the big reasons I use CASE is to avoid evaluating
expressions in cases where they might throw an ERROR. Like, you know:

CASE WHEN d != 0 THEN n / d ELSE NULL END

I guess it's not the end of the world if that only works for
non-set-returning functions, but it's something to think about.

That's already not reliable in a bunch of cases, particularly evaluation
during planning... Not saying that's good, but it is.
Whee!
:-)
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
So, one of the big reasons I use CASE is to avoid evaluating
expressions in cases where they might throw an ERROR. Like, you know:
CASE WHEN d != 0 THEN n / d ELSE NULL END
I guess it's not the end of the world if that only works for
non-set-returning functions, but it's something to think about.
Well, refusing CASE-containing-SRF at all isn't going to make your
life any better in that regard :-(
It's possibly worth noting that this is also true for aggregates and
window functions: wrapping those in a CASE doesn't stop them from being
evaluated, either. People seem to be generally used to that, although
I think I've seen one or two complaints about it from folks who seemed
unclear on the concept of aggregates.
In the end I think this is mostly about backwards compatibility:
are we sufficiently worried about that that we'd rather throw an
error than have a silent change of behavior? TBH I'm not sure.
We've certainly got two other silent changes of behavior in this
same patch. The argument for treating this one differently,
I think, is that it's changing from a less surprising behavior
to a more surprising one whereas the other changes are the reverse,
or at worst neutral.
regards, tom lane
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 16:56:46 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I have not actually looked at 0003 at all yet. So yeah, please post
for review after you're done rebasing.
Here's a rebased and lightly massaged version.
I've read through this and made some minor improvements, mostly additional
comment cleanup. One thing I wanted to ask about:
@@ -4303,7 +4303,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
* Forget it if the function is not SQL-language or has other showstopper
- * properties. (The nargs check is just paranoia.)
+ * properties. (The nargs and retset checks are just paranoia.)
*/
if (funcform->prolang != SQLlanguageId ||
funcform->prosecdef ||
I thought this change was simply wrong, and removed it; AFAIK it's
perfectly possible to get here for set-returning functions, since
the planner does expression simplification long before it worries
about splitting out SRFs. Did you have a reason to think differently?
Other than that possible point, I think the attached is committable.
regards, tom lane
Attachments:
On 2017-01-19 13:06:20 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2017-01-18 16:56:46 -0500, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
I have not actually looked at 0003 at all yet. So yeah, please post
for review after you're done rebasing.

Here's a rebased and lightly massaged version.
I've read through this and made some minor improvements, mostly additional
comment cleanup.
Thanks!
One thing I wanted to ask about:
@@ -4303,7 +4303,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
/*
 * Forget it if the function is not SQL-language or has other showstopper
- * properties.  (The nargs check is just paranoia.)
+ * properties.  (The nargs and retset checks are just paranoia.)
 */
if (funcform->prolang != SQLlanguageId ||
    funcform->prosecdef ||

I thought this change was simply wrong, and removed it;
Hm. I made that change a while ago. It might have been a holdover from
the old approach, where it'd indeed have been impossible to see any
tSRFs here. Or it might have been because we check
querytree->hasTargetSRFs below (which should prevent inlining a function
that actually returns multiple rows). I agree it's better to leave the
check there. Maybe we ought to remove the paranoia bit about retset
though - it's not paranoia if it has an effect.
Other than that possible point, I think the attached is committable.
Will do so in a bit, after a s/and retset checks are/check is/. And then
fix that big-endian ordering issue.
- Andres
Andres Freund <andres@anarazel.de> writes:
Maybe we ought to remove the paranoia bit about retset
though - it's not paranoia if it has an effect.
Exactly, and I already did that in my version.
regards, tom lane