Dynamic result sets support in extended query protocol
I want to progress work on stored procedures returning multiple result
sets. Examples of how this could work on the SQL side have previously
been shown [0]. We also have ongoing work to make psql show multiple
result sets [1]. This appears to work fine in the simple query
protocol. But the extended query protocol doesn't support multiple
result sets at the moment [2]. This would be desirable in order to be
able to use parameter binding, and also because one of the higher-level
goals is to support the use case of stored procedures returning multiple
result sets via JDBC.
[0]: /messages/by-id/4580ff7b-d610-eaeb-e06f-4d686896b93b@2ndquadrant.com
[1]: https://commitfest.postgresql.org/29/2096/
[2]: /messages/by-id/9507.1534370765@sss.pgh.pa.us
(Terminology: I'm calling this project "dynamic result sets", which
includes several concepts: 1) multiple result sets, 2) those result sets
can have different structures, 3) the structure of the result sets is
decided at run time, not declared in the schema/procedure definition/etc.)
One possibility I rejected was to invent a third query protocol besides
the simple and extended ones. This wouldn't really match the
requirements of JDBC and similar APIs, because their query-sending APIs
don't indicate whether dynamic result sets are expected or required;
you only indicate that later by how you process the result sets. So we
really need to use the existing ways of sending off the
queries. Also, avoiding a third query protocol is probably desirable in
general to avoid extra code and APIs.
So here is my sketch on how this functionality could be woven into the
extended query protocol. I'll go through how the existing protocol
exchange works and then point out the additions that I have in mind.
These additions could be enabled by a _pq_ startup parameter sent by the
client. Alternatively, it might also work without that because the
client would just reject protocol messages it doesn't understand, but
that's probably less desirable behavior.
So here is how it goes:
C: Parse
S: ParseComplete
At this point, the server would know whether the statement it has parsed
can produce dynamic result sets. For a stored procedure, this would be
declared with the procedure definition, so when the CALL statement is
parsed, this can be noticed. I don't actually plan any other cases, but
for the sake of discussion, perhaps some variant of EXPLAIN could also
return multiple result sets, and that could also be detected from
parsing the EXPLAIN invocation.
At this point a client would usually do
C: Describe (statement)
S: ParameterDescription
S: RowDescription
The new part is that the server would now also respond with a new message, say,
S: DynamicResultInfo
that indicates that dynamic result sets will follow later. The message
would otherwise be empty. (We could perhaps include the number of
result sets, but this might not actually be useful, and perhaps it's
better not to spend effort on counting things that don't need to be
counted.)
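For concreteness, an empty DynamicResultInfo message would follow the usual v3 framing: a type byte plus an int32 length that counts itself. A minimal sketch in Python; the 'Y' type byte is purely an illustrative placeholder, since no byte has been assigned:

```python
import struct

def dynamic_result_info() -> bytes:
    # Hypothetical empty DynamicResultInfo message.  'Y' is an
    # illustrative, unassigned type byte; the int32 length counts
    # itself, as in all v3 protocol messages.
    body = b""
    return b"Y" + struct.pack("!i", 4 + len(body)) + body
```

With an empty body, the whole message is just the type byte followed by a length of 4.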
(If we don't guard this by a _pq_ startup parameter from the client, an
old client would now error out because of an unexpected protocol message.)
Now the normal bind and execute sequence follows:
C: Bind
S: BindComplete
(C: Describe (portal))
(S: RowDescription)
C: Execute
S: ... (DataRows)
S: CommandComplete
In the case of a CALL with output parameters, this "primary" result set
contains one row with the output parameters (existing behavior).
Now, if the client has seen DynamicResultInfo earlier, it should now go
into a new subsequence to get the remaining result sets, like this
(naming obviously to be refined):
C: NextResult
S: NextResultReady
C: Describe (portal)
S: RowDescription
C: Execute
....
S: CommandComplete
C: NextResult
...
C: NextResult
S: NoNextResult
C: Sync
S: ReadyForQuery
I think this would all have to use the unnamed portal, but perhaps there
could be other uses with named portals. Some details to be worked out.
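To make the shape of that subsequence concrete, here is a sketch of the loop a client library might run. Python, with `send`/`recv` standing in for a real protocol transport; all message names are the placeholder names from above, not actual protocol messages:

```python
def fetch_dynamic_results(send, recv):
    # Drive the NextResult subsequence sketched above until the server
    # reports NoNextResult, then close the sequence out with Sync.
    results = []
    while True:
        send("NextResult")
        if recv() == "NoNextResult":
            break
        # NextResultReady: describe and execute the unnamed portal
        send("Describe")
        desc = recv()            # RowDescription
        send("Execute")
        rows = recv()            # DataRows ... CommandComplete
        results.append((desc, rows))
    send("Sync")
    return results

# Usage against a scripted "server" holding one dynamic result set:
server = ["NextResultReady", "RowDescription", "DataRows", "NoNextResult"]
sent = []
assert fetch_dynamic_results(sent.append, lambda: server.pop(0)) == \
    [("RowDescription", "DataRows")]
assert sent == ["NextResult", "Describe", "Execute", "NextResult", "Sync"]
```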
One could perhaps also do without the DynamicResultInfo message and just
put extra information into the CommandComplete message indicating "there
are more result sets after this one".
(Following the model from the simple query protocol, CommandComplete
really means one result set complete, not the whole top-level command.
ReadyForQuery means the whole command is complete. This is perhaps
debatable, and interesting questions could also arise when considering
what should happen in the simple query protocol when a query string
consists of multiple commands each returning multiple result sets. But
it doesn't really seem sensible to cater to that.)
One thing that's missing in this sequence is a way to specify the
desired output format (text/binary) for each result set. This could be
added to the NextResult message, but at that point the client doesn't
yet know the number of columns in the result set, so we could only do it
globally. Then again, since the result sets are dynamic, it's less
likely that a client would be coded to set per-column output codes.
On the other hand, I would hate to bake such a restriction into the
protocol, because someone is going to try. (I suspect what would be more useful in
practice is to designate output formats per data type.) So if we wanted
to have this fully featured, it might have to look something like this:
C: NextResult
S: NextResultReady
C: Describe (dynamic) (new message subkind)
S: RowDescription
C: Bind (zero parameters, optionally format codes)
S: BindComplete
C: Describe (portal)
S: RowDescription
C: Execute
...
While this looks more complicated, client libraries could reuse existing
code that starts processing with a Bind message and continues to
CommandComplete, and then just loops back around.
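As a sketch of what that reuse could look like (Python again, transport and message names as placeholder assumptions; the optional Describe (portal) step after Bind is folded into a comment):

```python
def fetch_with_formats(send, recv, choose_formats):
    # Fully featured variant: after NextResultReady, describe the
    # upcoming result set first, derive per-column format codes from
    # the RowDescription, and only then bind and execute.
    results = []
    while True:
        send("NextResult")
        if recv() == "NoNextResult":
            break
        send("Describe(dynamic)")             # new message subkind
        desc = recv()                         # RowDescription
        send(("Bind", choose_formats(desc)))  # zero parameters
        recv()                                # BindComplete
        # (a Describe (portal) / RowDescription pair could go here)
        send("Execute")
        results.append((desc, recv()))        # DataRows ... CommandComplete
    send("Sync")
    return results
```

From Bind onward, each iteration is exactly the existing Bind-to-CommandComplete processing, so the only genuinely new client code is the outer loop.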
The mapping of this to libpq in a simple case could look like this:
PQsendQueryParams(conn, "CALL ...", ...);
PQgetResult(...); // gets output parameters
PQnextResult(...); // new: sends NextResult+Bind
PQgetResult(...); // and repeat
Again, it's not clear here how to declare the result column output
formats. Since libpq doesn't appear to expose the Bind message
separately, I'm not sure what to do here.
In JDBC, the NextResult message would correspond to the
Statement.getMoreResults() method. It will need a bit of conceptual
adjustment, because the first result set sent on the protocol is
actually the output parameters, which the JDBC API returns separately
from a ResultSet. So the initial CallableStatement.execute() call will
need to process the primary result set, then send NextResult and obtain
the first dynamic result as the first ResultSet for its API; but that
can be handled internally.
Thoughts so far?
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Are you proposing to bump up the protocol version (either major or
minor)? I am asking because it seems you are going to introduce some
new message types.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
On 2020-10-08 10:23, Tatsuo Ishii wrote:
Are you proposing to bump up the protocol version (either major or
minor)? I am asking because it seems you are going to introduce some
new message types.
It wouldn't be a new major version. It could either be a new minor
version, or it would be guarded by a _pq_ protocol message to enable
this functionality from the client, as described. Or both? We haven't
done this sort of thing a lot, so some discussion on the details might
be necessary.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 10/8/20 3:46 AM, Peter Eisentraut wrote:
Exciting stuff. But I'm a bit concerned about the sequence of
resultsets. The JDBC docco for CallableStatement says:
A CallableStatement can return one ResultSet object or multiple
ResultSet objects. Multiple ResultSet objects are handled using
operations inherited from Statement.
For maximum portability, a call's ResultSet objects and update
counts should be processed prior to getting the values of output
parameters.
And this is more or less in line with the pattern that I've seen when
converting SPs from other systems - the OUT params are usually set at
the end with things like status flags and error messages.
If the OUT parameter resultset has to come first (which is how I read
your proposal - please correct me if I'm wrong) we'll have to stack up
all the resultsets until the SP returns, then send the OUT params, then
send the remaining resultsets. That seems ... suboptimal. The
alternative would be to send the OUT params last. That might result in
the driver needing to do some lookahead and caching, but I don't think
it's unmanageable. Of course, your protocol would also need changing.
cheers
andrew
--
Andrew Dunstan
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, 9 Oct 2020 at 13:33, Andrew Dunstan <andrew@dunslane.net> wrote:
Currently the JDBC driver does NOT do:
At this point a client would usually do
C: Describe (statement)
S: ParameterDescription
S: RowDescription
We do not issue the Describe unless we use a named statement and decide
that the extra round trip is worth it.
Making this assumption will cause a performance regression on all queries.
If we are going to make a protocol change there are a number of other
things the drivers want.
https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md
Thanks,
Dave
Hi,
On 2020-10-08 09:46:38 +0200, Peter Eisentraut wrote:
New would be that the server would now also respond with a new message, say,
S: DynamicResultInfo
Now, if the client has seen DynamicResultInfo earlier, it should now go into
a new subsequence to get the remaining result sets, like this (naming
obviously to be refined):
Hm. Isn't this going to be a lot more latency sensitive than we'd like?
This would basically require at least one additional roundtrip for
everything that *potentially* could return multiple result sets, even if
no additional results are returned, right? And it'd add at least one
additional roundtrip for every result set that's actually sent.
Is there really a good reason for forcing the client to issue
NextResult, Describe, Execute for each of the dynamic result sets? It's
not like there's really a case for allowing the clients to skip them,
right? Why aren't we sending something more like
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
...
S: CommandComplete
C: Sync
gated by a _pq_ parameter, of course.
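[Editorial note: the single-round-trip sequence proposed above can be sketched as a client-side grouping loop. This is an illustration in Python, not an existing API; the message names are the ones proposed in this thread, and the stream is simulated as (type, payload) tuples.]

```python
# Group a flat server message stream into result sets, per the proposed
# sequence: each result set begins at a RowDescription, and
# CommandPartiallyComplete merely separates result sets.
def split_result_sets(messages):
    result_sets = []
    current = None
    for msg_type, payload in messages:
        if msg_type == "RowDescription":
            current = {"columns": payload, "rows": []}
            result_sets.append(current)
        elif msg_type == "DataRow":
            current["rows"].append(payload)
        elif msg_type == "CommandComplete":
            break
        # CommandPartiallyComplete needs no action: the following
        # RowDescription already marks the boundary.
    return result_sets

# A simulated stream matching the proposal above.
stream = [
    ("RowDescription", ["a"]),
    ("DataRow", [1]),
    ("CommandPartiallyComplete", None),
    ("RowDescription", ["b", "c"]),
    ("DataRow", [2, 3]),
    ("CommandComplete", None),
]
sets = split_result_sets(stream)
```

Note that in this sketch the client never has to issue NextResult/Describe/Execute between result sets, which is the latency point being made.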
I think this would all have to use the unnamed portal, but perhaps there
could be other uses with named portals. Some details to be worked out.
Which'd avoid this too, but:
One thing that's missing in this sequence is a way to specify the desired
output format (text/binary) for each result set.
Is a good point. I personally think avoiding the back and forth is more
important though. But if we could address both at the same time...
(I suspect what would be more useful in practice is to designate
output formats per data type.)
Yea, that'd be *really* useful. It sucks that we basically require
multiple round trips to make realistic use of the binary data for the
few types where it's a huge win (e.g. bytea).
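[Editorial note: the bytea case is easy to quantify. In text format, bytea is sent hex-encoded ("\x" followed by two hex digits per byte), so the wire size roughly doubles and the client must decode it back. A small Python illustration:]

```python
# Why binary transfer is a big win for bytea: the text format carries a
# hex encoding, roughly doubling the payload on the wire.
payload = bytes(range(256)) * 4        # 1024 bytes of raw data
text_form = "\\x" + payload.hex()      # what the text format would carry
binary_form = payload                  # what the binary format carries

overhead = len(text_form) / len(binary_form)  # slightly over 2x
```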
Greetings,
Andres Freund
Hi,
On Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2020-10-08 09:46:38 +0200, Peter Eisentraut wrote:
New would be that the server would now also respond with a new message,
say,
S: DynamicResultInfo
Now, if the client has seen DynamicResultInfo earlier, it should now go
into
a new subsequence to get the remaining result sets, like this (naming
obviously to be refined):
Hm. Isn't this going to be a lot more latency sensitive than we'd like?
This would basically require at least one additional roundtrip for
everything that *potentially* could return multiple result sets, even if
no additional results are returned, right? And it'd add at least one
additional roundtrip for every result set that's actually sent.
Agreed as mentioned.
Is there really a good reason for forcing the client to issue
NextResult, Describe, Execute for each of the dynamic result sets? It's
not like there's really a case for allowing the clients to skip them,
right? Why aren't we sending something more like
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
...
S: CommandComplete
C: Sync
gated by a _pq_ parameter, of course.
I think this would all have to use the unnamed portal, but perhaps there
could be other uses with named portals. Some details to be worked out.
Which'd avoid this too, but:
One thing that's missing in this sequence is a way to specify the desired
output format (text/binary) for each result set.
Is a good point. I personally think avoiding the back and forth is more
important though. But if we could address both at the same time...
(I suspect what would be more useful in practice is to designate
output formats per data type.)
Yea, that'd be *really* useful. It sucks that we basically require
multiple round trips to make realistic use of the binary data for the
few types where it's a huge win (e.g. bytea).
Yes!!! Ideally in the startup message.
Dave
Hi,
On 2020-10-09 14:49:11 -0400, Dave Cramer wrote:
On Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:
(I suspect what would be more useful in practice is to designate
output formats per data type.)Yea, that'd be *really* useful. It sucks that we basically require
multiple round trips to make realistic use of the binary data for the
few types where it's a huge win (e.g. bytea).
Yes!!! Ideally in the startup message.
I don't think startup is a good choice. For one, it's size limited. But
more importantly, before having successfully established a connection,
there's really no way the driver can know which types it should list as
to be sent in binary (consider e.g. some postgis types, which'd greatly
benefit from being sent in binary, but also just version dependent
stuff).
The hard part around this really is whether and how to deal with changes
in type definitions. From types just being created - comparatively
simple - to extensions being dropped and recreated, with oids
potentially being reused.
Greetings,
Andres Freund
On Fri, 9 Oct 2020 at 14:59, Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2020-10-09 14:49:11 -0400, Dave Cramer wrote:
On Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:
(I suspect what would be more useful in practice is to designate
output formats per data type.)Yea, that'd be *really* useful. It sucks that we basically require
multiple round trips to make realistic use of the binary data for the
few types where it's a huge win (e.g. bytea).
Yes!!! Ideally in the startup message.
I don't think startup is a good choice. For one, it's size limited. But
more importantly, before having successfully established a connection,
there's really no way the driver can know which types it should list as
to be sent in binary (consider e.g. some postgis types, which'd greatly
benefit from being sent in binary, but also just version dependent
stuff).
For the most part we know exactly which types we want in binary for 99% of
queries.
The hard part around this really is whether and how to deal with changes
in type definitions. From types just being created - comparatively
simple - to extensions being dropped and recreated, with oids
potentially being reused.
Fair point, but this is going to be much more complex than just sending most
of the results in binary, which would speed up the overwhelming majority of
queries.
Dave Cramer
On 2020-10-09 21:02, Dave Cramer wrote:
For the most part we know exactly which types we want in binary for 99%
of queries.
The hard part around this really is whether and how to deal with changes
in type definitions. From types just being created - comparatively
simple - to extensions being dropped and recreated, with oids
potentially being reused.
Fair point but this is going to be much more complex than just sending
most of the results in binary which would speed up the overwhelming
majority of queries
I've been studying in more detail how the JDBC driver handles binary
format use. Having some kind of message "use binary for these types"
would match its requirements quite exactly. (I have also studied
npgsql, but it appears to work quite differently. More input from there
and other places with similar requirements would be welcome.) The
question as mentioned above is how to deal with type changes. Let's
work through a couple of options.
We could send the type/format list with every query. For example, we
could extend/enhance/alter the Bind message so that instead of a
format-per-column it sends a format-per-type. But then you'd need to
send the complete type list every time. The JDBC driver currently has
20+ types already hardcoded and more optionally, so you'd send 100+
bytes for every query, plus required effort for encoding and decoding.
That seems unattractive.
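[Editorial note: the "100+ bytes for every query" figure checks out. A back-of-envelope sketch in Python, assuming a hypothetical per-type format list encoded as (int32 type OID, int16 format code) pairs preceded by an int16 count, i.e. 6 bytes per type. The OIDs below are real pg_type OIDs; the message layout itself is an assumption for illustration.]

```python
import struct

def encode_type_formats(pairs):
    # int16 count, then (int32 oid, int16 format) per entry
    buf = struct.pack("!H", len(pairs))
    for oid, fmt in pairs:
        buf += struct.pack("!IH", oid, fmt)
    return buf

# ~20 types a JDBC-like driver might register for binary (format code 1):
# bool, bytea, int8, int2, int4, text, float4, float8, varchar, date,
# timestamp, timestamptz, interval, bit, numeric, uuid, json, jsonb,
# point, time.
oids = [16, 17, 20, 21, 23, 25, 700, 701, 1043, 1082,
        1114, 1184, 1186, 1560, 1700, 2950, 114, 3802, 600, 1083]
wire = encode_type_formats([(oid, 1) for oid in oids])
```

At 6 bytes per type, 20 types already cost 122 bytes per query, which is why sending the list with every Bind is unattractive.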
Or we send the type/format list once near the beginning of the session.
Then we need to deal with types being recreated or updated etc.
The first option is that we "lock" the types against changes (ignoring
whether that's actually possible right now). That would mean you
couldn't update an affected type/extension while a JDBC session is
active. That's no good. (Imagine connection pools with hours of server
lifetime.)
Another option is that we invalidate the session when a thus-registered
type changes. Also no good. (We don't want an extension upgrade
suddenly breaking all open connections.)
Finally, we could do it on a best-effort basis. We use binary format
for registered types, until there is some invalidation event for the
type, at which point we revert to default/text format until the end of a
session (or until another protocol message arrives re-registering the
type). This should work, because the result row descriptor contains the
actual format type, and there is no guarantee that it's the same one
that was requested.
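[Editorial note: the best-effort behavior described above can be sketched as simple bookkeeping. Python for illustration only; the class and method names are invented, not a proposed API.]

```python
class FormatRegistry:
    """Best-effort binary-format registry: registered types are sent in
    binary until an invalidation event, after which they quietly revert
    to text until re-registered."""
    TEXT, BINARY = 0, 1

    def __init__(self):
        self.binary_oids = set()

    def register(self, oids):
        self.binary_oids.update(oids)

    def invalidate(self, oid):
        # Type was altered or recreated: fall back to text, no error.
        self.binary_oids.discard(oid)

    def result_format(self, oid):
        # The RowDescription always reports the format actually used,
        # so the client never has to guess.
        return self.BINARY if oid in self.binary_oids else self.TEXT

reg = FormatRegistry()
reg.register([17, 1114])   # e.g. bytea, timestamp
reg.invalidate(1114)       # type definition changed mid-session
```

The key property is the last comment: because each result row descriptor carries the actual format, the client-side decoder keys off the descriptor, not off what it requested.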
So how about that last option? I imagine a new protocol message, say,
TypeFormats, that contains a number of type/format pairs. The message
would typically be sent right after the first ReadyForQuery, gets no
response. It could also be sent at any other time, but I expect that to
be less used in practice. Binary format is used for registered types if
they have binary format support functions, otherwise text continues to
be used. There is no error response for types without binary support.
(There should probably be an error response for registering a type that
does not exist.)
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, 20 Oct 2020 at 05:57, Peter Eisentraut <
peter.eisentraut@2ndquadrant.com> wrote:
On 2020-10-09 21:02, Dave Cramer wrote:
For the most part we know exactly which types we want in binary for 99%
of queries.The hard part around this really is whether and how to deal with
changes
in type definitions. From types just being created - comparatively
simple - to extensions being dropped and recreated, with oids
potentially being reused.
Fair point but this is going to be much more complex than just sending
most of the results in binary which would speed up the overwhelming
majority of queries
I've been studying in more detail how the JDBC driver handles binary
format use. Having some kind of message "use binary for these types"
would match its requirements quite exactly. (I have also studied
npgsql, but it appears to work quite differently. More input from there
and other places with similar requirements would be welcome.) The
question as mentioned above is how to deal with type changes. Let's
work through a couple of options.
I've added Vladimir (pgjdbc), Shay (npgsql) and Mark Paluch (r2dbc) to
this discussion.
I'm sure there are others but I'm not acquainted with them
We could send the type/format list with every query. For example, we
could extend/enhance/alter the Bind message so that instead of a
format-per-column it sends a format-per-type. But then you'd need to
send the complete type list every time. The JDBC driver currently has
20+ types already hardcoded and more optionally, so you'd send 100+
bytes for every query, plus required effort for encoding and decoding.
That seems unattractive.
Or we send the type/format list once near the beginning of the session.
Then we need to deal with types being recreated or updated etc.
The first option is that we "lock" the types against changes (ignoring
whether that's actually possible right now). That would mean you
couldn't update an affected type/extension while a JDBC session is
active. That's no good. (Imagine connection pools with hours of server
lifetime.)
Another option is that we invalidate the session when a thus-registered
type changes. Also no good. (We don't want an extension upgrade
suddenly breaking all open connections.)
Agreed the first 2 options are not viable.
Finally, we could do it on a best-effort basis. We use binary format
for registered types, until there is some invalidation event for the
type, at which point we revert to default/text format until the end of a
session (or until another protocol message arrives re-registering the
type).
Does the driver tell the server what registered types it wants in binary?
This should work, because the result row descriptor contains the
actual format type, and there is no guarantee that it's the same one
that was requested.
So how about that last option? I imagine a new protocol message, say,
TypeFormats, that contains a number of type/format pairs. The message
would typically be sent right after the first ReadyForQuery, gets no
response.
This seems a bit hard to control. How long do you wait for no response?
It could also be sent at any other time, but I expect that to
be less used in practice. Binary format is used for registered types if
they have binary format support functions, otherwise text continues to
be used. There is no error response for types without binary support.
(There should probably be an error response for registering a type that
does not exist.)
I'm not sure we (pgjdbc) want all types with binary support functions sent
automatically. Turns out that decoding binary is sometimes slower than
decoding the text, and the on-wire overhead isn't significant.
Timestamps/dates with timezone are also interesting as the binary output
does not include the timezone.
The notion of a status change message is appealing however. I used the term
status change on purpose as there are other server changes we would like to
be made aware of. For instance if someone changes the search path, we would
like to know. I'm sort of expanding the scope here but if we are imagining
... :)
Dave
Very interesting conversation, thanks for including me Dave. Here are some
thoughts from the Npgsql perspective,
Re the binary vs. text discussion... A long time ago, Npgsql became a
"binary-only" driver, meaning that it never sends or receives values in
text encoding, and practically always uses the extended protocol. This was
because in most (all?) cases, encoding/decoding binary is more efficient,
and maintaining two encoders/decoders (one for text, one for binary) made
less and less sense. So by default, Npgsql just requests "all binary" in
all Bind messages it sends (there's an API for the user to request text, in
which case they get pure strings which they're responsible for parsing).
Binary handling is implemented for almost all PG types which support it,
and I've hardly seen any complaints about this for the last few years. I'd
be interested in any arguments against this decision (Dave, when have you
seen that decoding binary is slower than decoding text?).
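[Editorial note: the "all binary" request Npgsql describes relies on an existing Bind feature: the trailing result-format section of Bind may carry a single format code that applies to every result column. A minimal Python sketch of just that trailing section (not a complete Bind message):]

```python
import struct

# Bind result-format section: int16 count of format codes, then the codes.
# A count of 1 applies that single code to all result columns;
# a count of 0 means the default, text, for all columns.
all_binary = struct.pack("!HH", 1, 1)  # one code, value 1 (binary)
all_text = struct.pack("!H", 0)        # zero codes: everything text
```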
Given the above, allowing the client to specify in advance which types
should be in binary sounds good, but wouldn't help Npgsql much (since by
default it already requests binary for everything). It would slightly help
in allowing binary-unsupported types to automatically come back as text
without manual user API calls, but as I wrote above this is an extremely
rare scenario that people don't care much about.
Is there really a good reason for forcing the client to issue NextResult,
Describe, Execute for each of the dynamic result sets?
I very much agree - it should be possible to execute a procedure and
consume all results in a single roundtrip, otherwise this is quite a perf
killer.
Peter, from your original message:
Following the model from the simple query protocol, CommandComplete
really means one result set complete, not the whole top-level command.
ReadyForQuery means the whole command is complete. This is perhaps
debatable, and interesting questions could also arise when considering what
should happen in the simple query protocol when a query string consists of
multiple commands each returning multiple result sets. But it doesn't
really seem sensible to cater to that
Npgsql implements batching of multiple statements via the extended protocol
in a similar way. In other words, the .NET API allows users to pack
multiple SQL statements and execute them in one roundtrip, and Npgsql does
this by sending
Parse1/Bind1/Describe1/Execute1/Parse2/Bind2/Describe2/Execute2/Sync. So
CommandComplete signals completion of a single statement in the batch,
whereas ReadyForQuery signals completion of the entire batch. This means
that the "interesting questions" mentioned above are possibly relevant to
the extended protocol as well.
Regarding decoding binary vs text performance: There can be a significant
performance cost to fetching the binary format over the text format for
types such as text. See
/messages/by-id/CAMovtNoHFod2jMAKQjjxv209PCTJx5Kc66anwWvX0mEiaXwgmA@mail.gmail.com
for the previous discussion.
From the pgx driver (https://github.com/jackc/pgx) perspective:
A "use binary for these types" message sent once at the beginning of the
session would not only be helpful for dynamic result sets but could
simplify use of the extended protocol in general.
Upthread someone posted a pgjdbc page detailing desired changes to the
backend protocol (
https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md).
I concur with almost everything there, but in particular the first
suggestion of the backend automatically converting binary values like it
does text values would be huge. That combined with the "use binary for
these types" message could greatly simplify the driver side work in using
the binary format.
CommandComplete vs ReadyForQuery -- pgx does the same as Npgsql in that it
batches multiple queries together in the extended protocol and uses
CommandComplete for statement completion and ReadyForQuery for batch
completion.
Hi,
On 2020-10-20 18:55:41 -0500, Jack Christensen wrote:
Upthread someone posted a page pgjdbc detailing desired changes to the
backend protocol (
https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md).
A lot of the stuff on there seems way beyond what can be achieved in
something incrementally added to the protocol. Fair enough in an article
about "v4" of the protocol. But I don't think we are - nor should we be
- talking about a full new protocol version here. Instead we are talking
about extending the protocol, where the extensions are opt-in.
Greetings,
Andres Freund
On Tue, 20 Oct 2020 at 20:09, Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2020-10-20 18:55:41 -0500, Jack Christensen wrote:
Upthread someone posted a page pgjdbc detailing desired changes to the
backend protocol (https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md
).
A lot of the stuff on there seems way beyond what can be achieved in
something incrementally added to the protocol. Fair enough in an article
about "v4" of the protocol. But I don't think we are - nor should we be
- talking about a full new protocol version here. Instead we are talking
about extending the protocol, where the extensions are opt-in.
You are correct, we are not talking about a whole new protocol, but why not?
Seems to me we would have a lot more latitude to get it right if we didn't
have this limitation.
Dave
Hi,
On 2020-10-20 20:17:45 -0400, Dave Cramer wrote:
You are correct, we are not talking about a whole new protocol, but why not?
Seems to me we would have a lot more latitude to get it right if we didn't
have this limitation.
A new protocol will face a much bigger adoption hurdle, and there's much
stuff that we'll want to do that we'll have a hard time ever getting off
the ground. Whereas opt-in extensions are much easier to get off the ground.
Greetings,
Andres Freund
On 2020-10-20 12:24, Dave Cramer wrote:
Finally, we could do it on a best-effort basis. We use binary format
for registered types, until there is some invalidation event for the
type, at which point we revert to default/text format until the end
of a
session (or until another protocol message arrives re-registering the
type).
Does the driver tell the server what registered types it wants in binary?
Yes, the driver tells the server, "whenever you send these types, send
them in binary" (all other types keep sending in text).
This should work, because the result row descriptor contains the
actual format type, and there is no guarantee that it's the same one
that was requested.
So how about that last option? I imagine a new protocol message, say,
TypeFormats, that contains a number of type/format pairs. The message
would typically be sent right after the first ReadyForQuery, gets no
response.
This seems a bit hard to control. How long do you wait for no response?
In this design, you don't need a response.
It could also be sent at any other time, but I expect that to
be less used in practice. Binary format is used for registered
types if
they have binary format support functions, otherwise text continues to
be used. There is no error response for types without binary support.
(There should probably be an error response for registering a type that
does not exist.)
I'm not sure we (pgjdbc) want all types with binary support functions
sent automatically. Turns out that decoding binary is sometimes slower
than decoding the text and the on wire overhead isn't significant.
Timestamps/dates with timezone are also interesting as the binary output
does not include the timezone.
In this design, you pick the types you want.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 2020-10-09 20:46, Andres Freund wrote:
Is there really a good reason for forcing the client to issue
NextResult, Describe, Execute for each of the dynamic result sets? It's
not like there's really a case for allowing the clients to skip them,
right? Why aren't we sending something more like
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
...
S: CommandComplete
C: Sync
I want to post my current patch, to keep this discussion moving. There
are still a number of pieces to pull together, but what I have is a
self-contained functioning prototype.
The interesting thing about the above message sequence is that the
"CommandPartiallyComplete" isn't actually necessary. Since an Execute
message normally does not issue a RowDescription response, the
appearance of one is already enough to mark the beginning of a new
result set. Moreover, libpq already handles this correctly, so we
wouldn't need to change it at all.
We might still want to add a new protocol message, for clarity perhaps,
and that would probably only be a few lines of code on either side, but
that would only serve for additional error checking and wouldn't
actually be needed to identify what's going on.
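[Editorial note: the observation above -- that an Execute normally produces no RowDescription, so the appearance of one already marks a new result set -- can be sketched as a small client-side loop. Python, illustrative only; messages are simulated as (type, payload) tuples, and the first result set's columns are assumed to come from the earlier Describe.]

```python
def collect_dynamic_results(first_columns, messages):
    """Consume messages following an Execute.  Execute normally yields
    only DataRow/CommandComplete; any RowDescription therefore opens a
    fresh (dynamic) result set."""
    results = [{"columns": first_columns, "rows": []}]
    current = results[0]
    for msg_type, payload in messages:
        if msg_type == "RowDescription":
            current = {"columns": payload, "rows": []}
            results.append(current)
        elif msg_type == "DataRow":
            current["rows"].append(payload)
        # CommandComplete just closes the current result set.
    return results

after_execute = [
    ("DataRow", [1]),
    ("CommandComplete", None),
    ("RowDescription", ["x", "y"]),  # start of a dynamic result set
    ("DataRow", [2, 3]),
    ("CommandComplete", None),
]
results = collect_dynamic_results(["a"], after_execute)
```

This matches the claim that no new message type is strictly needed: the boundary is unambiguous from the existing messages alone.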
What else we need:
- Think about what should happen if the Execute message specifies a row
count, and what should happen during subsequent Execute messages on the
same portal. I suspect that there isn't a particularly elegant answer,
but we need to pick some behavior.
- Some way for psql to display multiple result sets. Proposals have been
made in [0] and [1]. (You need either patch or one like it for the
regression tests in this patch to pass.)
- Session-level default result formats setting, proposed in [2]. Not
strictly necessary, but would be most sensible to coordinate these two.
- We don't have a way to test the extended query protocol. I have
attached my test program, but we might want to think about something
more permanent. Proposals for this have already been made in [3].
- Right now, this only supports returning dynamic result sets from a
top-level CALL. Specifications for passing dynamic result sets from one
procedure to a calling procedure exist in the SQL standard and could be
added later.
(All the SQL additions in this patch are per SQL standard. DB2 appears
to be the closest existing implementation.)
[0]: /messages/by-id/4580ff7b-d610-eaeb-e06f-4d686896b93b@2ndquadrant.com
[1]: https://commitfest.postgresql.org/29/2096/
[2]: https://commitfest.postgresql.org/31/2812/
[3]: /messages/by-id/4f733cca-5e07-e167-8b38-05b5c9066d04@2ndQuadrant.com
--
Peter Eisentraut
2ndQuadrant, an EDB company
https://www.2ndquadrant.com/
Attachments:
v1-0001-Dynamic-result-sets-from-procedures.patch
test-extended-dynres.c
Hi Peter,
On 12/30/20 9:33 AM, Peter Eisentraut wrote:
On 2020-10-09 20:46, Andres Freund wrote:
Is there really a good reason for forcing the client to issue
NextResult, Describe, Execute for each of the dynamic result sets? It's
not like there's really a case for allowing the clients to skip them,
right? Why aren't we sending something more like
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
...
S: CommandComplete
C: Sync
I want to post my current patch, to keep this discussion moving.
CFBot reports that tests are failing, although the patch applies.
Also, you dropped all the driver authors from the thread. Not sure if
that was intentional, but you might want to add them back if you need
their input.
Regards,
--
-David
david@pgmasters.net
On 15.03.21 14:56, David Steele wrote:
Hi Peter,
On 12/30/20 9:33 AM, Peter Eisentraut wrote:
On 2020-10-09 20:46, Andres Freund wrote:
Is there really a good reason for forcing the client to issue
NextResult, Describe, Execute for each of the dynamic result sets? It's
not like there's really a case for allowing the clients to skip them,
right? Why aren't we sending something more like
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
S: CommandPartiallyComplete
S: RowDescription
S: DataRow...
...
S: CommandComplete
C: Sync
I want to post my current patch.
CFBot reports that tests are failing, although the patch applies.
Yes, as explained in the message, you need another patch that makes psql
show the additional result sets. The cfbot cannot handle that kind of
thing.
In the meantime, I have made a few small fixes, so I'm attaching another
patch.