libpq sendQuery -- getResult not returning until all queries complete
Hello, I'm sending a group of queries to the database with PQsendQuery
and using PQgetResult to return results similar to this:
PQsendQuery( "select current_timestamp; select pg_sleep(1); select current_timestamp" );
while( (result = PQgetResult()) )
    doSomethingWith( result );
I'm finding that PQgetResult() will not return the first result until
all three results are ready. In real life I would be sending
consecutive queries and would like the front end to be processing
one result while the backend is producing the next one. The
documentation for PQgetResult in section 30.4 of the 8.4.5 manual
suggests that this is what should happen.
Can anyone explain whether PQsendQuery/PQgetResult is intended to work
as documented? And if so, what my problem may be? (Perhaps my pg_sleep
is preventing the first result from being written?)
Thanks,
-Kelly
You can't concurrently execute queries from within a single
connection. Perhaps you should use multiple connections, while
understanding the implications of having each operate within a
separate snapshot.
Don't forget to free memory with PQclear(). I guess you omitted that
because it's just pseudo-code.
--
Regards,
Peter Geoghegan
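[For illustration, the multiple-connections approach suggested above might be sketched roughly as below. CONNINFO is a placeholder and error handling is minimal; each connection gets its own backend, so the two queries genuinely overlap.]

```c
#include <stdio.h>
#include <libpq-fe.h>

#define CONNINFO "your info here"   /* placeholder connection string */

int main(void)
{
    /* One connection per concurrent query stream. */
    PGconn *c1 = PQconnectdb(CONNINFO);
    PGconn *c2 = PQconnectdb(CONNINFO);
    if (PQstatus(c1) != CONNECTION_OK || PQstatus(c2) != CONNECTION_OK)
        return 1;

    /* Each query runs in its own backend, so they execute in parallel,
       but note each runs in its own snapshot. */
    PQsendQuery(c1, "select pg_sleep(1)");
    PQsendQuery(c2, "select current_timestamp");

    PGresult *r;
    while ((r = PQgetResult(c2)) != NULL)   /* drain the fast query first */
        PQclear(r);
    while ((r = PQgetResult(c1)) != NULL)
        PQclear(r);

    PQfinish(c1);
    PQfinish(c2);
    return 0;
}
```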
On Tue, Dec 21, 2010 at 2:21 PM, Peter Geoghegan
<peter.geoghegan86@gmail.com> wrote:
> You can't concurrently execute queries from within a single
> connection. Perhaps you should use multiple connections, while
> understanding the implications of having each operate within a
> separate snapshot.
OP is not suggesting that queries run concurrently, but asking why
result sets can't be popped off as the queries resolve. It's a good
question; it's probably either a bug in the database or the
documentation (if it does not turn out to be operator error).
Kelly, if you can produce a small test case in C, I'll double-check it.
merlin
Yes, I omitted the PQclear for simplicity.
I'm not concurrently executing queries, I'm sending multiple queries
to be executed serially by the backend. I'm expecting the server to
send me the PQresult objects as each query completes rather than
sending them all *after* all of the queries have completed.
That will result in some amount of concurrency as my client will be
processing one result while the backend is processing the next query.
-K
On Tue, Dec 21, 2010 at 1:21 PM, Peter Geoghegan
<peter.geoghegan86@gmail.com> wrote:
> You can't concurrently execute queries from within a single
> connection. Perhaps you should use multiple connections, while
> understanding the implications of having each operate within a
> separate snapshot.
>
> Don't forget to free memory with PQclear(). I guess you omitted that
> because it's just pseudo-code.
This should do it:
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define CONNINFO "your info here"
#define COMMANDS "select current_timestamp; select pg_sleep(5); " \
                 "select current_timestamp"

void fatal( const char *msg ) { fprintf( stderr, "%s\n", msg ); exit(1); }

int
main()
{
    PGresult *res = 0;
    PGconn *conn = PQconnectdb( CONNINFO );
    if (!conn) fatal("PQconnectdb returned null");
    if ( PQstatus(conn) != CONNECTION_OK ) {
        PQfinish( conn );
        fatal("PQconnectdb failed");
    }
    if (!PQsendQuery(conn, COMMANDS)) {
        PQfinish( conn );
        fatal("PQsendQuery failed");
    }
    while( (res = PQgetResult( conn )) != 0 ) {
        printf("retrieved result\n");
        PQclear(res);
    }
    PQfinish( conn );
    return 0;
}
On Tue, Dec 21, 2010 at 1:26 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> OP is not suggesting that queries run concurrently, but asking why
> result sets can't be popped off as the queries resolve. It's a good
> question; it's probably either a bug in the database or the
> documentation (if it does not turn out to be operator error).
>
> Kelly, if you can produce a small test case in C I'll double check it.
Kelly Burkhart wrote:
> #define COMMANDS "select current_timestamp; select pg_sleep(5); select
> current_timestamp"

You should use clock_timestamp() instead of current_timestamp, because
current_timestamp returns a fixed value throughout a transaction.
Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite <daniel@manitou-mail.org> wrote:
> You should use clock_timestamp() instead of current_timestamp, because
> current_timestamp returns a fixed value throughout a transaction.
Well, that's correct, but irrelevant -- Kelly's analysis is correct.
The documentation for PQgetResult states:
"Using PQsendQuery and PQgetResult solves one of PQexec's problems: If
a command string contains multiple SQL commands, the results of those
commands can be obtained individually. (This allows a simple form of
overlapped processing, by the way: the client can be handling the
results of one command while the server is still working on later
queries in the same command string.) However, calling PQgetResult will
still cause the client to block until the server completes the next
SQL command. This can be avoided by proper use of two more functions:"
but control is not returned until all three queries have resolved.
this is probably an issue with libpq. investigating...
merlin
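[The "two more functions" the quoted documentation alludes to are PQconsumeInput and PQisBusy; used together they let the client wait on the socket itself instead of blocking inside libpq. A rough sketch only, assuming an open connection on which PQsendQuery has already been called:]

```c
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Drain all pending results without ever blocking inside libpq.
   'conn' must be an open connection with a query already sent via
   PQsendQuery. */
static void drain_results(PGconn *conn)
{
    int sock = PQsocket(conn);

    for (;;) {
        /* Wait until the socket is readable, then feed libpq. */
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(sock, &rd);
        if (select(sock + 1, &rd, NULL, NULL, NULL) < 0)
            return;
        if (!PQconsumeInput(conn))
            return;                    /* connection trouble */

        /* While a complete result is buffered, pop it off. */
        while (!PQisBusy(conn)) {
            PGresult *res = PQgetResult(conn);
            if (res == NULL)
                return;                /* no more results */
            printf("result: %s\n", PQresStatus(PQresultStatus(res)));
            PQclear(res);
        }
        /* PQisBusy returned true: the next result is still incomplete,
           so go back to waiting on the socket. */
    }
}
```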
On Tue, Dec 21, 2010 at 3:14 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
On Tue, Dec 21, 2010 at 3:07 PM, Daniel Verite <daniel@manitou-mail.org> wrote:
Kelly Burkhart wrote:
#define COMMANDS "select current_timestamp; select pg_sleep(5); select
current_timestamp"You should use current_clock() instead of current_timestamp, because
current_timestamp returns a fixed value throughout a transaction.Well, that's correct, but irrelevant -- Kelly's analysis is correct.
The documentation for PQgetResult states:"Using PQsendQuery and PQgetResult solves one of PQexec's problems: If
a command string contains multiple SQL commands, the results of those
commands can be obtained individually. (This allows a simple form of
overlapped processing, by the way: the client can be handling the
results of one command while the server is still working on later
queries in the same command string.) However, calling PQgetResult will
still cause the client to block until the server completes the next
SQL command. This can be avoided by proper use of two more functions:"but control is not returned until all three queries have resolved.
this is probably an issue with libpq. investigating...
hm, it looks like the backend is not flushing the CommandComplete
message for each statement until it finishes all the queries. This is
what signals libpq that a particular command has been executed.
merlin
On Tue, Dec 21, 2010 at 3:37 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> hm, it looks like the backend is not flushing the command complete for
> each command until finishing all the queries. This is what signals
> libpq that a particular command has been executed.
to see this in action, you can interject a query between queries 1 & 2
that sends a lot of data. the 'lots of data' forces query one's protocol
traffic to flush out, which the client handles properly. this is likely a
backend bug -- it needs to force a flush upon command completion?
merlin
On Tue, Dec 21, 2010 at 3:40 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> to see this in action, you can interject a query between queries 1 & 2
> that sends a lot of data. the 'lots of data' forces query one's protocol
> traffic to flush out, which the client handles properly. this is likely a
> backend bug -- it needs to force a flush upon command completion?
hm, a pq_flush() after command completion putmessage in
backend/tcop/dest.c seems to fix the problem. I'll send up a patch to
-hackers. They might backpatch it, unless there is a good reason not
to do this (I can't think of any).
merlin
Merlin Moncure <mmoncure@gmail.com> writes:
> hm, a pq_flush() after command completion putmessage in
> backend/tcop/dest.c seems to fix the problem. I'll send up a patch to
> -hackers. They might backpatch it, unless there is a good reason not
> to do this (I can't think of any).
If you just unconditionally flush there, it will result in an extra
network message in the normal case where there's not another query
to do. The current code is designed not to flush until it sends
ReadyForQuery.
regards, tom lane
On Tue, Dec 21, 2010 at 6:49 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> If you just unconditionally flush there, it will result in an extra
> network message in the normal case where there's not another query
> to do. The current code is designed not to flush until it sends
> ReadyForQuery.
yeah, I was looking at that. I don't see an easy way to test if there
is another query waiting to execute right there. Maybe a
documentation patch is in order :-).
merlin
Merlin Moncure <mmoncure@gmail.com> writes:
> yeah, I was looking at that. I don't see an easy way to test if there
> is another query waiting to execute right there. Maybe a
> documentation patch is in order :-).
dest.c doesn't have the info available. I think that to do this, we'd
need to move the responsibility for calling pq_flush out to postgres.c.
Not sure if it's worth it.
regards, tom lane
I attempted to unsubscribe from this list (for the holidays) without
success.
Could anyone please help me? I am continuing to get messages from the
list.
I broke open the message header and did as it said for unsubscribing.
See below for what the majordomo sent back.
-Will
unsub pgsql-general
**** The unsubscribe command did not succeed.
****
**** No e-mail addresses matching
**** "William Gordon Rutherdale (rutherw)" <rutherw@cisco.com>
**** are subscribed to the pgsql-general mailing list.
****
Valid commands processed: 1
0 succeeded, 0 stalled, and 1 failed.
Use the following command:
sessioninfo ea4ce02c75fbad19425a55ff2bd2bc20b17fd794
to see technical information about this session.
On Tuesday 21 December 2010 4:16:00 pm William Gordon Rutherdale
(rutherw) wrote:
> I attempted to unsubscribe from this list (for the holidays) without
> success. Could anyone please help me. I am continuing to get messages
> from the list. I broke open the message header and did as it said for
> unsubscribing. See below for what the majordomo sent back.
You may want to try the Web link at the bottom of the page and access
your subscription from there. You will need to know the password you
were issued when you joined, though.
--
Adrian Klaver
adrian.klaver@gmail.com
On Tue, Dec 21, 2010 at 6:11 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> dest.c doesn't have the info available. I think that to do this, we'd
> need to move the responsibility for calling pq_flush out to postgres.c.
> Not sure if it's worth it.
So if I understand correctly, a flush will occur when all commands are
completed and prior to completion, whenever PqSendBuffer is full.
Analogous to stdio full vs. line buffering, this is full rather than
result buffering.
It seems to me that is not quite optimal and 'result buffering' would
be better. Did you come to the same conclusion but decide that it's
not 'better enough' to justify polluting postgres.c with a special
flush to satisfy this case?
-K
-----Original Message-----
From: Adrian Klaver [mailto:adrian.klaver@gmail.com]
Sent: 21 December 2010 20:36
To: pgsql-general@postgresql.org
Cc: William Gordon Rutherdale (rutherw)
Subject: Re: [GENERAL] Cannot unsubscribe

> You may want to try the Web link at the bottom of the page and access
> your subscription from there. You will need to know the password you
> were issued when you joined, though.
Unfortunately I haven't a clue what my password is supposed to be.
Clearly something is broken in the system, as it's contradicting itself
by sending me emails but failing to recognise that same email address
when I attempt to unsubscribe. This is not my fault.
I need to have this problem fixed today. This is my last day before the
holidays.
Could you please just fix it and get me off the list.
-Will
On Wed, Dec 22, 2010 at 10:07 AM, Kelly Burkhart
<kelly.burkhart@gmail.com> wrote:
> It seems to me that is not quite optimal and 'result buffering' would
> be better. Did you come to the same conclusion but decide that it's
> not 'better enough' to justify polluting postgres.c with a special
> flush to satisfy this case?
The basic issue is that multiple queries per libpq call is more or
less a misfeature -- it's not worth refactoring the backend protocol
handling to peek ahead to know if it needs to flush on command
complete. Note that you can't use parameterized statements when using
this method, and use of parameterized statements should always be
encouraged.
Maybe there is another way to do what you are trying to do?
merlin
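[For comparison, parameterized statements go through PQexecParams (or PQsendQueryParams), which accepts exactly one command per call. A minimal sketch; the query text and parameter value are illustrative only, and 'conn' is assumed to be an open connection:]

```c
#include <stdio.h>
#include <libpq-fe.h>

int run_lookup(PGconn *conn)
{
    /* One statement, one call; $1 is bound out-of-line, so there are
       no quoting or SQL-injection concerns. */
    const char *params[1] = { "42" };
    PGresult *res = PQexecParams(conn,
                                 "select $1::int + 1",
                                 1,          /* number of parameters */
                                 NULL,       /* let the server infer types */
                                 params,
                                 NULL, NULL, /* lengths/formats: all text */
                                 0);         /* text result format */
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        PQclear(res);
        return -1;
    }
    printf("%s\n", PQgetvalue(res, 0, 0));   /* prints 43 */
    PQclear(res);
    return 0;
}
```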
On Wednesday 22 December 2010 7:18:00 am William Gordon Rutherdale
(rutherw) wrote:
> Unfortunately I haven't a clue what my password is supposed to be.

It would have been in the confirmation reply that you got when you
signed up, on the off chance you still have it :)

> Clearly something is broken in the system, as it's contradicting itself
> by sending me emails but failing to recognise that same email address
> when I attempt to unsubscribe. This is not my fault.

One of those left hand/right hand problems unfortunately.

> I need to have this problem fixed today. This is my last day before the
> holidays. Could you please just fix it and get me off the list.

I am afraid I do not have the access. Your best bet would be to send an
email to the admin: majordomo-owner@postgresql.org
--
Adrian Klaver
adrian.klaver@gmail.com
On Wed, Dec 22, 2010 at 9:18 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> The basic issue is that multiple queries per libpq call is more or
> less a misfeature -- it's not worth refactoring the backend protocol
> handling to peek ahead to know if it needs to flush on command
> complete. Note that you can't use parameterized statements when using
> this method, and use of parameterized statements should always be
> encouraged.
>
> Maybe there is another way to do what you are trying to do?
The real world thing I'm trying to accomplish: We have several GUI
applications that start up and must perform several queries before
displaying anything useful. When the app is (latency wise) far away
from the DB, startup time is noticeably slower. I was experimenting
with the multiple queries per libpq call (which doesn't seem like a
misfeature at all to me) to suggest to the owners of these apps a way
to realize faster startup.
In my particular case, sending multiple queries in one go will help
them regardless of whether they get the results back as they complete
or all at once. The current behavior is surprising, though, given how
the documentation is written. I think I agree that 'result buffering'
is not better enough to justify significant work, on the other hand if
the fix is simple it would IMO be an improvement.
-K