Re: I want to change libpq and libpgtcl for better handling of large query

From: Hannu Krosing <hannu@tm.ee>
List: pgsql-hackers

Constantin Teodorescu <teo@flex.ro> wrote:

> Peter T Mount wrote:
> > The only solution I was able to give was for them to use cursors,
> > and fetch the result in chunks.
>
> Got it!!!
>
> Seems everyone has 'voted' for using cursors.

As I saw it, the cursors were suggested as a replacement for opening a
separate connection, not as a substitute for row-level callbacks, which
would be very nice to have and which could probably be implemented in a
backward-compatible manner anyhow.

> As a matter of fact, I have tested both a
>
>     BEGIN; DECLARE CURSOR; FETCH N; END;
>
> and a
>
>     SELECT FROM
>
> Both of them lock the tables they use against writes until the end of
> processing.
>
> Fetching records in chunks (of 100) would speed up the processing a
> little.
>
> But I am still convinced that if the frontend were able to process
> tuples as soon as they arrive, the overall time to process a big table
> would be lower.  Fetching in chunks, the frontend waits for the 100
> records to come (time A) and then processes them (time B); A and B
> cannot be overlapped.
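
For concreteness, the chunked approach above looks roughly like this
through libpq (the table name and conninfo string are placeholders, and
error handling is stripped down to keep the shape visible):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");  /* placeholder */
        PGresult *res;
        int       i;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
            "DECLARE big_cur CURSOR FOR SELECT * FROM mytable"));

        for (;;)
        {
            /* time A: wait for the next 100 rows to arrive */
            res = PQexec(conn, "FETCH 100 FROM big_cur");
            if (PQntuples(res) == 0)
            {
                PQclear(res);
                break;
            }
            /* time B: process the chunk (here: print column 0) */
            for (i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE big_cur"));
        PQclear(PQexec(conn, "END"));
        PQfinish(conn);
        return 0;
    }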

Perhaps you could overlap A2 and B1, by sending the request for the next
100 rows and only then processing the first 100.

Still, I think that using callbacks for special cases would be more
efficient, and also more "symmetric" with what the backend does.
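
Something like the following could get that overlap, using libpq's
asynchronous calls (PQsendQuery/PQgetResult); the cursor name is a
placeholder and error handling is again omitted:

    #include <stdio.h>
    #include <libpq-fe.h>

    void fetch_overlapped(PGconn *conn)
    {
        /* assumes "big_cur" was already declared in a transaction */
        PGresult *cur = PQexec(conn, "FETCH 100 FROM big_cur");
        int       i;

        while (PQntuples(cur) > 0)
        {
            /* A2: ask the backend for the next chunk right away
             * (return-value check omitted in this sketch) */
            PQsendQuery(conn, "FETCH 100 FROM big_cur");

            /* B1: process the chunk we already have while the next
             * one is being produced and sent over the wire */
            for (i = 0; i < PQntuples(cur); i++)
                printf("%s\n", PQgetvalue(cur, i, 0));
            PQclear(cur);

            /* pick up the next chunk (blocks only if it has not
             * fully arrived yet) ... */
            cur = PQgetResult(conn);
            /* ... and drain the NULL that ends the result set */
            while (PQgetResult(conn) != NULL)
                ;
        }
        PQclear(cur);
    }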

> Thanks a lot for helping me to decide. Reports in PgAccess will use
> cursors.

I still urge you to add callbacks to libpq and libpgtcl.

The way I see it, it would be one additional function that sets the
callback (or resets it, if given NULL); a sketch of what I mean is
below.
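
Purely as an illustration (the name and signature here are invented;
nothing like this exists in libpq today):

    /* hypothetical addition, NOT a current libpq function */
    typedef void (*PQtupleHandler)(PGresult *res, int tupno, void *arg);

    /* Install a per-tuple callback on the connection; pass NULL to
     * remove it again.  While set, tuples would be handed to the
     * callback as they arrive, instead of piling up in the PGresult. */
    void PQsetTupleHandler(PGconn *conn, PQtupleHandler func, void *arg);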

BTW, are you sure that you can't do something similar using the current
libpq?
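
A thin wrapper over the cursor loop already gives callers a per-tuple
callback interface; the helper below is made up, but everything it
calls is existing libpq:

    #include <stdio.h>
    #include <libpq-fe.h>

    typedef void (*tuple_func)(PGresult *res, int tupno, void *arg);

    /* run query through a cursor, calling cb once per tuple;
     * error handling and buffer-length checks omitted for brevity */
    void for_each_tuple(PGconn *conn, const char *query,
                        tuple_func cb, void *arg)
    {
        char      buf[8192];
        PGresult *res;
        int       i;

        PQclear(PQexec(conn, "BEGIN"));
        sprintf(buf, "DECLARE cb_cur CURSOR FOR %s", query);
        PQclear(PQexec(conn, buf));

        for (;;)
        {
            res = PQexec(conn, "FETCH 100 FROM cb_cur");
            if (PQntuples(res) == 0)
            {
                PQclear(res);
                break;
            }
            for (i = 0; i < PQntuples(res); i++)
                cb(res, i, arg);        /* the "callback" */
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE cb_cur"));
        PQclear(PQexec(conn, "END"));
    }

Of course, this still fetches in 100-row chunks under the hood, so it
only fakes the interface, not the streaming behaviour.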

Hannu