About asynchronous and non-blocking support for large objects
Hi.
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
I would like to know whether this is a deliberate decision or whether
it is considered a bug and, if so, whether a fix is planned.
Though I cannot guarantee anything, I may be interested in working on a
patch, if no one is already doing the same (of course I understand that
such a patch wouldn't be for 9.3, which is already late in its release
cycle).
Do you think this may be of interest?
Thanks, Giovanni.
--
Giovanni Mascellani <mascellani@poisson.phc.unipi.it>
Pisa, Italy
Web: http://poisson.phc.unipi.it/~mascellani
Jabber: g.mascellani@jabber.org / giovanni@elabor.homelinux.org
2013/6/5 Giovanni Mascellani <g.mascellani@gmail.com>
Hi.
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
According to http://www.postgresql.org/docs/9.2/static/lo-funcs.html,
"There are server-side functions callable from SQL that correspond to
each of the client-side functions". Hence, you can call these functions
using the asynchronous API.
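
For what it's worth, a minimal sketch (untested) of what that could look
like for loread(): both int4 arguments are passed as binary parameters and
the result is requested in binary format, so the returned bytea arrives as
raw bytes. The descriptor fd is assumed to come from an earlier lo_open()
call inside an open transaction.

#include <stdint.h>
#include <arpa/inet.h>      /* htonl() for network byte order */
#include <libpq-fe.h>

/* Queue "SELECT loread(fd, len)" without blocking; returns 0 on failure. */
static int send_loread(PGconn *conn, int fd, int len)
{
    uint32_t fd_n  = htonl((uint32_t) fd);   /* binary int4 parameters must */
    uint32_t len_n = htonl((uint32_t) len);  /* be in network byte order    */
    const char *values[2]  = { (const char *) &fd_n, (const char *) &len_n };
    int         lengths[2] = { 4, 4 };
    int         formats[2] = { 1, 1 };       /* 1 = binary parameter */
    Oid         types[2]   = { 23, 23 };     /* 23 = int4 */

    return PQsendQueryParams(conn, "SELECT loread($1, $2)",
                             2, types, values, lengths, formats,
                             1 /* binary result: the bytea comes back raw */);
}

The result is then collected from the select() loop with
PQconsumeInput()/PQgetResult(), and PQgetvalue(res, 0, 0) together with
PQgetlength(res, 0, 0) give the chunk.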
--
// Dmitriy.
Hi.
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
I would like to know whether this is a deliberate decision or whether
it is considered a bug and, if so, whether a fix is planned.
Certainly not a bug, since the docs clearly state that PQsendQuery can
only be used as a substitute for PQexec (see the "Asynchronous Command
Processing" section for more details). The large object API is
completely different from PQexec and its friends, so it cannot be used
with PQsendQuery.
To go into more detail, PQexec and PQsendQuery are designed to handle
only the "Q" message of the PostgreSQL frontend/backend protocol, while
to access large objects you need to handle the "V" message.
Though I cannot guarantee anything, I may be interested in working on a
patch, if no one is already doing the same (of course I understand that
such a patch wouldn't be for 9.3, which is already late in its release
cycle).
Do you think this may be of interest?
Yes, I understand your pain, and I myself think we need new APIs for
large objects. That would probably not be terribly hard. One idea would
be to invent an asynchronous version of PQfn and let lo_read/lo_write
use the new API.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
2013/6/6 Tatsuo Ishii <ishii@postgresql.org>
Hi.
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
I would like to know whether this is a deliberate decision or whether
it is considered a bug and, if so, whether a fix is planned.
Certainly not a bug, since the docs clearly state that PQsendQuery can
only be used as a substitute for PQexec (see the "Asynchronous Command
Processing" section for more details). The large object API is
completely different from PQexec and its friends, so it cannot be used
with PQsendQuery.
To go into more detail, PQexec and PQsendQuery are designed to handle
only the "Q" message of the PostgreSQL frontend/backend protocol, while
to access large objects you need to handle the "V" message.
Really? I've specialized a C++ standard std::streambuf class using only
the extended query protocol (prepared statements via PQsendPrepare and
PQsendQueryPrepared) to call SQL functions like loread(), lowrite(),
lo_tell(), etc.
All these functions just need to be called inside a BEGIN block. And yes,
it can be done asynchronously.
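
For illustration, a rough sketch (untested, and certainly not the actual
streambuf code) of how such a prepared loread() call can sit in a
select()-based loop. The statement name "lo_read" is arbitrary, and the
waiting is collapsed into a small helper just to show the calls involved;
a real event loop would return to select() instead of blocking here.

#include <sys/select.h>
#include <libpq-fe.h>

/* Assumes the statement was prepared once after connecting, e.g.:
 *   Oid types[2] = { 23, 23 };             (int4, int4)
 *   PQsendPrepare(conn, "lo_read", "SELECT loread($1, $2)", 2, types);
 * and that each refill issues PQsendQueryPrepared(conn, "lo_read", ...)
 * and then waits for the result like this.
 */
static PGresult *wait_for_result(PGconn *conn)
{
    int sock = PQsocket(conn);

    while (PQisBusy(conn))
    {
        fd_set readable;

        FD_ZERO(&readable);
        FD_SET(sock, &readable);
        if (select(sock + 1, &readable, NULL, NULL, NULL) < 0)
            return NULL;              /* caller checks errno / PQerrorMessage */
        if (!PQconsumeInput(conn))
            return NULL;
    }
    return PQgetResult(conn);         /* call again until it returns NULL */
}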
Though I cannot guarantee anything, I may be interested in working on a
patch, if no one is already doing the same (of course I understand that
such a patch wouldn't be for 9.3, which is already late in its release
cycle).
Do you think this may be of interest?
Yes, I understand your pain, and I myself think we need new APIs for
large objects. That would probably not be terribly hard. One idea would
be to invent an asynchronous version of PQfn and let lo_read/lo_write
use the new API.
Yes, but according to
http://www.postgresql.org/docs/9.2/static/protocol-flow.html#AEN95330
and/or http://www.postgresql.org/docs/9.2/static/libpq-fastpath.html,
the function call sub-protocol is obsolete. That's why I personally
decided to use prepared statements.
--
// Dmitriy.
Hi.
On 05/06/2013 22:52, Dmitriy Igrishin wrote:
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
According to http://www.postgresql.org/docs/9.2/static/lo-funcs.html,
"There are server-side functions callable from SQL that correspond to
each of the client-side functions". Hence, you can call these functions
using the asynchronous API.
Thanks, I'll try this way (BTW, it may help to point out in the
documentation that lo_read and lo_write lose the "_"). I wonder whether
having to escape all the content for lowrite might have a negative
impact on performance. It shouldn't be too bad for my case, though.
Giovanni.
--
Giovanni Mascellani <mascellani@poisson.phc.unipi.it>
Pisa, Italy
Web: http://poisson.phc.unipi.it/~mascellani
Jabber: g.mascellani@jabber.org / giovanni@elabor.homelinux.org
2013/6/8 Giovanni Mascellani <g.mascellani@gmail.com>
Hi.
On 05/06/2013 22:52, Dmitriy Igrishin wrote:
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
According to http://www.postgresql.org/docs/9.2/static/lo-funcs.html,
"There are server-side functions callable from SQL that correspond to
each of the client-side functions". Hence, you can call these functions
using the asynchronous API.
Thanks, I'll try this way (BTW, it may help to point out in the
documentation that lo_read and lo_write lose the "_"). I wonder whether
having to escape all the content for lowrite might have a negative
impact on performance. It shouldn't be too bad for my case, though.
You may avoid escaping bytea data by using PQsendPrepare and
PQsendQueryPrepared and specifying the binary data format.
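
For illustration, a minimal sketch (untested) of that: the chunk is passed
to lowrite() as a binary bytea parameter, so nothing is escaped at any
point. The statement name "lo_write", the prepare step shown in the
comment, and the descriptor fd (from lo_open() in the current transaction)
are assumptions for the example.

#include <stdint.h>
#include <arpa/inet.h>
#include <libpq-fe.h>

/* Assumes a statement prepared earlier along the lines of:
 *   Oid types[2] = { 23, 17 };             (int4, bytea)
 *   PQsendPrepare(conn, "lo_write", "SELECT lowrite($1, $2)", 2, types);
 */
static int send_lowrite(PGconn *conn, int fd, const char *buf, int buflen)
{
    uint32_t fd_n = htonl((uint32_t) fd);    /* int4 in network byte order */
    const char *values[2]  = { (const char *) &fd_n, buf };
    int         lengths[2] = { 4, buflen };
    int         formats[2] = { 1, 1 };       /* both parameters binary */

    /* The bytea bytes in buf go over the wire as-is, no escaping/quoting. */
    return PQsendQueryPrepared(conn, "lo_write",
                               2, values, lengths, formats,
                               1 /* binary result: bytes written, as int4 */);
}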
--
// Dmitriy.
2013/6/6 Tatsuo Ishii <ishii@postgresql.org>
Hi.
At the moment libpq doesn't seem to offer asynchronous and non-blocking
support for large objects, in the style of PQsendQuery/PQgetResult.
This makes large objects hardly suited for single-threaded programs
based on some variant of select().
I would like to know whether this is a deliberate decision or whether
it is considered a bug and, if so, whether a fix is planned.
Certainly not a bug, since the docs clearly state that PQsendQuery can
only be used as a substitute for PQexec (see the "Asynchronous Command
Processing" section for more details). The large object API is
completely different from PQexec and its friends, so it cannot be used
with PQsendQuery.
To go into more detail, PQexec and PQsendQuery are designed to handle
only the "Q" message of the PostgreSQL frontend/backend protocol, while
to access large objects you need to handle the "V" message.
Really? I've specialized a C++ standard std::streambuf class using only
the extended query protocol (prepared statements via PQsendPrepare and
PQsendQueryPrepared) to call SQL functions like loread(), lowrite(),
lo_tell(), etc.
All these functions just need to be called inside a BEGIN block. And yes,
it can be done asynchronously.
Thanks for reminding me. I totally forgot about them.
Though I cannot guarantee anything, I may be interested in working on a
patch, if no one is already doing the same (of course I understand that
such a patch wouldn't be for 9.3, which is already late in its release
cycle).
Do you think this may be of interest?
Yes, I understand your pain, and I myself think we need new APIs for
large objects. That would probably not be terribly hard. One idea would
be to invent an asynchronous version of PQfn and let lo_read/lo_write
use the new API.
Yes, but according to
http://www.postgresql.org/docs/9.2/static/protocol-flow.html#AEN95330
and/or http://www.postgresql.org/docs/9.2/static/libpq-fastpath.html,
the function call sub-protocol is obsolete. That's why I personally
decided to use prepared statements.
I'm not totally pleased with that comment in the docs. To me, the only
reason those extended protocol functions are recommended is that the
binary protocol can be used. The price is parsing, planning, and
preparing the query, all of which are essentially unnecessary for a
large object access use case.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp