Networking feature for postgresql...
Hi,
I'm trying to add a (project-specific) networking feature to my postgres
build (or "database as a function"). What I want to do is to send a Query
instance (as a string, retrieved through an SPI function) to other
machines and, after they have executed it, to receive the result tuples.
It's for a mediator-wrapper project. My first thought was to write two
SPI functions (one for the server, concurrent, and the other for the
client), but I'm not sure this is going to work. I'm worried about
keeping the server process running in the background while other SPI
calls are made.
Can this be done? Should I consider something else before writing any
code?
Any suggestions would be appreciated!
Best regards,
Ntinos Katsaros
Have you looked at the dblink code?
--
Richard Huxton
Archonet Ltd
Well, actually no :) ! Thanks for the hint!
But just out of curiosity, would the scenario I described work?
I mean, is it possible for an SPI process to run in the background while
other SPI calls are made?
Ntinos Katsaros
I don't think so; you're running in a backend process, so you'd need to
fork the backend itself.
--
Richard Huxton
Archonet Ltd
I saw your message on the postgresql mailing lists. The TelegraphCQ
project at Berkeley is implemented using the Postgres code base as a
starting point. TelegraphCQ has a generalized mechanism for receiving
data from remote data sources, and also for on-demand, request-response-
style queries to remote data sources. You can get more information
about the project at: http://telegraph.cs.berkeley.edu. A quick
overview of the remote data source features can be found here:
http://telegraph.cs.berkeley.edu/telegraphcq/v0.2/Data_Acquisition.html
I am currently doing preliminary investigations to see how difficult
it would be to integrate access to remote data sources into the
postgres code base.
Thank you very much for this information! I'll definitely take a look at
it.
There is a project in the contrib directory called dblink which may
fit your needs.
Well, I saw it, but although it can handle the communication between
db nodes, it seems it may not be appropriate for what I want to do. Even
though it can be used to send queries and receive results, it blocks
until the transaction is finished (at least I think so). This may be no
problem for other applications, but for a mediator-wrapper model with
adaptive/dynamic query processing techniques it is. For example, if it
really blocks (and that's why I asked about concurrent SPI calls and
fork()), it is not possible to have multiple queries on the wire or to
monitor the connections (e.g. their data rates).
Excuse me if I'm making a mistake in what I say above; I'm not the best
programmer :-) !!
Best Regards,
Ntinos Katsaros
PS: Sorry if this mail reaches you a second time, Owen. I'm having
problems with my SMTP server.
Hi again,
Having taken a look at the dblink code, I have some questions:
Given a user-defined function, is it possible (without serious memory
overhead) to fork it (outside the SPI-call code) in order to make
concurrent dblink calls? What I'm thinking of doing is to create a
function which opens an SPI session, creates some objects (Query nodes),
closes it, and then forks as many times as the number of required dblink
calls, each child getting the appropriate Query node (something like
for (all nodes) fork(); ...).
My guess is that this is possible, because the backend calls for dblink
are done on the other side (speaking of SELECT statements only).
However, the returned tuples can then only be merged (to produce the
final result) in main memory, because storing them first (e.g. in a temp
table) would need concurrent SPI calls. Am I right? If so, is there any
mechanism that can multiplex these storing procedures?
On the other side, I suppose a server can serve multiple incoming
queries.
Regards,
Ntinos Katsaros
Katsaros Kwn/nos wrote:
Having taken a look at the dblink code, I have some questions:
ISTM that you might start with dblink_record() and modify it to suit,
using SPI and asynchronous libpq calls. See:
http://www.postgresql.org/docs/current/static/libpq-async.html
Joe
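A rough skeleton of what Joe suggests: issue the query with the asynchronous libpq API and poll for the result instead of blocking. The connection string and table name are placeholders and error handling is abbreviated; it needs libpq and a reachable server, so treat it as a sketch rather than a drop-in SPI function:

```c
/* Asynchronous libpq skeleton: PQsendQuery() returns immediately, and
 * the result is collected by polling with PQconsumeInput()/PQisBusy().
 * With several connections you would select() on PQsocket() for each
 * and service whichever becomes readable first. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    const char *conninfo = "host=remote-node dbname=test"; /* placeholder */
    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* send without blocking; control returns to the caller at once */
    if (!PQsendQuery(conn, "SELECT * FROM remote_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    for (;;)
    {
        if (!PQconsumeInput(conn))
            break;              /* connection trouble */
        if (PQisBusy(conn))
            continue;           /* result not ready; do other work here */

        PGresult *res = PQgetResult(conn);
        if (res == NULL)
            break;              /* query fully processed */
        printf("%d tuples received\n", PQntuples(res));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```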