Threads
Hi Everybody.
As well, a few have been asking about multi-threading.
Marc has told me about the past discussions on it.
I'm interested in re-opening some discussion on it, as we may eventually
have funding to help with it.
Would it not be advantageous to use threading in the PostgreSQL backend?
Many servers make use of threading to increase throughput by performing
IO and computations concurrently. (While one thread is waiting for the
disk to respond to an IO request, another processes the last chunk of data).
As well, threads tend to have a lower process switching time, so that may
help a bit on heavily loaded systems. (Of course, it would tend to be a
combination of forking and threading. Threading benefits have limits).
Any thoughts?
Duane
This was one of the points I was talking about in the original message.
This way, it's still one session per backend, but uses threads to improve
throughput...
"(While one thread is waiting for the disk to respond to
an IO request, another processes the last chunk of data)"
This one looks to me like the best idea.
Now that pthreads is pretty much standard on systems (or available),
threading shouldn't be so problematic from a portability standpoint...
Duane
Aren't there cases, though, where multi-threading could be used within the
back end design that we have at the moment, for example, to avoid lags
during I/O? So, while the nth block of data is being read from the disk,
the (n-1)th block is being processed by the next process down the line. For
example...
This wouldn't (shouldn't?) break the isolation that currently exists due to
single-process servers.
MikeA
As well, a few have been asking about multi-threading.
Any thoughts?
Threads within a client backend might be interesting. imho a
single-process multi-client multi-threaded server is just asking for
trouble, putting all clients at risk for any single misbehaving one.
Particularly with our extensibility features, where users and admins
can add functionality through code they have written (or are trying to
write ;) having each backend isolated is A Good Thing.
istm that many of the cases for which multi-threading is proposed (web
serving always comes up) can be solved using persistent connections or
other techniques.
- Thomas
"Ross J. Reedstrom" <reedstrm@wallace.ece.rice.edu> writes:
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
It is not; the main reason why not is a brain-dead part of the API that
exposes modifiable global variables. Check the mail list archives
(probably psql-interfaces, but maybe -hackers) for previous discussions
with details. Earlier this year I think, or maybe late 98.
hmmm... usually this is repairable by creating wrapper functions which
index the variables by thread id, and enforcing the use of the functions...
(maybe something for a wish list...)
Duane
As well, a few have been asking about multi-threading.
Any thoughts?
Threads within a client backend might be interesting. imho a
single-process multi-client multi-threaded server is just asking for
trouble, putting all clients at risk for any single misbehaving one.
Particularly with our extensibility features, where users and admins
can add functionality through code they have written (or are trying to
write ;) having each backend isolated is A Good Thing.
istm that many of the cases for which multi-threading is proposed (web
serving always comes up) can be solved using persistent connections or
other techniques.
- Thomas
--
Thomas Lockhart lockhart@alumni.caltech.edu
South Pasadena, California
Duane Currie <dcurrie@sandman.acadiau.ca> writes:
Would it not be advantageous to use threading in the PostgreSQL backend?
Just so you don't break the code for non-threaded platforms.
I believe mysql *requires* working thread support, which is one reason
it is not so portable as Postgres... we should not give up that advantage.
BTW, I'm not convinced that threading would improve performance very
much within a single backend. It might be a win as a substitute for
multiple backends, ie, instead of postmaster + N backends you have just
one process with a bunch of threads. (But on the downside of *that* is
that a backend crash now takes down your postmaster along with
everything else...)
regards, tom lane
On 03-Aug-99 Thomas Lockhart wrote:
As well, a few have been asking about multi-threading.
Any thoughts?
I completely agree with Thomas.
A multiprocess server is also more convenient
to manage, and more portable.
Threads within a client backend might be interesting. imho a
single-process multi-client multi-threaded server is just asking for
trouble, putting all clients at risk for any single misbehaving one.
Particularly with our extensibility features, where users and admins
can add functionality through code they have written (or are trying to
write ;) having each backend isolated is A Good Thing.
istm that many of the cases for which multi-threading is proposed (web
serving always comes up) can be solved using persistent connections or
other techniques.
- Thomas
--
Thomas Lockhart lockhart@alumni.caltech.edu
South Pasadena, California
---
Dmitry Samersoff, dms@wplus.net, ICQ:3161705
http://devnull.wplus.net
* There will come soft rains ...
On Tue, 3 Aug 1999, Tom Lane wrote:
Duane Currie <dcurrie@sandman.acadiau.ca> writes:
Would it not be advantageous to use threading in the PostgreSQL backend?
Just so you don't break the code for non-threaded platforms.
I believe mysql *requires* working thread support, which is one reason
it is not so portable as Postgres... we should not give up that advantage.
Just curious here, but out of all the platforms we support, are there any
remaining that don't support threads?
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
The Hermit Hacker <scrappy@hub.org> writes:
Just curious here, but out of all the platforms we support, are there any
remaining that don't support threads?
Pretty much all of the older ones, I imagine --- I know HPUX 9 does not.
Of course HPUX 9 will be a dead issue by the end of the year, because
HP isn't going to fix its Y2K bugs; I wonder whether there is a similar
forcing function for old SunOS and other systems?
But still, I believe there are several different flavors of thread
packages running around, so we will be opening a brand new can of
portability worms. We'd best keep a "no threads" fallback option...
regards, tom lane
On Tue, Aug 03, 1999 at 02:18:07PM +0000, Thomas Lockhart wrote:
As well, a few have been asking about multi-threading.
Any thoughts?
Threads within a client backend might be interesting. [...]
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
Ross
--
Ross J. Reedstrom, Ph.D., <reedstrm@rice.edu>
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St., Houston, TX 77005
On Tue, 3 Aug 1999, Tom Lane wrote:
The Hermit Hacker <scrappy@hub.org> writes:
Just curious here, but out of all the platforms we support, are there any
remaining that don't support threads?
Pretty much all of the older ones, I imagine --- I know HPUX 9 does not.
Of course HPUX 9 will be a dead issue by the end of the year, because
HP isn't going to fix its Y2K bugs; I wonder whether there is a similar
forcing function for old SunOS and other systems?
HPUX9 won't be that dead. It's still the highest version the 300
series machines will run and the Y2K problems in it aren't enough
to force a lot of folks to upgrade the hardware and OS. We're still
using HPUX8 on a number of our test stands and they even pass enough
Y2K to keep 'em going (RMB can't do the dates but the system can).
Vince.
--
==========================================================================
Vince Vielhaber -- KA8CSH email: vev@michvhf.com flame-mail: /dev/null
# include <std/disclaimers.h> TEAM-OS2
Online Campground Directory http://www.camping-usa.com
Online Giftshop Superstore http://www.cloudninegifts.com
==========================================================================
On Tue, 3 Aug 1999, Tom Lane wrote:
But still, I believe there are several different flavors of thread
packages running around, so we will be opening a brand new can of
portability worms. We'd best keep a "no threads" fallback option...
Sounds reasonable, but is it feasible? I think the general trend right
now is to go with partial threading, but how hard is it going to be to
implement even partial threading while maintaining the no-thread features?
Basically just massive #ifdef blocks? *raised eyebrow*
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
"Ross J. Reedstrom" <reedstrm@wallace.ece.rice.edu> writes:
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
It is not; the main reason why not is a brain-dead part of the API that
exposes modifiable global variables. Check the mail list archives
(probably psql-interfaces, but maybe -hackers) for previous discussions
with details. Earlier this year I think, or maybe late 98.
regards, tom lane
On Tue, Aug 03, 1999 at 02:18:07PM +0000, Thomas Lockhart wrote:
As well, a few have been asking about multi-threading.
Any thoughts?
Threads within a client backend might be interesting. [...]
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
I believe it is thread-safe.
--
Bruce Momjian | http://www.op.net/~candle
maillist@candle.pha.pa.us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
As well, a few have been asking about multi-threading.
Any thoughts?
Threads within a client backend might be interesting. [...]
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
I believe it is, as long as you only use each PGconn or PGresult from one
thread at a time. If you have two threads using the same PGconn, you're in
for trouble.
Making handling of PGresult thread-safe shouldn't be too hard, except you
have to do it platform-specific (you need some kind of mutex or similar, and
I doubt you can use the same code on e.g. any Unix and Win32).
Doing the same for PGconn is probably a lot harder, since the
frontend/backend protocol is "single threaded". So some kind of "tagging" of
each packet telling which thread it belongs to would be required. It would
probably be possible to "lock" the whole PGconn at the start of any
processing (such as sending a query), and then "unlock" once all the results
have been moved into a PGresult, but that is going to leave the PGconn
locked almost always, which kind of takes away the advantage of threading.
I think.
//Magnus
Hmm, what about threads in the frontend? Anyone know if libpq is thread
safe, and if not, how hard it might be to make it so?
It is not; the main reason why not is a brain-dead part of the API that
exposes modifiable global variables. Check the mail list archives
(probably psql-interfaces, but maybe -hackers) for previous discussions
with details. Earlier this year I think, or maybe late 98.
Hmm. Really?
AFAIK, only:
pgresStatus[]
is exported, and that one is a) read-only and b) deprecated, and replaced
with a function.
No?
Otherwise, I've been darn lucky in the multithreaded stuff I have :-) (I run
with a different PGconn for each thread, and the PGresult:s are protected by
CriticalSections (this is Win32)). And if that's it, then I really need to
fix it...
//Magnus
Magnus Hagander <mha@sollentuna.net> writes:
It is not; the main reason why not is a brain-dead part of
the API that exposes modifiable global variables.
Hmm. Really?
PQconnectdb() is the function that's not thread-safe; if you had
multiple threads invoking PQconnectdb() in parallel you would see a
problem. PQconndefaults() is the function that creates an API problem,
because it exposes the static variable that PQconnectdb() ought not have
had in the first place.
There might be some other problems too, but that's the main one I'm
aware of. If we didn't mind breaking existing apps that use
PQconndefaults(), it would be straightforward to fix...
Otherwise, I've been darn lucky in the multithreaded stuff I have :-) (I run
with a different PGconn for each thread, and the PGresult:s are protected by
CriticalSections (this is Win32)). And if that's it, then I really need to
fix it...
Seems reasonable. PGconns do need to be per-thread, or else protected
by mutexes, but you can run them in parallel. PGresults are actually
almost read-only, and I've been wondering if they can't be made entirely
so (until destroyed of course). Then you wouldn't need a
CriticalSection. You might want some kind of reference-counting
mechanism for PGresults though.
regards, tom lane
Tom Lane wrote:
PQconnectdb() is the function that's not thread-safe; if you had
multiple threads invoking PQconnectdb() in parallel you would see a
problem. PQconndefaults() is the function that creates an API problem,
because it exposes the static variable that PQconnectdb() ought not have
had in the first place.
There might be some other problems too, but that's the main one I'm
aware of. If we didn't mind breaking existing apps that use
PQconndefaults(), it would be straightforward to fix...
Oh, this is interesting. I've been pointer and thread chasing for the
last few hours trying to figure out why AOLserver (a multithreaded open
source web server that supports pooled database (including postgresql)
connections) doesn't hit this snag -- and I haven't yet found the
answer...
However, this does answer a question that I had had but had never
asked...
In any case, I have a couple of cents to throw in to the multithreaded
discussion at large:
1.) While threads are nice for those programs that can benefit from
them, there are those tasks that are not ideally suited to threads.
Whether postgresql could benefit or not, I don't know; it would be an
interesting exercise to rewrite the executor to be multithreaded -- of
course, the hard part is identifying what each thread would do, etc.
2.) A large multithreaded program, AOLserver, has just gone from a
multihost multiclient multithread model to a single host multiclient
multithread model: where AOLserver before would server as many virtual
hosts as you wished out of a single multi-threaded process, it was
determined through heavy stress-testing (after all, this server sits
behind www.aol.com, www.digitalcity.com, and others), that it was more
efficient to let the TCP/IP stack in the kernel handle address
multiplexing -- thus, the latest version requires you to start a
(multi-threaded) server process for each virtual host. The source code
for this server is a model of multithreaded server design -- see
aolserver.lcs.mit.edu for more.
3.) Redesigning an existing single-threaded program to efficiently
utilize multithreading is non-trivial. Highly non-trivial. In fact,
efficiently multithreading an existing program may involve a complete
redesign of basic structures and algorithms -- it did in the case of
AOLserver (then called Naviserver), which was originally a threaded take
on the CERN httpd.
4.) Threading PostgreSQL is going to be a massive effort -- and the
biggest part of that effort involves understanding the existing code
well enough to completely redesign the interaction of the internals --
it might be found that an efficient thread model involves multiple
layers of threads: one or more threads to parse the SQL source; one or
more threads to optimize the query, and one or more threads to execute
optimized SQL -- even while the parser is still parsing later statements
-- I realize that doesn't fit very well in the existing PostgreSQL
model. However, the pipelined thread model could be a good fit -- for a
pooled connection or for long queries. The most benefit could be had by
eliminating the postmaster/postgres linkage altogether, and having a
single postgres process handle multiple connections on its port in a
multiplexed-pipelined manner -- which is the model AOLserver uses.
AOLserver works like this: when a connection request is received, a
thread is immediately dispatched to service the connection -- if a
thread in the precreated thread pool is available, it gets it, otherwise
a new thread is created, up to MAXTHREADS.
The connection thread then pulls the data necessary to service the HTTP
request (which can include dispatching a tcl interpreter thread or a
database driver thread out of the available database pools (up to
MAXPOOLS) to service dynamic content). The data is sequentially
streamed to the connection, the connection is closed, and the thread
sleeps for a another dispatch.
Pretty simple in theory; a bear in practice.
So, hackers, are there areas of the backend itself that would benefit
from threading? I'm sure the whole 'postmaster forking a backend'
process would benefit from threading from a pure performance point of
view, but stability could possibly suffer (although, this model is good
enough for www.aol.com....). Can parsing/optimizing/executing be done
in a parallel/semi-parallel fashion? Of course, one of the benefits is
going to be effective SMP utilization on architectures that support SMP
threading. Multithreading the whole shooting match also eliminates the
need for interprocess communication via shared memory -- each connection
thread has the whole process context to work with.
The point is that it should be a full architectural redesign to properly
thread something as large as an RDBMS -- is it worth it, and, if so,
does anybody want to do it (who has enough pthreads experience to do it,
that is)? No, I'm not volunteering -- I know enough about threads to be
dangerous, and know less about the postgres backend. Not to mention a
great deal of hard work is going to be involved -- every single line of
code will have to be threadsafed -- not a fun prospect, IMO.
Anyone interesting in this stuff should take a look at some
well-threaded programs (such as AOLserver), and should be familiar with
some of the essential literature (such as O'Reilly's pthreads book).
Incidentally, with AOLserver's database connection pooling and
persistence, you get most of the benefits of a multithreaded backend
without the headaches of a multithreaded backend....
Lamar Owen
WGCR Internet Radio
At 07:56 PM 8/3/99 -0400, Lamar Owen wrote:
Tom Lane wrote:
PQconnectdb() is the function that's not thread-safe; if you had
multiple threads invoking PQconnectdb() in parallel you would see a
problem. PQconndefaults() is the function that creates an API problem,
because it exposes the static variable that PQconnectdb() ought not have
had in the first place.
There might be some other problems too, but that's the main one I'm
aware of. If we didn't mind breaking existing apps that use
PQconndefaults(), it would be straightforward to fix...
Oh, this is interesting. I've been pointer and thread chasing for the
last few hours trying to figure out why AOLserver (a multithreaded open
source web server that supports pooled database (including postgresql)
connections) doesn't hit this snag -- and I haven't yet found the
answer...
AOLserver rarely does a connect once the server gets fired up and
receives traffic. It may be that actual db connects are
guarded by semaphores or the like. It may be that conflicts are
rare because on a busy site a connection will be made once and only
once and later lives on in the pool forever, with the handle being
allocated and released by individual .tcl scripts and .adp pages.
- Don Baccus, Portland OR <dhogaza@pacifier.com>
Nature photos, on-line guides, and other goodies at
http://donb.photo.net
It is not; the main reason why not is a brain-dead part of the API that
exposes modifiable global variables.
Hmm. Really?
PQconnectdb() is the function that's not thread-safe; if you had
multiple threads invoking PQconnectdb() in parallel you would see a
problem. PQconndefaults() is the function that creates an API problem,
because it exposes the static variable that PQconnectdb() ought not have
had in the first place.
Ok. Now I see it. I guess my code worked because I run PQconnectdb() at the
start of the program, and hand down the PGconn:s to the threads later. So
only one thread can call PQconnectdb().
There might be some other problems too, but that's the main one I'm
aware of. If we didn't mind breaking existing apps that use
PQconndefaults(), it would be straightforward to fix...
Wouldn't it be possible to do something like this (ok, a little bit ugly,
but shouldn't break the clients):
Store the "active" PQconninfoOptions array inside the PGconn struct. That
way, the user should not need to change anything when doing PQconnectdb()
and the likes.
Rewrite the conninfo_xxx functions to take a (PQconninfoOptions *) as
parameter to work on, instead of working on the static array.
Keep the static array, rename it to PQconninfoDefaultOptions, make it
contain the *default* options *from the beginning*, and declare it as
"const". Then have PQconndefaults() return that array. Then the
PQconndefaults() works just like before, and does not break the old
programs.
Shouldn't this be possible to achieve without any changes in the API?
If you don't see anything obviously wrong with this, I can try to put
together a patch to do that. It'd be really nice to have a thread-safe
client lib :-)
You might want some kind of reference-counting
mechanism for PGresults though.
In this case, the PGresults are owned by the client connections, and are
only used by one client connection at a time, and they are freed when the
client connection ends. The PGconns are owned one each by the Worker Threads
in the pool, and are freed when the worker thread is stopped (which is when
the application is stopped). So no special reference-counting should be
needed.
//Magnus
Lamar Owen's comments brought up a thought. Bruce has talked several
times about moving in Oracle's direction, with dedicated backends for
each database (or maybe in Ingres' direction, since they allow both
dedicated backends as well as multi-database backends). In any case,
IFF we went that way, would it make sense to reduce the postmaster's
role to more of a traffic cop (a la Ingres' iigcn)?
Effectively, what we'd end up with is a postmaster that knows "which
backends serve which data" that would then either tell the client to
reconnect directly to the backend, or else provide a mediated
connection.
Redirection will end up costing us a whole 'nother TCP connection
build/destroy which can be disregarded for non-trivial queries, but
still may prove too much (depending upon query patterns). On the
other hand, it would probably be easier to code and have better
throughput than funneling all data through the postmaster. On the
gripping hand, a postmaster that mediated all transactions could also
implement QoS style controls, or throttle connections taking an unfair
share of the available bandwidth.
In any event, this could also be the start of a naming service. It
should be relatively easy, with either method, to have the postmaster
handle connections to databases (not just tables, mind you) on other
machines.
--
=====================================================================
| JAVA must have been developed in the wilds of West Virginia. |
| After all, why else would it support only single inheritance?? |
=====================================================================
| Finger geek@cmu.edu for my public key. |
=====================================================================
[ interfaces list added, since we are discussing an incompatible libpq
API change to fix the problem that PQconnectdb() is not thread-safe ]
Magnus Hagander <mha@sollentuna.net> writes:
There might be some other problems too, but that's the main one I'm
aware of. If we didn't mind breaking existing apps that use
PQconndefaults(), it would be straightforward to fix...
Wouldn't it be possible to do something like this (ok, a little bit ugly,
but shouldn't break the clients):
Store the "active" PQconninfoOptions array inside the PGconn struct. That
way, the user should not need to change anything when doing PQconnectdb()
and the likes.
I don't think we'd need to store the array on a long-term basis; it is
really only needed as working storage during PQconnectdb. So it could
be malloc'd at PQconnectdb entry and free'd at exit, in the normal case
where it's just supporting a PQconnectdb call. As you saw, that'd be
a pretty straightforward set of changes inside fe-connect.c.
The problem is how should PQconndefaults() act.
Keep the static array, rename it to PQconninfoDefaultOptions, make it
contain the *default* options *from the beginning*, and declare it as
"const". Then have PQconndefaults() return that array.
It's not const because the default values are not constants --- they
depend on environment variables. In theory the results could change
from call to call, if the client program does putenv()'s in between.
I dunno whether there are any thread packages that keep separate
environment-variable sets for each thread, but if so then the results
could theoretically vary across threads.
The most natural thing given the above change would be to say that what
PQconndefaults() returns is a malloc'd array, and the user program is
required to free said array when done with it. (Actually, required to
call some helper routine inside fe-connect.c, which would know to free
the subsidiary variable strings along with the top-level array...)
The breakage here is that existing code won't know to call the free
routine, and will therefore suffer a memory leak of ~ a few hundred bytes
per PQconndefaults() call. It might well be that we could live with
that, since I'll bet that most client programs don't use PQconndefaults
at all, much less call it so many times that a leak of that size would be
a problem. Comments?
What I'm envisioning is a static const array that contains all the
fixed fields of PQconninfoOptions, but the variable fields (just
the "val" current-value field, AFAIR) are permanently NULL. Then
to create a working copy you do
ptr = malloc(sizeof(PQconninfoOptions));
memcpy(ptr, PQconninfoOptions, sizeof(PQconninfoOptions));
The free routine would run down the work array freeing any non-null val
strings, then free the array itself. No copying or freeing of the
constant subsidiary strings (such as "keyword") is needed.
If you want to work on it, be my guest...
regards, tom lane
Redirection will end up costing us a whole 'nother TCP connection
build/destroy which can be disregarded for non-trivial queries, but
still may prove too much (depending upon query patterns). On the
other hand, it would probably be easier to code and have better
throughput than funneling all data through the postmaster. On the
gripping hand, a postmaster that mediated all transactions could also
implement QoS style controls, or throttle connections taking an unfair
share of the available bandwidth.
In any event, this could also be the start of a naming service. It
should be relatively easy, with either method, to have the postmaster
handle connections to databases (not just tables, mind you) on other
machines.
Starting to sound suspiciously like the Corba work I've been doing on
my day job.
We're using ACE/TAO for its realtime and QoS features, but other
implementations are probably much lower footprint wrt installation and
use. I suppose we'd want a C implementation; the ones I've been using
are all C++...
- Thomas
--
Thomas Lockhart lockhart@alumni.caltech.edu
South Pasadena, California
On Wed, 4 Aug 1999, Thomas Lockhart wrote:
Redirection will end up costing us a whole 'nother TCP connection
build/destroy which can be disregarded for non-trivial queries, but
still may prove too much (depending upon query patterns). On the
other hand, it would probably be easier to code and have better
throughput than funneling all data through the postmaster. On the
gripping hand, a postmaster that mediated all transactions could also
implement QoS style controls, or throttle connections taking an unfair
share of the available bandwidth.
In any event, this could also be the start of a naming service. It
should be relatively easy, with either method, to have the postmaster
handle connections to databases (not just tables, mind you) on other
machines.
Starting to sound suspiciously like the Corba work I've been doing on
my day job.
We're using ACE/TAO for its realtime and QoS features, but other
implementations are probably much lower footprint wrt installation and
use. I suppose we'd want a C implementation; the ones I've been using
are all C++...
KDE/KOffice uses Mico, which is also C++...
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org
KDE/KOffice uses Mico, which is also C++...
Right, it's a nice implementation from the little I've used it (I have
it installed to use the tcl binding).
Presumably we would want a C implementation to match the rest of our
environment, but I can't help thinking that a C++ one for the Corba
parts would be more natural (it maps to the Corba OO view of the
world).
ACE/TAO includes OS abstractions to allow porting to a *bunch* of
platforms, including real-time OSes. However, if you build the whole
package and include debugging symbols then you're taking up 1.2GB of
disk space for a from-source build (yes, that's GigaByte with a gee
:()
The libraries are substantially smaller, but the packaging is not very
good yet so you end up keeping the full installation around.
- Thomas
--
Thomas Lockhart lockhart@alumni.caltech.edu
South Pasadena, California
On 4 Aug, Tom Lane wrote:
[ Snip discussion regarding PQconnectdb()s thread-safety ]
If you want to work on it, be my guest...
I don't have time to think about this today, so I can't comment on how
it should work, but I _am_ currently working in this area - I am
providing non-blocking versions of the connect statements, as discussed
on the interfaces list a couple of weeks ago. In fact, it is pretty
much done, apart from a tidy-up, documentation, and testing. I don't
see any point in two people hammering away at the same code - it will
only make work when we try to merge again - so perhaps I should
implement what ever is decided - I don't mind doing so. However, if I
didn't get it done this weekend it would have to be mid-to-late
September, since I'm going away. Would that be a problem for anyone?
I had noticed that the connect statements weren't thread-safe, but
was neither aware that that was a problem for anyone, nor inclined to
audit the whole of libpq for thread-safety, so I left it alone.
Ewan.
Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
Presumably we would want a C implementation to match the rest of our
environment
In which case one might want to consider the ORBit ORB from the GNOME
project. It's pure C, and is supposed to be quite small and fast, and
aims for full CORBA compliance---which I understand MICO and maybe TAO
don't quite achieve.
Mike.
On 4 Aug 1999, Michael Alan Dorman wrote:
Thomas Lockhart <lockhart@alumni.caltech.edu> writes:
Presumably we would want a C implementation to match the rest of our
environment
In which case one might want to consider the ORBit ORB from the GNOME
project. It's pure C, and is supposed to be quite small and fast, and
aims for full CORBA compliance---which I understand MICO and maybe TAO
don't quite achieve.
Actually, unless ORBit has recently changed, MICO is more compliant than
it is...last time we all looked into it, at least, that was the case...
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org