Multiple logical databases
I am working on an issue that I deal with a lot. There is, of course, a
standard answer, but maybe it is something to think about for PostgreSQL
9.0 or so. I think I finally understand what I have been fighting for a
number of years: when I have been grousing about PostgreSQL configuration,
this is what I have been fighting.
One of the problems with the current PostgreSQL design is that all the
databases operated by one postmaster server process are interlinked at
some core level. They all share the same system tables. If one database
becomes corrupt because of a disk failure or something similar, the whole
cluster is affected. If one db is REALLY REALLY huge and doesn't change,
and a few others are small and change often, pg_dumpall will spend most of
its time dumping the unchanging data.
Now, the answer, obviously, is to create multiple postgresql database
clusters and run postmaster for each logical group of databases, right?
That really is a fine idea, but....
Say, in psql, I do this: "\c newdb". It will only find the database if it
is in that logical group. If another postmaster is running, obviously,
psql doesn't know anything about it.
From the DB admin perspective, maybe there should be some hierarchical
structure to this. What if there were a program, maybe a special parent
"postmaster" process, I don't know, that started a list of child
postmasters based on some site config? The parent postmaster would hold
all the configuration parameters of the child postmaster processes, so
there would only be one postgresql.conf.
This also answers "how do we get PostgreSQL options into a database,"
because the parent postmaster only needs to bootstrap the others; it can
be configured to run lean and mean, and the "real" settings can be
inspected and changed at will. A trigger would send a HUP to child
postmasters when their settings change. The parent postmaster only needs
one connection for each child and one for admin, right?
Does anyone see this as useful?
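Purely as a straw man, and with every name, path, and port below made up,
the "site config" the parent postmaster reads might be as simple as:

    # hypothetical site config: one child postmaster per line
    # name    data directory     port
    geo       /srv/pgsql/geo     5433
    cddb      /srv/pgsql/cddb    5434
    web       /srv/pgsql/web     5435

The parent would start one child per line and keep a connection to each so
it can push settings changes (the HUP mentioned above).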
On Thu, 2 Feb 2006, Mark Woodward wrote:
Now, the answer, obviously, is to create multiple postgresql database
clusters and run postmaster for each logical group of databases, right?
That really is a fine idea, but....
Say, in psql, I do this: "\c newdb". It will only find the database if it
is in that logical group. If another postmaster is running, obviously,
psql doesn't know anything about it.
From the DB admin perspective, maybe there should be some hierarchical
structure to this. What if there were a program, maybe a special parent
"postmaster" process, I don't know, that started a list of child
postmasters based on some site config? The parent postmaster would hold
all the configuration parameters of the child postmaster processes, so
there would only be one postgresql.conf.
This also answers "how do we get PostgreSQL options into a database,"
because the parent postmaster only needs to bootstrap the others; it can
be configured to run lean and mean, and the "real" settings can be
inspected and changed at will. A trigger would send a HUP to child
postmasters when their settings change. The parent postmaster only needs
one connection for each child and one for admin, right?
Does anyone see this as useful?
Not as described above, no. Perhaps with a more concrete plan that
actually talks about these things in more detail. For example, you posit
the \c thing as an issue; I don't personally agree, but you also don't
address it with a solution.
On Thu, 2 Feb 2006, Mark Woodward wrote:
Now, the answer, obviously, is to create multiple postgresql database
clusters and run postmaster for each logical group of databases, right?
That really is a fine idea, but....
Say, in psql, I do this: "\c newdb". It will only find the database if it
is in that logical group. If another postmaster is running, obviously,
psql doesn't know anything about it.
From the DB admin perspective, maybe there should be some hierarchical
structure to this. What if there were a program, maybe a special parent
"postmaster" process, I don't know, that started a list of child
postmasters based on some site config? The parent postmaster would hold
all the configuration parameters of the child postmaster processes, so
there would only be one postgresql.conf.
This also answers "how do we get PostgreSQL options into a database,"
because the parent postmaster only needs to bootstrap the others; it can
be configured to run lean and mean, and the "real" settings can be
inspected and changed at will. A trigger would send a HUP to child
postmasters when their settings change. The parent postmaster only needs
one connection for each child and one for admin, right?
Does anyone see this as useful?
Not as described above, no. Perhaps with a more concrete plan that
actually talks about these things in more detail. For example, you posit
the \c thing as an issue; I don't personally agree, but you also don't
address it with a solution.
While I understand that it is quite a vague suggestion, I guess I was
brainstorming more than detailing an actual set of features.
My issue is this (and this is NOT a slam on PostgreSQL): I have a number
of physical databases on one machine on ports 5432, 5433, and 5434, all
running the same version and, in fact, the same installation of PostgreSQL.
Even though they run on the same machine, run the same version of the
software, and are used by the same applications, they have NO
interoperability. For now, let's just accept that they need to be on
separate physical clusters because some need to be able to be started and
stopped while others need to remain running; there are other reasons, but
one reason will suffice for the discussion.
From an administration perspective, a single point of admin would seem
like a logical and valuable objective, no?
Beyond just the admin advantages, the utilities could be modified to
handle a root server that redirects to child servers. The psql program,
when handling a "\c" command, would query the root server to find the
child server and then connect to that.
libpq could also be modified to handle this without changing the
applications.
The child postmasters would update the root postmaster when a DB is
created or dropped, to keep it up to date. Conflicts between two children
could be managed by some sort of first-come-first-served rule, by
disallowing creation of a duplicate name, or by some other method.
So, conn = connect("host=localhost dbname=mydb"); would connect to the root
server, find the actual server, and then connect to it, completely hiding
the different physical databases and creating one very large logical
install.
Perhaps this can even be written to include large scale clustering. Who
knows?
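Even without any server-side support, the lookup step could be approximated
from the shell today. This is only an untested sketch, and the registry file
and its format are entirely made up:

    #!/bin/sh
    # find which local cluster holds a given database and connect to it
    # /etc/pg_registry is a made-up file mapping "dbname port", one per line
    db="$1"
    port=$(awk -v db="$db" '$1 == db { print $2 }' /etc/pg_registry)
    if [ -z "$port" ]; then
        echo "database $db is not registered" >&2
        exit 1
    fi
    exec psql -h localhost -p "$port" -d "$db"

The real proposal, of course, is that libpq would do this lookup against the
root postmaster instead of against a flat file.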
On Thu, Feb 02, 2006 at 02:05:03PM -0500, Mark Woodward wrote:
My issue is this (and this is NOT a slam on PostgreSQL): I have a number
of physical databases on one machine on ports 5432, 5433, and 5434, all
running the same version and, in fact, the same installation of PostgreSQL.
One way of achieving this would be to allow the PGHOST and/or PGPORT
variables to be lists; when you connect, it tries each combination
until it finds one that works. Maybe not as clean but a lot easier to
implement.
Unless of course you want "psql -l" to list all databases in all
clusters...
I think it would be better to put the intelligence into libpq rather
than trying to create more servers...
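As a rough shell approximation of the list idea (untested, and assuming
three local clusters on 5432-5434):

    #!/bin/sh
    # try each port in turn until one cluster accepts a connection for the db
    db="$1"
    for port in 5432 5433 5434; do
        if psql -h localhost -p "$port" -d "$db" -c 'SELECT 1' >/dev/null 2>&1; then
            exec psql -h localhost -p "$port" -d "$db"
        fi
    done
    echo "database $db not found on any port" >&2
    exit 1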
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.
Mark,
Even though they run on the same machine, run the same version of the
software, and are used by the same applications, they have NO
interoperability. For now, let's just accept that they need to be on
separate physical clusters because some need to be able to be started and
stopped while others need to remain running; there are other reasons,
but one reason will suffice for the discussion.
Well, to answer your original question, I personally would not see your
general idea as useful at all. I admin 9 or 10 PostgreSQL servers
currently and have never run across a need, or even a desire, to do what
you are doing.
In fact, if there's any general demand, it's to go the opposite way:
patches to lock down the system tables and prevent switching databases to
support ISPs and other shared-hosting situations.
For an immediate solution to what you are encountering, have you looked at
pgPool?
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
Mark Woodward wrote:
My issue is this (and this is NOT a slam on PostgreSQL): I have a number
of physical databases on one machine on ports 5432, 5433, and 5434, all
running the same version and, in fact, the same installation of PostgreSQL.
Even though they run on the same machine, run the same version of the
software, and are used by the same applications, they have NO
interoperability. For now, let's just accept that they need to be on
separate physical clusters because some need to be able to be started and
stopped while others need to remain running; there are other reasons, but
one reason will suffice for the discussion.
Hmmm - do you really need to start and stop them? Or are you just doing
that to forbid user access whilst doing data loads etc.?
If so, then you might get more buy-in by requesting enhancements that
work with the design of Pg a little more (or I hope they do anyway....), e.g.:
1/ Enable/disable (temporarily) user access to individual databases via
a simple admin command (though 'ALTER DATABASE xxx CONNECTION LIMIT 0' will
suffice if you do loads with a superuser role -- see the sketch after this
list).
2/ Restrict certain users to certain databases via simple admin commands
(editing pg_hba.conf is not always convenient or possible).
3/ Make cross-db relation references a little more transparent (e.g.
maybe introduce SYNONYM for this).
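For 1/, a minimal sketch of a load done that way, assuming the load runs as
a superuser (which is exempt from the limit) and with bulk_load.sql standing
in for whatever the load actually is:

    psql -d postgres -c "ALTER DATABASE bigdb CONNECTION LIMIT 0"   # lock out ordinary users
    psql -U postgres -d bigdb -f bulk_load.sql                      # superusers can still connect
    psql -d postgres -c "ALTER DATABASE bigdb CONNECTION LIMIT -1"  # -1 = no limit again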
Other related possibilities come to mind, like being able to segment the
buffer cache on a database level (e.g. bigdb gets 90% of the shared
buffers.... not 100%, as I want to keep smalldb's tables cached always....).
Cheers
Mark
Mark Woodward wrote:
From an administration perspective, a single point of admin would
seem like a logical and valuable objective, no?
I don't understand why you are going out of your way to separate your
databases (for misinformed reasons, it appears) and then want to design
a way to centrally control them so they can all fail together.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Mark Woodward wrote:
From an administration perspective, a single point of admin would
seem like a logical and valuable objective, no?
I don't understand why you are going out of your way to separate your
databases (for misinformed reasons, it appears) and then want to design
a way to centrally control them so they can all fail together.
Oh come on, "misinformed"? Is that really called for?
Think about a website that (and I have one) has the U.S.A. Streetmap
database, the freedb CD database, and a slew of sites based on phpbb and
drupal.
Maybe one should put them all in one database cluster, but...
The street database is typically generated and QAed in the lab. It is then
uploaded to the server. It has many millions of rows and about a half
dozen indexes. To dump and reload takes almost a day.
Compressing the DB and uploading it to the site, uncompressing it,
stopping the current PostgreSQL process, swapping the data directory, and
restarting it can be done in about an hour. One cannot do this if the
street map database is part of the standard database cluster. The same
thing happens with the freedb database.
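Concretely, the swap is nothing more exotic than something like this (paths
made up, and only a sketch):

    pg_ctl -D /data/streetmap -m fast stop
    mv /data/streetmap /data/streetmap.old
    tar xzf streetmap_cluster.tar.gz -C /data   # unpacks a fresh /data/streetmap built in the lab
    pg_ctl -D /data/streetmap start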
Unless you can tell me how to insert live data and indexes into a cluster
without having to reload the data and recreate the indexes, then I hardly
think I am "misinformed." The ad hominem attack wasn't necessary.
I have no problem with disagreement, but I take exception to insult.
If no one sees managing multiple physical database clusters as one
logical cluster as something worth doing, then so be it. I have a
practical example of a valid reason why this would make PostgreSQL easier
to work with. Yes, there are workarounds. Yes, it is workable as things
stand.
It is just that it could be better. As I mentioned earlier, I have been
dealing with this sort of problem for a number of years now, and I think
this is the "cool" solution to the problem.
Mark Woodward wrote:
...
Unless you can tell me how to insert live data and indexes into a cluster
without having to reload the data and recreate the indexes, then I hardly
think I am "misinformed." The ad hominem attack wasn't necessary.
I see you had a use case for something like pg_diff and pg_patch ;)
...
If no one sees managing multiple physical database clusters as one
logical cluster as something worth doing, then so be it. I have a
practical example of a valid reason why this would make PostgreSQL easier
to work with. Yes, there are workarounds. Yes, it is workable as things
stand.
I don't see your problem, really ;)
1) if you have very big and very heavily loaded databases, you often have
them on different physical boxes anyway
2) you can run any number of postmasters on the same box - just have
them listen on different ip:port combinations.
Now to the management - you say cddb and geodb are managed off host.
So they are not managed on the live server and so you don't need to
switch your psql console to them.
And yeah, it's really not a problem to quit psql and connect
to a different server anyway :-)
If you don't like to type -p otherport, you can either create
aliases with all the arguments or use something like pgadmin3,
which enables you to easily switch from database to database,
from host to host as you like.
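For example (ports and database names made up):

    alias psql_geo='psql -h localhost -p 5433 geodb'
    alias psql_cddb='psql -h localhost -p 5434 cddb'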
Now is there any use case I have missed which you would still
like to have addressed?
Kind regards
Tino Wildenhain
Josh Berkus wrote:
Mark,
Even though they run on the same machine, run the same version of the
software, and are used by the same applications, they have NO
interoperability. For now, let's just accept that they need to be on
separate physical clusters because some need to be able to be started and
stopped while others need to remain running; there are other reasons,
but one reason will suffice for the discussion.
For an immediate solution to what you are encountering, have you looked at
pgPool?
I agree with Josh - pgpool sounds like the place to start with this.
That's got to be the easiest place to add some sort of "list all" /
"switch to db" functionality. It also means you're not *forced* to have
only one version of PG, or have them all on the same machine.
--
Richard Huxton
Archonet Ltd
Mark Woodward wrote:
...
Unless you can tell me how to insert live data and indexes into a cluster
without having to reload the data and recreate the indexes, then I hardly
think I am "misinformed." The ad hominem attack wasn't necessary.
I see you had a use case for something like pg_diff and pg_patch ;)
...
If no one sees managing multiple physical database clusters as one
logical cluster as something worth doing, then so be it. I have a
practical example of a valid reason why this would make PostgreSQL easier
to work with. Yes, there are workarounds. Yes, it is workable as things
stand.
I don't see your problem, really ;)
1) if you have very big and very heavily loaded databases, you often have
them on different physical boxes anyway
2) you can run any number of postmasters on the same box - just have
them listen on different ip:port combinations.
Now to the management - you say cddb and geodb are managed off host.
So they are not managed on the live server and so you don't need to
switch your psql console to them.
And yeah, it's really not a problem to quit psql and connect
to a different server anyway :-)
If you don't like to type -p otherport, you can either create
aliases with all the arguments or use something like pgadmin3,
which enables you to easily switch from database to database,
from host to host as you like.
Now is there any use case I have missed which you would still
like to have addressed?
I don't, as it happens, have these databases on different machines, but
come to think of it, maybe it doesn't matter.
The "port" aspect is troubling, it isn't really self documenting. The
application isn't psql, the applications are custom code written in PHP
and C/C++.
Like I said, in this thread of posts, yes, there are ways of doing this,
and I've been doing it for years. It is just one of the rough edges that I
think could be smoother.
(in PHP)
pg_connect("dbname=geo host=dbserver");
could connect and query dbserver; if the db is not on it, consult a
database of known servers, find geo, and use that information to connect.
It sounds like a simple thing, for sure, but to be useful, there needs to
be buy in from the group otherwise it is just some esoteric hack.
The point is that I have been working with this sort of "use case" for a
number of years, and being able to represent multiple physical databases
as one logical db server would make life easier. It was a brainstorm I had
while I was setting up this sort of system for the [n]th time.
For my part, I have tried to maintain my own change list for PostgreSQL in
the past, but it is a pain. The main source changes too frequently to keep
up and in the end is just another project to maintain.
Using the "/etc/hosts" file or DNS to maintain host locations for is a
fairly common and well known practice, but there is no such mechanism for
"ports." The problem now becomes a code issue, not a system administration
issue.
If one writes the code for their website to use a generic host name, say,
"dbserver," then one can easily test system changes locally and push the
code to a live site. The only difference is the host name. When a port is
involved, there is no systemic way to represent it to the operating
system, so it must therefore be part of the code. As part of the code, it
must reside in a place where the code has access, and must NOT be pushed
with the rest of the site.
Having some mechanism to deal with this would be cleaner IMHO.
"Mark Woodward" <pgsql@mohawksoft.com> writes:
The point is that I have been working with this sort of "use case" for a
number of years, and being able to represent multiple physical databases
as one logical db server would make life easier. It was a brainstorm I had
while I was setting up this sort of system for the [n]th time.
It sounds like all that would be needed is a kind of "smart
proxy" -- it has a list of database clusters on the machine and the
databases they contain, and speaks enough of the protocol to recognize
the startup packet and reroute it internally to the right cluster.
I've heard 'pgpool' mentioned here; from a quick look at the docs it
looks similar but not quite what you want.
So your databases would listen on 5433, 5434, etc and the proxy would
listen on 5432 and route everything properly. If a particular cluster
is not up, the proxy could just error out the connection.
Hmm, that'd be fun to write if I ever find the time...
-Doug
On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
Using the "/etc/hosts" file or DNS to maintain host locations for is a
fairly common and well known practice, but there is no such mechanism for
"ports." The problem now becomes a code issue, not a system administration
issue.
Actually, there is, it's in /etc/services and the functions are
getservbyname and getservbyport. I wonder if it'd be possible to have
psql use this if you put a string in the port part of the connect
string.
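For example (service names made up; since psql can't resolve them itself
today, the lookup here is done in the shell with getent on a glibc system):

    # additions to /etc/services
    pg_main   5432/tcp
    pg_geo    5433/tcp
    pg_cddb   5434/tcp

    # resolve the name to a port number and connect
    port=$(getent services pg_geo | awk '{ split($2, a, "/"); print a[1] }')
    psql -h dbserver -p "$port" -d geodb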
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.
On Feb 3, 2006, at 08:05, Mark Woodward wrote:
Using the "/etc/hosts" file or DNS to maintain host locations for is a
fairly common and well known practice, but there is no such
mechanism for
"ports." The problem now becomes a code issue, not a system
administration
issue.
What if you assigned multiple IPs to a machine, then used ipfw (or
something) to forward connections to port 5432 for each IP to the
proper IP and port?
You could use /etc/hosts or DNS to give each IP a host name, and use
it in your code.
For example (this only does forwarding for clients on localhost, but
you get the idea), you could set up:
Host       IP:port          Forwards to
--------   --------------   -----------------
db_one     127.0.1.1:5432   192.168.1.5:5432
db_two     127.0.1.2:5432   192.168.1.6:5432
db_three   127.0.1.3:5432   192.168.1.6:5433
db_four    127.0.1.4:5432   16.51.209.8:8865
You could reconfigure the redirection by changing the ipfw
configuration -- you wouldn't change your client code at all. It
would continue to use a connection string of "... host=db_one", but
you'd change 127.0.1.1:5432 to forward to the new IP/port.
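The /etc/hosts side of it would just be (same made-up addresses as the
table above):

    127.0.1.1   db_one
    127.0.1.2   db_two
    127.0.1.3   db_three
    127.0.1.4   db_four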
Or use pgpool. :)
- Chris
Mark Woodward wrote:
Oh come on, "misinformed"? Is that really called for?
Claiming that all databases share the same system tables is misinformed,
with no judgement passed.
The street database is typically generated and QAed in the lab. It is
then uploaded to the server. It has many millions of rows and about a
half dozen indexes. To dump and reload takes almost a day.
There is work happening on speeding up bulk loads.
Unless you can tell me how to insert live data and indexes into a
cluster without having to reload the data and recreate the indexes,
I think this sort of thing can be worked on. VACUUM FREEZE and some
tool support could make this happen.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
"Mark Woodward" <pgsql@mohawksoft.com> writes:
The point is that I have been working with this sort of "use case" for a
number of years, and being able to represent multiple physical databases
as one logical db server would make life easier. It was a brainstorm I had
while I was setting up this sort of system for the [n]th time.
It sounds like all that would be needed is a kind of "smart
proxy" -- it has a list of database clusters on the machine and the
databases they contain, and speaks enough of the protocol to recognize
the startup packet and reroute it internally to the right cluster.
I've heard 'pgpool' mentioned here; from a quick look at the docs it
looks similar but not quite what you want.
So your databases would listen on 5433, 5434, etc and the proxy would
listen on 5432 and route everything properly. If a particular cluster
is not up, the proxy could just error out the connection.
Hmm, that'd be fun to write if I ever find the time...
It is similar to a proxy, yes, but that is just part of it. The setup and
running of these systems should all be managed.
"Mark Woodward" <pgsql@mohawksoft.com> writes:
It is similar to a proxy, yes, but that is just part of it. The setup and
running of these systems should all be managed.
All that requires is some scripts that wrap pg_ctl and bring the right
instances up and down, perhaps with a web interface on top of them. I
don't see any need to put that functionality in the proxy.
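Something like this is really all the management layer needs to be
(untested sketch; cluster names and paths invented):

    #!/bin/sh
    # pgcluster.sh <name> <start|stop|reload> -- thin wrapper around pg_ctl
    case "$1" in
      geo)  PGDATA=/srv/pgsql/geo;  PGPORT=5433 ;;
      cddb) PGDATA=/srv/pgsql/cddb; PGPORT=5434 ;;
      *)    echo "unknown cluster: $1" >&2; exit 1 ;;
    esac
    pg_ctl -D "$PGDATA" -o "-p $PGPORT" "$2"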
-Doug
pgsql@mohawksoft.com ("Mark Woodward") writes:
The "port" aspect is troubling, it isn't really self
documenting. The application isn't psql, the applications are custom
code written in PHP and C/C++.
Nonsense. See /etc/services
Using the "/etc/hosts" file or DNS to maintain host locations for is
a fairly common and well known practice, but there is no such
mechanism for "ports." The problem now becomes a code issue, not a
system administration issue.
Nonsense. See /etc/services
If one writes the code for their website to use a generic host name,
say, "dbserver," then one can easily test system changes locally and
push the code to a live site. The only difference is the host
name. When a port is involved, there is no systemic way to represent
it to the operating system, so it must therefore be part of the
code. As part of the code, it must reside in a place where the code has
access, and must NOT be pushed with the rest of the site.
Having some mechanism to deal with this would be cleaner IMHO.
I'm sure it would be, that's why there has been one, which has been in
use since the issuance of RFC 349 by Jon Postel back in May of 1972.
The mechanism is nearly 34 years old.
Note that RFCs are no longer used to issue port listings, as per RFC
3232, back in 2002. Now, IANA manages a repository of standard port
numbers, commonly populated into /etc/services.
<http://www.iana.org/assignments/port-numbers>
For customizations, see:
% man 5 services
--
(format nil "~S@~S" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sgml.html
"Motto for a research laboratory: What we work on today, others will
first think of tomorrow." -- Alan J. Perlis
On Feb 3, 2006, at 6:47 AM, Chris Campbell wrote:
On Feb 3, 2006, at 08:05, Mark Woodward wrote:
Using the "/etc/hosts" file or DNS to maintain host locations for
is a
fairly common and well known practice, but there is no such
mechanism for
"ports." The problem now becomes a code issue, not a system
administration
issue.What if you assigned multiple IPs to a machine, then used ipfw (or
something) to forward connections to port 5432 for each IP to the
proper IP and port?
If he had multiple IPs, couldn't he just make them all listen only on
one specific IP (instead of '*') and just use the default port?
On Feb 3, 2006, at 12:43, Rick Gigger wrote:
If he had multiple IPs, couldn't he just make them all listen only
on one specific IP (instead of '*') and just use the default port?
Yeah, but the main idea here is that you could use ipfw to forward
connections *to other hosts* if you wanted to. Basically working like
a proxy.
- Chris