How ugly would this be? (ALTER DATABASE)
Hello,
One of the things we run into quite a bit is customers who are using
multiple databases when they should be using multiple schemas. I am sure
other consultants run into this as well. This gets even more difficult
as uptime requirements have become all but 100%. So my question is, what
would this take?
ALTER DATABASE foo LOCATION DATABASE bar SCHEMA baz?
Where, if we execute that command, database foo would move to schema baz
within database bar. I am fully aware of what it takes on the client side,
but structurally within postgres what would it take? Is it even reasonable?
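For reference, the client-side route today is roughly the following (a sketch
only, assuming the objects in foo live in the default "public" schema):

    -- in database foo: rename the schema so it will not collide with bar's
    ALTER SCHEMA public RENAME TO baz;

    -- then, from the shell, copy just that schema into bar:
    --   pg_dump --schema=baz foo | psql bar

    -- and once everything checks out, from some other connection:
    --   DROP DATABASE foo;

That dump-and-reload dance is exactly what is hard to sell on a system that
cannot take downtime.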
JD
--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should
not be surprised when they come back as Romans."
On Fri, Oct 24, 2014 at 2:06 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> One of the things we run into quite a bit is customers who are using
> multiple databases when they should be using multiple schemas. I am sure
> other consultants run into this as well. This gets even more difficult as
> uptime requirements have become all but 100%. So my question is, what would
> this take?
>
> ALTER DATABASE foo LOCATION DATABASE bar SCHEMA baz?
>
> Where, if we execute that command, database foo would move to schema baz
> within database bar. I am fully aware of what it takes on the client side,
> but structurally within postgres what would it take? Is it even reasonable?
What if the database contains more than one schema?
You could perhaps try to create a command that would move a schema
between two databases in the same cluster. It's fraught with
practical difficulties because a single backend can't be connected to
both databases at the same time, so how exactly do you make the
required catalog changes all in a single transaction? But if you
imagine that you have an infinite pool of top-notch PostgreSQL talent
with unbounded time to work on this problem and no other, I bet
somebody could engineer a solution.
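To make that concrete (table name made up): a fully qualified cross-database
reference is simply rejected today, so there is no obvious place for a single
transaction that touches the catalogs of both databases to run:

    -- while connected to database foo
    SELECT * FROM bar.baz.some_table;
    ERROR:  cross-database references are not implemented: "bar.baz.some_table"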
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 10/24/14, 1:28 PM, Robert Haas wrote:
> What if the database contains more than one schema?
>
> You could perhaps try to create a command that would move a schema
> between two databases in the same cluster. It's fraught with
> practical difficulties because a single backend can't be connected to
> both databases at the same time, so how exactly do you make the
> required catalog changes all in a single transaction? But if you
> imagine that you have an infinite pool of top-notch PostgreSQL talent
> with unbounded time to work on this problem and no other, I bet
> somebody could engineer a solution.
ISTM that the multiple-databases-per-backend issue is the huge hang-up here. Maybe there's some way that could be hacked around if you're just re-jiggering a bunch of catalog stuff (assuming you lock users out of both databases while you're doing that), but if you were going to go to that extent perhaps it'd be better to just support cross-database access in a single backend...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
On Fri, Oct 24, 2014 at 7:37 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> ISTM that the multiple-databases-per-backend issue is the huge hang-up here.
> Maybe there's some way that could be hacked around if you're just
> re-jiggering a bunch of catalog stuff (assuming you lock users out of both
> databases while you're doing that), but if you were going to go to that
> extent perhaps it'd be better to just support cross-database access in a
> single backend...
Good luck with that. It's probably as hard as or harder than making the
backend multi-threaded, which is itself harder than any project a
reasonable person will undertake any time in the foreseeable future.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 24 Oct 2014 20:28, "Robert Haas" <robertmhaas@gmail.com> wrote:
> You could perhaps try to create a command that would move a schema
> between two databases in the same cluster. It's fraught with
> practical difficulties because a single backend can't be connected to
> both databases at the same time, so how exactly do you make the
> required catalog changes all in a single transaction? But if you
> imagine that you have an infinite pool of top-notch PostgreSQL talent
> with unbounded time to work on this problem and no other, I bet
> somebody could engineer a solution.
I think the bigger problem is the dependent objects: things like data types
which might exist in both databases but with different oids.
If you simplify the problem to handle only tables and indexes, and only ones
that use built-in types and other system objects, then it seems doable, but
that creates a lot of pitfalls for users.
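A rough up-front check for that restriction could be a catalog query along
these lines (a sketch only; the schema name is made up):

    -- columns in schema "baz" whose types do not come from pg_catalog
    SELECT c.relname, a.attname, t.typname, tn.nspname AS type_schema
    FROM pg_class c
    JOIN pg_namespace n  ON n.oid = c.relnamespace
    JOIN pg_attribute a  ON a.attrelid = c.oid
                        AND a.attnum > 0 AND NOT a.attisdropped
    JOIN pg_type t       ON t.oid = a.atttypid
    JOIN pg_namespace tn ON tn.oid = t.typnamespace
    WHERE n.nspname = 'baz'
      AND c.relkind = 'r'
      AND tn.nspname <> 'pg_catalog';

Anything that query returns would have to be handled, or the move refused.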
I would do it in three steps, much like pg_upgrade:

1) copy the schema definitions to the new database and note the new oids and
   relfilenodes,
2) copy or move the data files over,
3) drop the old schema.

Each of those can be done in separate transactions or even backends -- the
intermediate states just have some empty tables in one of the schemas.
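Very roughly, with the file-copy step hand-waved and all names made up:

    -- step 1: after recreating the definitions in the target database
    -- (e.g. pg_dump --schema-only --schema=baz foo | psql bar), record the
    -- oid/relfilenode mapping on both sides:
    SELECT c.oid, c.relfilenode, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'baz';

    -- step 2: with both sides locked, copy or hard-link each data file from
    -- base/<source db oid>/<old relfilenode> to base/<target db oid>/<new
    -- relfilenode> (plus any extra segments and the fsm/vm forks), the way
    -- pg_upgrade's link mode avoids copying data.

    -- step 3: back in the source database
    DROP SCHEMA baz CASCADE;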
It's not clear to me what state the databases should be in during step 2
though. A simple lock would not be sufficient. Perhaps there needs to be
something like indisvalid for tables. That might be handy for pg_bulkload
as well.
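(The index flag I mean is the one CREATE INDEX CONCURRENTLY already uses; for
example, indexes left in that half-built state show up with

    SELECT indexrelid::regclass FROM pg_index WHERE NOT indisvalid;

A per-table analogue of that flag is pure speculation at this point.)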