Any Plans for cross database queries on the same server?
Dblink is nice, but should it really be needed for databases on the same
physical server?
What would be cool is to allow a double dot notation i.e.
database1..schema1.table1
Just an idea. Comments?
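As a sketch, here is roughly what the proposed notation would look like next to the contrib/dblink workaround available today (the table and column names are invented for illustration):

```sql
-- Proposed: qualify a table with its database using double-dot notation
-- (hypothetical syntax; NOT supported by PostgreSQL)
SELECT t.id, t.name
FROM database1..schema1.table1 AS t;

-- Today's workaround with contrib/dblink: open a connection to the other
-- database on the same server, and describe the result's row type inline
SELECT t.id, t.name
FROM dblink('dbname=database1',
            'SELECT id, name FROM schema1.table1')
     AS t(id integer, name text);
```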
--
Tony
This has been discussed about ten thousand times, and the answer is
still no.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Peter Eisentraut wrote:
This has been discussed about ten thousand times, and the answer is
still no.
Why? Seems to me that if it was discussed that much, it must be a very
sought-after feature.
How come it's not on the TODO list for the future, at least?
Is it because of some limitation of the core engine or something?
--
Tony
On 30.01.2007, at 12:11, Tony Caduto wrote:
Why? Seems to me if it was discussed that much it must be a very
sought after feature.
How come it's not on the TODO list for the future at least?
Is it because of some limitation of the core engine or something?
http://www.postgresql.org/docs/faqs.FAQ.html#item4.17
I guess nobody has a real interest in that because, if you really
need it, there are workarounds ... E.g., I do a lot of cross-database
queries all day with my apps. It's just handled by the
app server ...
cug
I always assumed the general argument is that if you need to query
different databases on the same server with the same application, they
ought not to be separate databases, because they're clearly related data.
It's kinda like asking "why isn't there a way to do an exactly-one to
exactly-one relationship between tables?". Well, if one A always means
one B and one B always means one A, shouldn't they be in the same
table already?
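The alternative the list usually suggests is schemas within a single database, which already allow ordinary qualified joins. A minimal sketch (schema and table names invented):

```sql
-- Instead of two databases, use two schemas in one database;
-- cross-schema joins then work with plain dotted qualification.
CREATE SCHEMA app1;
CREATE SCHEMA app2;

CREATE TABLE app1.customers (id integer PRIMARY KEY, name text);
CREATE TABLE app2.orders    (id integer PRIMARY KEY,
                             customer_id integer,
                             total numeric);

SELECT c.name, o.total
FROM app1.customers AS c
JOIN app2.orders    AS o ON o.customer_id = c.id;
```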
--
Brandon Aiken
CS/IT Systems Engineer
On 01/30/07 13:33, Brandon Aiken wrote:
I always assumed the general argument is if you need to query different
databases on the same server with the same application, they ought not
to be separate databases because they're clearly related data.
Just because they are related doesn't mean that it's always wise to
lump it all in the same database, mainly for scalability and
performance reasons.
Our system looks like this set of databases:
REF database has all customer "low-volatility" data.
TRANS1 - Transaction data for accounts 0 - 999999.
TRANS2 - Transaction data for accounts 1000000 - 1999999.
TRANS3 - Transaction data for accounts 2000000 - 2999999.
etc.
This allows us to back up all the databases at the same time, and to
scale horizontally as systems reach capacity.
Currently, "joins" between REF & the TRANSx databases are handled by
the app server, but being able to attach to both databases and do
distributed joins would really simplify our apps.
Of course, reducing the complexity of our apps means increasing the
complexity down to the RDBMS... And thus the balance of the
Universe is maintained.
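Given the 1,000,000-accounts-per-shard ranges above, the routing an app server has to do can be computed mechanically; a sketch of the mapping (not from the original post):

```sql
-- One way an app server might pick the TRANSx database for an account:
-- integer division by the shard width follows the ranges described above.
SELECT account_id,
       'TRANS' || (account_id / 1000000 + 1) AS target_db
FROM (VALUES (42), (1500000), (2999999)) AS v(account_id);
-- 42 -> TRANS1, 1500000 -> TRANS2, 2999999 -> TRANS3
```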
It's kinda like asking "why isn't there a way to do an exactly-one to
exactly-one relationship between tables?". Well, if one A always means
one B and one B always means one A, shouldn't they be in the same
table already?
Vertical partitioning. If it's a large table, and certain columns are
"hot" while others are not accessed so much, then partitioning the
table would speed up access to the hot columns.
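A minimal sketch of that kind of split, with invented column names: the hot columns stay in a narrow table, and the rarely-read wide columns move to a one-to-one companion table.

```sql
-- Hot columns in a narrow, cache-friendly table
CREATE TABLE account_hot (
    id      integer PRIMARY KEY,
    balance numeric NOT NULL
);

-- Cold, wide columns in a 1:1 companion table keyed by the same id
CREATE TABLE account_cold (
    id        integer PRIMARY KEY REFERENCES account_hot (id),
    signup_ip text,
    notes     text
);

-- Most queries touch only account_hot; the wide columns
-- are joined in only when actually needed.
SELECT h.balance, c.notes
FROM account_hot AS h
LEFT JOIN account_cold AS c ON c.id = h.id
WHERE h.id = 42;
```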
Ron Johnson wrote:
be separate databases because they're clearly related data.
Just because they are related, doesn't mean that it's always wise to
lump it all in the same database. Mainly for scalability and
performance reasons.
I would tend to agree; there are numerous times when being able to do
a cross-database query without the hassle of DBlink
would be extremely handy. I could also see it being valuable in a
data-warehouse type situation.
I know it can be done in M$ SQL Server using the .. notation, and I bet
you can do it in DB2 and Oracle.
You can even do it in MySQL; there, it's their way of implementing
schemas.
Considering all these other DBs can do it, doesn't it make sense to at
least put it on the radar for PostgreSQL?
Just my 2 cents....
--
Tony
Tony Caduto <tony_caduto@amsoftwaredesign.com> writes:
Considering all these other DBs can do it, doesn't it make sense to at
least put it on the radar for Postgresql?
It's already in the TODO list.
regards, tom lane
It's interesting that this is yet another instance of the question of
where exactly you want to place your business logic. Do you do as much
as you can on your SQL server, or do you bias it towards your client
application? It's obvious that you can do cross-database linking in
your application layer, but if you want to keep your logic on the
server as much as possible, you need some form of cross-database
linking at the server level. It's sort of a matter of taste, but there
are lots of people who like to keep their logic on the server, or at
least within SQL statements, so there's probably a good-sized market
that you're not reaching if you ignore it.
Mark Walker wrote:
It's sort of a matter of taste, but there are lots of people who like
to keep their logic on the server, or at least within SQL statements,
so there's probably a good-sized market that you're not reaching if
you ignore it.
That is a good point. I and many developers I know like to keep all
the business logic on the server in stored procedures and functions,
and having this ability as a native part of PostgreSQL would be a huge
advantage.
DBlink is a decent workaround, but it becomes a pain having to wrap
everything with the dblink syntax, plus there is a little bit of
overhead involved in creating a connection over TCP/IP to run a query
on the same server. DBlink is great when you need to connect to a
different server, though.
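One common way to hide the wrapping Tony describes is to bury the dblink call in a view (a sketch; all names invented), so callers query it like any local table:

```sql
-- The dblink syntax is written once, inside the view definition.
CREATE VIEW remote_customers AS
SELECT *
FROM dblink('dbname=database1',
            'SELECT id, name FROM schema1.table1')
     AS t(id integer, name text);

-- Callers no longer see dblink at all:
SELECT name FROM remote_customers WHERE id = 1;
```

Note this only hides the syntax; each query through the view still opens a fresh connection, so the per-query connection overhead mentioned above remains.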
Later,
--
Tony
On 01/30/07 14:41, Tony Caduto wrote:
[snip]
True distributed transactions would be handy, though.
Many of our design decisions might have been different had they been
available.
On 01/30/07 15:55, Richard Troy wrote:
On 01/30/07 14:41, Tony Caduto wrote:
Mark Walker wrote:
[snip]
These days with good open source choices, things are a bit
different, but that doesn't mean it's always good to go hog wild
with any particular tool set just because you can. Sometimes
people over-use engine-side features, forgetting that there are
nearly always more cycles available on clients than servers...
Call me Dinosaur Ron, but I think that CICS and the 3270 terminal
were a pair of the best IT inventions ever. With them, a modern
system can support 10s of thousands of concurrent online users.
On 01/30/07 14:41, Tony Caduto wrote:
Mark Walker wrote:
[snip]
"Business logic on the server" - heh. If you want to know where all such
ideas came from, think cynically.
...Originally, back in the day, the goal of creating "stored procedures",
in particular, and aiding and abetting features like "triggers", had
nothing to do with performance or clean architectures, rather the DBMS
vendors figured this was a good way to trap customers into only being able
to use their database engine. The more DBMS-engine-unique features a
customer used the more expensive it would be for them to switch to another
DBMS engine. The lack of standards regarding such "business logic" is
precisely because the vendors didn't want portability.
'Course, in a mature market (like the RDBMS) this mostly benefits the
largest vendors as gaining market share requires customers to convert. And
so today we have companies like ANTs Software absorbing the costs of
porting such features to their ANTs Data Server - last I heard they'd do
the conversion for free.
These days with good open source choices, things are a bit different, but
that doesn't mean it's always good to go hog wild with any particular tool
set just because you can. Sometimes people over-use engine-side features,
forgetting that there are nearly always more cycles available on clients
than servers...
Richard
--
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
rtroy@ScienceTools.com, http://ScienceTools.com/
LOL, I remember those days. "Uh, can you hold on? My computer just
went down." or "you need to fill out form 1203-B, send us $25 and we'll
get you the information you need in six weeks." Just kidding, but
certainly reliability standards and information demands are much higher
these days, aren't they?
Ron Johnson wrote:
Call me Dinosaur Ron, but I think that CICS and the 3270 terminal
were a pair of the best IT inventions ever. With them, a modern
system can support 10s of thousands of concurrent online users.
On 01/30/07 16:35, Mark Walker wrote:
LOL, I remember those days. "Uh, can you hold on? My computer just
went down." or "you need to fill out form 1203-B, send us $25 and we'll
get you the information you need in six weeks." Just kidding, but
certainly reliability standards and information demands are much higher
these days, aren't they?
Which is more reliable, a mainframe where the app and database runs
on the host, or an N-tier client-server system where the RDBMS and
each tier are created by a different company, and the client is
thick and MS Windows?
On 1/30/07, Peter Eisentraut <peter_e@gmx.net> wrote:
This has been discussed about ten thousand times, and the answer is
still no.
How did we go from this?
To this:
It's already in the TODO list.
regards, tom lane
Perhaps we should be more diplomatic in our approach to responding?
I'm going to assume that, at some point, I can look forward to this feature
in PostgreSQL.
-- Jeff W.
On Tue, 30 Jan 2007, Mark Walker wrote:
LOL, I remember those days. "Uh, can you hold on? My computer just
went down." or "you need to fill out form 1203-B, send us $25 and we'll
get you the information you need in six weeks." Just kidding, but
certainly reliability standards and information demands are much higher
these days, aren't they?
"Reliability standards ... higher these days?"
-har-har-har!- That's a good one!
Sure, in terms of bits moved/processed between hardware failures, things
have much improved, but I can't help but think what a joke it is that
favored operating systems think it's OK to run out of memory for their
own activity and randomly kill processes so they don't hang! HAH! Some
reliability. And people think this is a Good Thing (tm) because 1% of
overhead was saved!
<rant>
Sure wish the Open Source OS people would get a clue; paying a percent or
so for reliability pays for itself thousands of times over and most
people, if knowledgeable, would choose to spend the overhead to have a
system that really is reliable.
</rant>
Richard
--
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
rtroy@ScienceTools.com, http://ScienceTools.com/
I don't know. My customers expect 24/7 reliability. They expect to be
able to access their info anywhere in the world over a variety of
different devices. I can remember times when people would just go home
because computer networks were down. I haven't seen that happen in a
long time.
Maybe that's just my experience with my customers. I have seen signs of
dysfunctional computer systems lately. I was in a fast food restaurant
in San Francisco a few months back and they were manually taking
orders. I think the only reason they stayed open was because the owner
was there. Last summer a McDonald's in Paris next to the hotel my
family was staying at shut down because their computer system was down.
It ticked me off because we ended up eating at some pricey cafe next
door. I guess I'm a typical dumb American, traveling all the way to
Paris to eat at McDonald's.
Richard Troy wrote:
[snip]
Mark Walker <furface@omnicode.com> writes:
Maybe that's just my experience with my customers. I have seen signs of
dysfunctional computer systems lately. I was in a fast food restaurant
in San Francisco a few months back and they were manually taking
orders. I think the only reason they stayed open was because the owner
was there.
The local McD's seemed completely unable to cope the other day when
their computer was down. If they had any manual procedures in place,
the cashier my wife was dealing with was unaware of them ... she ended
up going somewhere else.
regards, tom lane
On Tue, 30 Jan 2007, Mark Walker wrote:
I don't know. My customers expect 24/7 reliability. They expect to be
able to access their info anywhere in the world over a variety of
different devices. I can remember times when people would just go home
because computer networks were down. I haven't seen that happen in a
long time.
...Back in 1986, Cheryl Healy and I took on running Polaroid's corporate
systems "24 X 7" - and we worked hard to make it "24 X 7 X 365.24".
Shortly thereafter - while still working with Cheryl, Angel Vila, Chris
Boerner and I took on running Bellcore's 800 telephone network full time
also - our success was measured by how few minutes/seconds there was any lost
business at all on an annual basis. (Bellcore was previously known as AT&T
Bell Laboratories.) If you made an 800 number based call from '86 to '89,
the systems I managed for Bellcore helped place that call. ... I could go
on. I've worked in the "always up" community a long time now and have
worked with/for more corporations in this capacity than nearly anyone you
might find - mostly very large, well known companies.
My observation is that we have a real shortage of quality operating
systems today, and what few exist/remain don't enjoy much market share
because they're not based on Unix, so they're largely missing out on the
Open Source activity. What may be worse, young people who don't know any
better are sometimes told/taught not to bother with anything over five
years old as it's antiquated so they don't ever find out that things could
be better - and once were. (Example, anyone who thinks "man pages" are
great has obviously got a very limited experience from which to base their
opinion!) ... As a practical matter today we mostly have a choice of
Windows or some flavor of Unix, neither of which is great. That would be
very different in my opinion if only Unix didn't have this asinine view
that the choice between a memory management strategy that kills random
processes, and turning that off and accepting that your system hangs, is
a reasonable one, and that spending a measly percent of performance in
overhead to eliminate the problem is out of the question. Asinine, I
tell you.
Meanwhile, what operating systems ARE _today_ reliable choices upon
which to run your Postgres database engine?
...BTW, McDonalds in Paris?! -smile- Just make sure you order Freedom
Fries!
Richard
--
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
rtroy@ScienceTools.com, http://ScienceTools.com/