Question: Can multiple PostgreSQL clusters on one server be reached through the standard port?
Hello all,
I am currently looking for a solution similar to Oracle Listener.
Current situation:
I have a system with multiple PostgreSQL clusters for different databases.
Each cluster is running on the same physical machine and is accessed through its own DNS alias and corresponding port.
I only have one single IP address available, and there is no possibility of obtaining additional addresses.
Here's an example:
DNS ALIAS Host
─> pgs1.server.net:5432 ─> PG_Cluster1@192.168.0.1:5432
─> pgs2.server.net:5433 ─> PG_Cluster2@192.168.0.1:5433
─> pgs3.server.net:5434 ─> PG_Cluster3@192.168.0.1:5434
─> pgs4.server.net:5435 ─> PG_Cluster4@192.168.0.1:5435
─> pgs5.server.net:5436 ─> PG_Cluster5@192.168.0.1:5436
Desired solution:
I still want to use the same system with different PostgreSQL clusters for the various databases.
These clusters are internally accessible through different ports.
However, I would like a service on the server to receive all external requests on port 5432 and forward them to the corresponding internal cluster based on the DNS alias.
It would also be desirable if this service could provide additional features like connection pooling or other functionalities.
Similar to a reverse proxy.
Here's an example:
DNS ALIAS HOST
─> pgs1.server.net:5432 ─┐ ┌──────────────────┬──> PG_Cluster1@localhost:5433
─> pgs2.server.net:5432 ─┤ │ ├──> PG_Cluster2@localhost:5434
─> pgs3.server.net:5432 ─┼─>─┤ 192.168.0.1:5432 ├──> PG_Cluster3@localhost:5435
─> pgs4.server.net:5432 ─┤ │ ├──> PG_Cluster4@localhost:5436
─> pgs5.server.net:5432 ─┘ └──────────────────┴──> PG_Cluster5@localhost:5437
Is there a solution for this, and what are the advantages or limitations that arise from it?
Thank you in advance for your suggestions and help.
Regards,
Michael
Hi
On 16.06.23 at 11:40, Brainmue wrote:
possible solutions:
* set up a firewall rule to forward the connections
* use a TCP proxy (nginx can do that)
* check PgBouncer
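For the TCP proxy option, a rough (untested) sketch of nginx's stream module, with placeholder ports: it forwards one listen port to one fixed backend at the TCP level.

```nginx
# nginx.conf (top level, alongside any http {} block)
stream {
    server {
        # the external port all clients use
        listen 5432;
        # one internal cluster; a plain TCP proxy cannot tell the
        # DNS aliases apart, since they all resolve to the same IP
        proxy_pass 127.0.0.1:5433;
    }
}
```

This only gives port-to-port forwarding; it does not look inside the PostgreSQL protocol.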
best regards
Thomas
On 16 June 2023 at 12:04, "Thomas Markus" <t.markus@proventis.net> wrote:
Hi
possible solutions:
* set up a firewall rule to forward the connections
* use a TCP proxy (nginx can do that)
* check PgBouncer
best regards
Thomas
Hello Thomas,
Thank you for your quick reply.
By "firewall", do you mean additional software?
As far as I can tell, iptables/netfilter cannot forward TCP packets based on the DNS alias name. Or is that possible?
I have the same problem with nginx. I have just looked through the documentation again, but I cannot find a way to decide which cluster to forward to based on the DNS alias.
Do you have an example for me?
We have already looked at PgBouncer, and it works, but unfortunately the authentication then has to be done in PgBouncer, which we would rather avoid.
Regards,
Michael
On Fri, 2023-06-16 at 09:40 +0000, Brainmue wrote:
I am currently looking for a solution similar to Oracle Listener.
Can you explain why? Perhaps there exists a good solution for the
underlying problem.
Yours,
Laurenz Albe
On 16 June 2023 at 14:13, "Laurenz Albe" <laurenz.albe@cybertec.at> wrote:
On Fri, 2023-06-16 at 09:40 +0000, Brainmue wrote:
I am currently looking for a solution similar to Oracle Listener.
Can you explain why? Perhaps there exists a good solution for the
underlying problem.
Yours,
Laurenz Albe
Hello Laurenz,
I can try.
We want to minimise the dependencies between an application and its PostgreSQL database.
The idea is that each application is given its DB alias, which it then uses as its connection string.
That way we can decide on the backend side which server the PostgreSQL database runs on,
and we can carry out moves very flexibly without informing the applications or requiring any changes on their side.
Of course, we could also permanently assign a fixed port to each application, but we have a lot of network segmentation with many firewalls in between,
and that would require a great deal of organisational effort.
That is the idea behind it.
Regards,
Michael
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
If you want to manage the connection strings centrally, you can use LDAP lookup:
https://www.postgresql.org/docs/current/libpq-ldap.html
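A connection service file might look like this (service names, ports, and database names here are hypothetical); the client then connects with just "service=app1" instead of a full connection string:

```ini
; ~/.pg_service.conf (per user) or pg_service.conf in the system
; configuration directory.  Hypothetical entries:
[app1]
host=pgs1.server.net
port=5433
dbname=app1db

[app2]
host=pgs1.server.net
port=5434
dbname=app2db
```

Moving a database to another server or port then only means editing this file, not the applications.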
Yours,
Laurenz Albe
On Fri, Jun 16, 2023 at 2:50 PM Laurenz Albe <laurenz.albe@cybertec.at>
wrote:
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
The problem with the service and password files is that passwords are stored in plain text, though.
Yes, there are restrictions on the permissions of the password file (on Linux); still, some
kind of encrypted token with an expiration date would be better, IMHO.
On 6/16/23 07:50, Laurenz Albe wrote:
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
Do JDBC and ODBC recognize .pg_service.conf? I know they don't recognize
.pgpass.
If you want to manage the connection strings centrally, you can use LDAP lookup:
https://www.postgresql.org/docs/current/libpq-ldap.html
Yours,
Laurenz Albe
--
Born in Arizona, moved to Babylonia.
On 16 June 2023 at 14:50, "Laurenz Albe" <laurenz.albe@cybertec.at> wrote:
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
If you want to manage the connection strings centrally, you can use LDAP lookup:
https://www.postgresql.org/docs/current/libpq-ldap.html
Yours,
Laurenz Albe
Thank you, I already know this solution, but LDAP is out of the question for us, and the file again means intervening on the client. And that is exactly what we do not want.
On Fri, 2023-06-16 at 14:49 +0000, Brainmue wrote:
On 16 June 2023 at 14:50, "Laurenz Albe" <laurenz.albe@cybertec.at> wrote:
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
If you want to manage the connection strings centrally, you can use LDAP lookup:
https://www.postgresql.org/docs/current/libpq-ldap.html
Thank you, I already know this solution, but the LDAP solution is out of the question for us and
the file again means an intervention on the client. And that's exactly what we don't want.
Okay.
Then why don't you go with your original solution, but use a unique TCP port number
for each database? There are enough port numbers available. That way, there is no
collision and no need for a proxy to map port numbers.
Yours,
Laurenz Albe
On Fri, 2023-06-16 at 09:04 -0500, Ron wrote:
On 6/16/23 07:50, Laurenz Albe wrote:
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
Do JDBC and ODBC recognize .pg_service.conf? I know they don't recognize
.pgpass.
JDBC doesn't, because it does not use libpq.
ODBC can use both .pgpass and .pg_service.conf, since it uses libpq.
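To illustrate: a JDBC client has to carry the host and port itself, typically in the URL (alias, port, and database name below are hypothetical placeholders):

```java
// Sketch: JDBC takes host and port directly in the connection URL;
// it does not read libpq's pg_service.conf or .pgpass.
public class JdbcUrl {
    // build a PostgreSQL JDBC URL from its parts (all placeholders)
    static String url(String alias, int port, String db) {
        return "jdbc:postgresql://" + alias + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // hypothetical alias and port from the thread's example
        System.out.println(url("pgs1.server.net", 5432, "appdb"));
        // prints jdbc:postgresql://pgs1.server.net:5432/appdb
    }
}
```

So for Java applications, any indirection has to live in DNS, in a proxy, or in application configuration, not in libpq's files.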
Yours,
Laurenz Albe
On Fri, 2023-06-16 at 15:25 +0200, Dominique Devienne wrote:
On Fri, Jun 16, 2023 at 2:50 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
The problem with the service and password files is that passwords are in plain text though.
Yes there are restrictions on permissions of the password file (on Linux), still having some
kind of encrypted token with an expiration date would be better IMHO.
You don't store a password in the connection service file.
Yours,
Laurenz Albe
On 6/16/23 10:19, Laurenz Albe wrote:
On Fri, 2023-06-16 at 09:04 -0500, Ron wrote:
On 6/16/23 07:50, Laurenz Albe wrote:
On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
We want to minimise dependencies between the application and the associated PostgreSQL DB.
The idea is that the application gets its DB alias and this is then used as a connection string.
This way we can decide in the backend on which server the PostgreSQL DB is running.
There is an existing solution for that: the libpq connection service file:
https://www.postgresql.org/docs/current/libpq-pgservice.html
Do JDBC and ODBC recognize .pg_service.conf? I know they don't recognize
.pgpass.
JDBC doesn't, because it does not use libpq.
ODBC can use both .pgpass and .pg_service.conf, since it uses libpq.
All of the applications that we use are written in Java, so that wouldn't work.
--
Born in Arizona, moved to Babylonia.
On 16 June 2023 at 17:18, "Laurenz Albe" <laurenz.albe@cybertec.at> wrote:
Then why don't you go with your original solution, but use a unique TCP port number
for each database? There are enough port numbers available. That way, there is no
collision and no need for a proxy to map port numbers.
Yours,
Laurenz Albe
Thank you for taking our requirements into account.
We keep growing, and we have many databases in different networks,
so we are looking for a solution that keeps the firewall problem manageable in the future.
At the moment I believe that managing one additional service in our automation would be the lesser of two evils for us,
which is exactly why we are looking for a service that can do this in the first place.
Regards
Michael
On 6/16/23 10:18, Laurenz Albe wrote:
Then why don't you go with your original solution, but use a unique TCP port number
for each database? There are enough port numbers available. That way, there is no
collision and no need for a proxy to map port numbers.
In practice, that gets very complicated in large organizations: every time
you add another database, you must file another request with the CISO risk
office to get yet another non-standard port opened from dozens of machines,
and the network team has to implement it.
It is operationally much simpler to have a listener handle that.
--
Born in Arizona, moved to Babylonia.
On 16 June 2023 at 17:41, "Ron" <ronljohnsonjr@gmail.com> wrote:
On 6/16/23 10:18, Laurenz Albe wrote:
Then why don't you go with your original solution, but use a unique TCP port number
for each database? There are enough port numbers available. That way, there is no
collision and no need for a proxy to map port numbers.
In practice, that gets very complicated in large organizations: every time you add another
database, you must file another request with the CISO risk office to get yet another non-standard
port opened from dozens of machines, and the network team has to implement it.
Operationally much simpler to have a listener handle that.
-- Born in Arizona, moved to Babylonia.
Hello Ron,
I have to agree with you there as well. The workflow you have to go through is often a matter of time, too:
many parties have to agree, and the application owners still have to provide justifications.
At the same time, we have to stay flexible and fast, allocate resources well at all times, and give the applications the best possible performance.
Regards
Michael
On 6/16/23 10:54, Brainmue wrote:
At the same time, we have to be flexible and fast and allocate the resources well at any time and
provide the application with the maximum possible performance.
There's always The Cloud... spinning up a new AWS RDS PostgreSQL instance is fast
and simple. (Costly, though.)
--
Born in Arizona, moved to Babylonia.
On 16 June 2023 at 17:59, "Ron" <ronljohnsonjr@gmail.com> wrote:
There's always The Cloud... spinning up a new AWS RDS PostgreSQL instance is fast and simple.
(Costly, though.)
We know that too, but our data currently should (and must) remain in-house on our own hardware.
That is why we need an on-premises solution.
Regards
Michael