Advice on binary installation

Started by Zwettler Markus (OIZ) · almost 5 years ago · 4 messages · general
#1Zwettler Markus (OIZ)
Markus.Zwettler@zuerich.ch

Hi,

We expect to have more than 100 Postgres clusters in the future.

We will end up with a very heterogeneous binary installation base if we install needed extensions (e.g. ip4r) or software (e.g. patroni) on a per-project basis.

On the other hand, we could run into even more incompatibility problems if we install all globally needed extensions and software with every project to get a homogeneous binary installation.

Software installation is done with yum using PGDG downstream channels.

Any recommendations?

Thanks, Markus

#2Paul Förster
paul.foerster@gmail.com
In reply to: Zwettler Markus (OIZ) (#1)
Re: Advice on binary installation

Hi Markus,

On 13. Apr, 2021, at 14:43, Zwettler Markus (OIZ) <Markus.Zwettler@zuerich.ch> wrote:

We expect to have more than 100 Postgres clusters in the future.

We will end up with a very heterogeneous binary installation base if we install needed extensions (e.g. ip4r) or software (e.g. patroni) on a per-project basis.

On the other hand, we could run into even more incompatibility problems if we install all globally needed extensions and software with every project to get a homogeneous binary installation.

Software installation is done with yum using PGDG downstream channels.

Any recommendations?

I don't know how you plan to do it, but I can only say how we do it:

- Compile from source, including all needed extensions (make install-world).
Create a binary-only directory this way, i.e. compile the source into
/data/postgres/xx.x
- Create a tar file of said directory.
- Distribute that via Ansible, untarring it on the destination servers.
- Have a standard postgresql.conf ready which includes
shared_preload_libraries = 'pg_stat_statements'
(or whatever you need) for use with initdb.
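
The package-and-distribute part of those steps can be sketched as a small shell script. To keep the sketch runnable anywhere, the compile step is shown only as comments and the install tree is faked with a scratch directory; the paths and version number are examples, not the poster's actual layout:

```shell
set -e

# In the real flow, the prefix is populated by a source build, e.g.:
#   ./configure --prefix=/data/postgres/13.2
#   make world && make install-world
PGVER=13.2
STAGE=$(mktemp -d)
mkdir -p "$STAGE/postgres/$PGVER/bin"
echo "placeholder for the postgres binaries" > "$STAGE/postgres/$PGVER/bin/postgres"

# Package the version directory into a tarball...
tar czf "$STAGE/postgres-$PGVER.tar.gz" -C "$STAGE/postgres" "$PGVER"

# ...and unpack it on a "destination server" (here: a second scratch
# directory; in practice this is what an Ansible unarchive task does).
DEST=$(mktemp -d)
tar xzf "$STAGE/postgres-$PGVER.tar.gz" -C "$DEST"
```

Because each version lands in its own directory (/data/postgres/13.2, /data/postgres/14.x, ...), an old tree can simply be deleted once nothing runs from it any more.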

This way we make sure that all servers get the same new software in a separate directory, including all necessary extensions, which means we can delete the old directory once it is no longer needed (i.e. after all databases have been upgraded). It also ensures that everything is loaded properly.

In a few individual cases we then use "create extension", but only for extensions which we deliver with the tar via Ansible. If a new extension is needed (which we avoid if we can), we put it into the tar archive and nowhere else. So it's on all servers, but only a few databases actually use it.

Hope this helps.

Paul

#3Zwettler Markus (OIZ)
Markus.Zwettler@zuerich.ch
In reply to: Paul Förster (#2)
Re: [Extern] Re: Advice on binary installation

Our IT Sec requires us to patch software at least every 3 months.

How do you do software patching with your method? Create and deploy new tarballs every 3 months?

Thanks, Markus

#4Kevin Brannen
KBrannen@efji.com
In reply to: Zwettler Markus (OIZ) (#3)
RE: [Extern] Re: Advice on binary installation

We used to compile our own to do something very similar, until we got rid of the need for the one extension with an external dependency that was holding us back. After that, we just use the community packages and install the extensions package too. That allows us to do "create extension X" to load the ones we need in our application.

When we need to upgrade, we just get the newer community packages and install them with the OS package manager ("rpm" in our case). As long as you stay within the same major version, it's all easy. If you cross major versions, then "pg_upgrade" is required (be sure to use the -k option).
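
For the major-version case, the pg_upgrade call might look like the following. The PGDG-style paths and version numbers are examples, and the command is only assembled and printed here rather than executed, since running it requires both sets of server binaries to be installed:

```shell
# Assemble a pg_upgrade invocation for a hypothetical 12 -> 13 upgrade.
# -k uses hard links instead of copying the data files, which is much
# faster and needs almost no extra disk space (but the old cluster should
# not be started again once the new one is in use).
OLD=12
NEW=13
CMD="/usr/pgsql-$NEW/bin/pg_upgrade -k -b /usr/pgsql-$OLD/bin -B /usr/pgsql-$NEW/bin -d /var/lib/pgsql/$OLD/data -D /var/lib/pgsql/$NEW/data"
echo "$CMD"
```

The -b/-B flags point at the old and new bin directories, -d/-D at the old and new data directories.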

With a little effort, all of that is scriptable so you can run it on any servers as required when it's time for that server to upgrade. Ansible, Puppet, Chef, or your own distribution software can be useful to get it there and put into service.

To us, the above is the easy part. The hard part is certifying that our software works with the newer version of Pg. It's rare that we have to change things, but Pg has "tightened up" a little over time, showing us where we were a little "loose and carefree" in our SQL. That hasn't hit us in the last couple of releases, so hopefully we're beyond that and now we're left with "what new features look useful to take advantage of?" 😊

HTH,

Kevin
