BUG #17224: Postgres Yum repo mirror has expired SSL certificate
The following bug has been logged on the website:
Bug reference: 17224
Logged by: Matt Bush
Email address: postgres@netlag.com
PostgreSQL version: 13.3
Operating system: Linux (CentOS)
Description:
In our automation we first install the PGDG Yum repo
pgdg-redhat-repo-latest.noarch.rpm and then install the individual
components needed by our applications and servers. Starting about a week
ago, with the expiration of the Let's Encrypt CA cert, we've been
experiencing intermittent repo failures due to an expired SSL cert on one of
the repo mirrors.
$ curl -v
https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-7-x86_64/repodata/repomd.xml
* Trying 217.196.149.55...
* TCP_NODELAY set
* Connected to download.postgresql.org (217.196.149.55) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: certificate has expired
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html
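For anyone hitting the same failure, a quick way to confirm what a server is actually presenting is to pull the certificate's subject and validity window directly. A minimal sketch; the hostname is taken from the report above, and -servername simply makes the SNI match what curl would send:

# Show the subject and validity dates of the certificate the server presents
$ echo | openssl s_client -connect download.postgresql.org:443 \
      -servername download.postgresql.org 2>/dev/null \
  | openssl x509 -noout -subject -dates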
PG Bug reporting form <noreply@postgresql.org> writes:
> In our automation we first install the PGDG Yum repo
> pgdg-redhat-repo-latest.noarch.rpm and then install the individual
> components needed by our applications and servers. Starting about a week
> ago, with the expiration of the Let's Encrypt CA cert, we've been
> experiencing intermittent repo failures due to an expired SSL cert on one of
> the repo mirrors.
This indicates out-of-date software on your end.
We are aware of two possible sources of trouble:
* You might have a very out-of-date system trust store that
doesn't list the "ISRG Root X1" root certificate as trusted.
* Versions of OpenSSL up through 1.0.2 or so won't believe
that ISRG Root X1 is the cert to check for, as a result of
a hack that Let's Encrypt are using to preserve compatibility
with equally ancient Android installations. Details and
possible workarounds are mentioned at [1].
regards, tom lane
[1]: https://www.openssl.org/blog/blog/2021/09/13/LetsEncryptRootCertExpire/
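Both failure modes Tom describes can be checked directly on the affected host. A minimal sketch, assuming a RHEL/CentOS 7 layout with p11-kit's trust tool installed:

# 1. Is ISRG Root X1 present and trusted in the system store?
$ trust list | grep -i "ISRG Root X1"

# 2. Which OpenSSL is in use? 1.0.2 and older hit the chain-building
#    problem described in the OpenSSL blog post above.
$ openssl version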
As mentioned, it's entirely intermittent. The playbook action immediately
prior to the failing step is to verify that the installed ca-certificates
package is up-to-date, which it is:
$ rpm -qa | grep ca-certificates
ca-certificates-2021.2.50-72.el7_9.noarch
Rerunning the playbook more often than not gets past the issue, but this is
obviously not ideal for an automated environment.
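One blunt stopgap for an automated environment (a sketch only; the package name here is illustrative, not taken from the thread) is to retry the failing install step, so a single bad draw from the mirror rotation doesn't fail the whole run:

# Retry up to 3 times, pausing between attempts
$ for i in 1 2 3; do yum -y install postgresql13-server && break; sleep 10; done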
Matt Bush <mattpbush@gmail.com> writes:
> As mentioned, it's entirely intermittent. The playbook action immediately
> prior to the failing step is to verify that the installed ca-certificates
> package is up-to-date, which it is:
> $ rpm -qa | grep ca-certificates
> ca-certificates-2021.2.50-72.el7_9.noarch
Okay, but what about your openssl version? (I'd think RHEL7 contains
something reasonably up-to-date, but I might be wrong.) It might be
worth logging the output of "curl -V".
The intermittency might be an artifact of consulting several different
mirrors, only some of which use Let's Encrypt certificates. (Although
I think all of *.postgresql.org do use those.)
You could also investigate by logging the output of
openssl s_client -connect download.postgresql.org:443 </dev/null
If there's a mirror rotation involved this wouldn't necessarily hit
the same server as curl does, though. Anyway I just tried that here,
on an up-to-date RHEL8 installation, and I get a pass on each of the
four IP addresses that we advertise for download.postgresql.org:
$ openssl s_client -connect 217.196.149.55:443 </dev/null
CONNECTED(00000003)
Can't use SSL_get_servername
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = ftp.postgresql.org
verify return:1
---
Certificate chain
0 s:CN = ftp.postgresql.org
i:C = US, O = Let's Encrypt, CN = R3
1 s:C = US, O = Let's Encrypt, CN = R3
i:C = US, O = Internet Security Research Group, CN = ISRG Root X1
2 s:C = US, O = Internet Security Research Group, CN = ISRG Root X1
i:O = Digital Signature Trust Co., CN = DST Root CA X3
---
Server certificate
... blah, blah, blah ...
Verify return code: 0 (ok)
---
DONE
regards, tom lane
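Since download.postgresql.org advertises several addresses, a single connection can miss the one bad mirror. A small loop (assuming dig is available and the name resolves straight to A records) repeats Tom's check against every advertised address and prints the expiry date each server presents:

# Test each advertised address individually; -servername supplies SNI
for ip in $(dig +short download.postgresql.org); do
  echo "== $ip =="
  echo | openssl s_client -connect "$ip:443" \
      -servername download.postgresql.org 2>/dev/null \
    | openssl x509 -noout -enddate
done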