Successor of MD5 authentication, let's use SCRAM
The security of MD5 authentication is brought up every now and then,
most recently here:
http://archives.postgresql.org/pgsql-hackers/2012-08/msg00586.php. The
NIST competition mentioned in that thread just finished. MD5 is still
resistant to preimage attacks, which is what matters for our MD5
authentication protocol, but I think we should start thinking about a
replacement, if only to avoid ringing the alarm bells in people's minds
thinking "MD5 = broken".
Perhaps the biggest weakness in the current scheme is that if an
attacker ever sees the contents of pg_shadow, it can use the stored
hashes to authenticate as any user. This might not seem like a big
deal, since you have to be a superuser to read pg_shadow, but it
makes it a lot more dangerous to e.g. leave old backups lying around.
There was some talk about avoiding that in this old thread:
http://archives.postgresql.org/pgsql-general/2002-06/msg00553.php.
It turns out that it's possible to do this without the kind of
commutative hash function discussed in that thread. There's a protocol
called Salted Challenge Response Authentication Mechanism (SCRAM) (see
RFC5802), that accomplishes the same with some clever use of a hash
function and XOR. I think we should adopt that.
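For illustration, here's a minimal sketch of the SCRAM proof exchange from RFC 5802, using only a hash, HMAC and XOR. Note that the RFC specifies SCRAM-SHA-1; the SHA-256 choice and the simplified auth_message below are assumptions for brevity, not the actual wire protocol.

```python
import hashlib, hmac, os

def hi(password: bytes, salt: bytes, iterations: int) -> bytes:
    # Hi() from RFC 5802 is PBKDF2 with HMAC as the PRF
    return hashlib.pbkdf2_hmac('sha256', password, salt, iterations)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# What the server stores: StoredKey, which is not password-equivalent
password, salt, iterations = b'secret', os.urandom(16), 4096
salted = hi(password, salt, iterations)
client_key = hmac.new(salted, b'Client Key', hashlib.sha256).digest()
stored_key = hashlib.sha256(client_key).digest()

# During authentication the client proves knowledge of ClientKey
# without sending it; auth_message stands in for the exchanged nonces.
auth_message = b'client-nonce,server-nonce,...'
client_sig = hmac.new(stored_key, auth_message, hashlib.sha256).digest()
client_proof = xor(client_key, client_sig)   # this is what goes on the wire

# Server side: recover ClientKey from the proof and verify against StoredKey
recovered = xor(client_proof, client_sig)
assert hashlib.sha256(recovered).digest() == stored_key
```

The point is that an attacker who steals stored_key still cannot construct client_proof, because that requires client_key, which is only derivable from the password.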
Thoughts on that?
There are some other minor issues with current md5 authentication. SCRAM
would address these as well, but if we don't adopt SCRAM for some
reason, we should still address these somehow:
1. Salt length. Greg Stark calculated the odds of salt collisions here:
http://archives.postgresql.org/pgsql-hackers/2004-08/msg01540.php. It's
not too bad as it is, and as Greg pointed out, if you can eavesdrop it's
likely you can also hijack an already established connection.
Nevertheless I think we should make the salt longer, say, 16 bytes.
2. Make the calculation more expensive, to make dictionary attacks more
expensive. An eavesdropper can launch a brute-force or dictionary attack
using a captured hash and salt. Similar to the classic crypt(3)
function, it would be good for the calculation to be expensive, although
that naturally makes authentication more expensive too. For
future-proofing, it would be good to send the number of iterations the
hash is applied for as part of the protocol, so that it can be configured in
the server, or we can just raise the default/hardcoded number without
changing the protocol as computers become more powerful (SCRAM does this).
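The iterated, salted scheme described in point 2 is essentially PBKDF2 (RFC 2898); a minimal sketch, where the password, salt and iteration counts are arbitrary example values:

```python
import hashlib

def derive_verifier(password: bytes, salt: bytes, iterations: int) -> bytes:
    # Applying the hash 'iterations' times makes each guess in a
    # dictionary attack proportionally more expensive.
    return hashlib.pbkdf2_hmac('sha256', password, salt, iterations)

fast = derive_verifier(b'hunter2', b'0123456789abcdef', 1)
slow = derive_verifier(b'hunter2', b'0123456789abcdef', 4096)
# Same inputs but different iteration counts produce different verifiers,
# which is why the count has to travel with the salt in the protocol.
```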
3. Instead of a straightforward hash of (password, salt), use an HMAC
construct to combine the password and salt (see RFC 2104). This makes
it resistant to length-extension attacks. The current scheme isn't
vulnerable to that, but better safe than sorry.
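A sketch of the difference, using SHA-256 purely as a stand-in (the current protocol uses MD5) and arbitrary example values:

```python
import hashlib, hmac

password, salt = b'secret', b'0123456789abcdef'

# Straightforward hash of (password, salt): with a Merkle-Damgard hash
# such as MD5 or SHA-1/SHA-256, hash(key || msg) permits length extension.
naive = hashlib.sha256(password + salt).hexdigest()

# HMAC (RFC 2104) wraps the hash in two keyed passes, closing that hole.
keyed = hmac.new(password, salt, hashlib.sha256).hexdigest()
```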
- Heikki
On 10 October 2012 11:41, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
Thoughts on that?
I think there has been enough discussion of md5 problems elsewhere
that we should provide an alternative.
If we can agree on that bit first, we can move onto exactly what else
should be available.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Wed, Oct 10, 2012 at 3:36 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
On 10 October 2012 11:41, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
Thoughts on that?
I think there has been enough discussion of md5 problems elsewhere
that we should provide an alternative.
If we can agree on that bit first, we can move onto exactly what else
should be available.
Main weakness in current protocol is that stored value is
plaintext-equivalent - you can use it to log in.
Rest of the problems - use of md5 and how it is used - are relatively minor.
(IOW - they don't cause immediate security incident.)
Which means just slapping SHA1 in place of MD5 and calling it a day
is a bad idea.
Another bad idea is to invent our own algorithm - if a security
protocol needs to fulfill more than one requirement, it tends
to get tricky.
I have looked at SRP previously, but it's heavy on complex
bignum math, which makes it problematic to reimplement
in various drivers. Also, the many versions of it make me
dubious of the authors..
SCRAM looks good from a quick glance. It uses only
basic crypto tools - hash, HMAC, XOR.
The "stored auth info cannot be used to log in" property will cause
problems for middleware, but SCRAM also defines a
concept of log-in-as-other-user, so poolers can have
their own user that they use to create connections
under another user. As it works only at connect
time, it can actually be secure, unlike user switching
with SET ROLE.
--
marko
Heikki,
Like these proposals in general.
* Heikki Linnakangas (hlinnakangas@vmware.com) wrote:
For future-proofing, it would be good to send the
number of iterations the hash is applied as part of the protocol, so
that it can be configured in the server or we can just raise the
default/hardcoded number without changing the protocol as computers
become more powerful (SCRAM does this).
wrt future-proofing, I don't like the "#-of-iterations" approach. There
are a number of examples of how to deal with multiple encryption types
being supported by a protocol; I'd expect hashing could be done in the
same way. For example, Negotiate, SSL, Kerberos, GSSAPI, all have ways
of dealing with multiple encryption/hashing options being supported.
Multiple iterations could be supported through that same mechanism (as
des/des3 were both supported by Kerberos for quite some time).
In general, I think it's good to build on existing implementations where
possible. Perhaps we could even consider using something which already
exists for this? Also, how much should we worry about supporting
complicated/strong authentication systems for those who don't actually
encrypt the entire communication, which might reduce the need for this
additional complexity anyway? Don't get me wrong- I really dislike that
we don't have something better today for people who insist on password
based auth, but perhaps we should be pushing harder for people to use
SSL instead?
Thanks,
Stephen
* Marko Kreen (markokr@gmail.com) wrote:
As it works only on connect
time, it can actually be secure, unlike user switching
with SET ROLE.
I'm guessing your issue with SET ROLE is that a RESET ROLE can be issued
later..? If so, I'd suggest that we look at fixing that, but realize it
could break poolers. For that matter, I'm not sure how the proposal to
allow connections to be authenticated as one user but authorized as
another (which we actually already support in some cases, eg: peer)
*wouldn't* break poolers, unless you're suggesting they either use a
separate connection for every user, or reconnect every time, both of
which strike me as defeating a great deal of the point of having a
pooler in the first place...
Thanks,
Stephen
On 10/12/12 12:44 PM, Stephen Frost wrote:
Don't get me wrong- I really dislike that
we don't have something better today for people who insist on password
based auth, but perhaps we should be pushing harder for people to use
SSL instead?
Problem is, the fact that setting up SSL correctly is hard is outside of
our control.
Unless we can give people a "run these three commands on each server and
you're now SSL authenticating" script, we can continue to expect the
majority of users not to use SSL. And I don't think that level of
simplicity is even theoretically possible.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
* Josh Berkus (josh@agliodbs.com) wrote:
Problem is, the fact that setting up SSL correctly is hard is outside of
our control.
Agreed, though the packagers do make it easier..
Unless we can give people a "run these three commands on each server and
you're now SSL authenticating" script, we can continue to expect the
majority of users not to use SSL. And I don't think that level of
simplicity is even theoretically possible.
The Debian-based packages do quite a bit to ease this pain. Do the
other distributions do anything to set up SSL certificates, etc on
install? Perhaps they could be convinced to?
Thanks,
Stephen
On 10/12/12 4:25 PM, Stephen Frost wrote:
* Josh Berkus (josh@agliodbs.com) wrote:
Unless we can give people a "run these three commands on each server and
you're now SSL authenticating" script, we can continue to expect the
majority of users not to use SSL. And I don't think that level of
simplicity is even theoretically possible.
The Debian-based packages do quite a bit to ease this pain. Do the
other distributions do anything to set up SSL certificates, etc on
install? Perhaps they could be convinced to?
don't forget, there's OS's other than Linux to consider too... the
various BSD's, Solaris, AIX, OSX, and MS Windows are all platforms
PostgreSQL runs on.
--
john r pierce N 37, W 122
santa cruz ca mid-left coast
Stephen Frost wrote:
* Josh Berkus (josh@agliodbs.com) wrote:
Problem is, the fact that setting up SSL correctly is hard is outside of
our control.
Agreed, though the packagers do make it easier..
Unless we can give people a "run these three commands on each server and
you're now SSL authenticating" script, we can continue to expect the
majority of users not to use SSL. And I don't think that level of
simplicity is even theoretically possible.
The Debian-based packages do quite a bit to ease this pain. Do the
other distributions do anything to set up SSL certificates, etc on
install? Perhaps they could be convinced to?
This has bit me.
At my work we started a project on Debian, using the
http://packages.debian.org/squeeze-backports/ version of Postgres 9.1, and it
included the SSL out of the box, just install that regular Postgres or Pg client
package and SSL was ready to go.
And now we're migrating to Red Hat for the production launch, using the
http://www.postgresql.org/download/linux/redhat/ packages for Postgres 9.1, and
these do *not* include the SSL.
This change has been a pain, as we then disabled SSL when we otherwise would
have used it.
(Though all database access would be over a private server-server network, so
the situation isn't as bad as going over the public internet.)
How much trouble would it be to make the
http://www.postgresql.org/download/linux/redhat/ packages include SSL?
-- Darren Duncan
On 10/12/12 9:00 PM, Darren Duncan wrote:
And now we're migrating to Red Hat for the production launch, using
the http://www.postgresql.org/download/linux/redhat/ packages for
Postgres 9.1, and these do *not* include the SSL.
hmm? I'm using the 9.1 for CentOS 6 (RHEL 6) and libpq.so certainly has
libssl3.so, etc as references. ditto the postmaster/postgres main
program has libssl3.so too. maybe your certificate chains don't come
pre-built, I dunno, I haven't dealt with that end of things.
--
john r pierce N 37, W 122
santa cruz ca mid-left coast
John R Pierce wrote:
On 10/12/12 9:00 PM, Darren Duncan wrote:
And now we're migrating to Red Hat for the production launch, using
the http://www.postgresql.org/download/linux/redhat/ packages for
Postgres 9.1, and these do *not* include the SSL.
hmm? I'm using the 9.1 for CentOS 6 (RHEL 6) and libpq.so certainly has
libssl3.so, etc as references. ditto the postmaster/postgres main
program has libssl3.so too. maybe your certificate chains don't come
pre-built, I dunno, I haven't dealt with that end of things.
Okay, I'll have to look into that. All I know is out of the box SSL just worked
on Debian and it didn't on Red Hat; trying to enable SSL on out of the box
Postgres on Red Hat gave a fatal error on server start, at the very least
needing the installation of SSL keys/certs, which I didn't have to do on Debian.
-- Darren Duncan
On 10/13/2012 01:55 AM, Darren Duncan wrote:
John R Pierce wrote:
On 10/12/12 9:00 PM, Darren Duncan wrote:
And now we're migrating to Red Hat for the production launch, using
the http://www.postgresql.org/download/linux/redhat/ packages for
Postgres 9.1, and these do *not* include the SSL.
hmm? I'm using the 9.1 for CentOS 6 (RHEL 6) and libpq.so certainly
has libssl3.so, etc as references. ditto the postmaster/postgres
main program has libssl3.so too. maybe your certificate chains
don't come pre-built, I dunno, I haven't dealt with that end of things.
Okay, I'll have to look into that. All I know is out of the box SSL
just worked on Debian and it didn't on Red Hat; trying to enable SSL
on out of the box Postgres on Red Hat gave a fatal error on server
start, at the very least needing the installation of SSL keys/certs,
which I didn't have to do on Debian.
-- Darren Duncan
Of course RedHat RPMs are built with SSL.
Does Debian create a self-signed certificate? If so, count me as
unimpressed. I'd argue that's worse than doing nothing. Here's what the
docs say (rightly) about such certificates:
A self-signed certificate can be used for testing, but a certificate
signed by a certificate authority (CA) (either one of the global CAs
or a local one) should be used in production so that clients can
verify the server's identity. If all the clients are local to the
organization, using a local CA is recommended.
Creation of properly signed certificates is entirely outside the scope
of Postgres, and I would not expect packagers to do it. I have created a
local CA for RedHat and friends any number of times, and created signed
certs for Postgres, both server and client, using them. It's not
terribly hard.
cheers
andrew
* Andrew Dunstan (andrew@dunslane.net) wrote:
Does Debian create a self-signed certificate? If so, count me
as unimpressed. I'd argue that's worse than doing nothing. Here's
what the docs say (rightly) about such certificates:
Self-signed certificates do provide for in-transit encryption. I agree
that they don't provide a guarantee of the remote side being who you
think it is, but setting up a MITM attack is more difficult than
eavesdropping on a connection and more likely to be noticed.
You can, of course, set up your own CA and sign certs off of it under
Debian as well. Unfortunately, most end users aren't going to do that.
Many of those same users do benefit from at least having an encrypted
connection when it's all done for them.
Thanks,
Stephen
On Wed, Oct 10, 2012 at 11:41 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
1. Salt length. Greg Stark calculated the odds of salt collisions here:
http://archives.postgresql.org/pgsql-hackers/2004-08/msg01540.php. It's not
too bad as it is, and as Greg pointed out, if you can eavesdrop it's likely
you can also hijack an already established connection. Nevertheless I think
we should make the salt longer, say, 16 bytes.
Fwiw that calculation was based on the rule of thumb that a collision
is likely when you have sqrt(hash space) elements. Wikipedia has a
better formula which comes up with 77,163.
For 16 bytes that formula gives 2,171,938,135,516,356,249 salts before
you expect a collision.
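Both figures can be reproduced from the standard birthday-bound approximation n ≈ sqrt(2H ln(1/(1-p))), where H is the size of the salt space. A quick sanity check (the formula is the textbook approximation, not from this thread):

```python
import math

def salts_until_collision(salt_bits: int, p: float = 0.5) -> float:
    # Birthday bound: n ~ sqrt(2 * H * ln(1/(1-p))) for H = 2**salt_bits
    space = 2.0 ** salt_bits
    return math.sqrt(2.0 * space * math.log(1.0 / (1.0 - p)))

print(round(salts_until_collision(32)))     # current 4-byte salt: ~77163
print(f'{salts_until_collision(128):.3e}')  # 16-byte salt: ~2.172e+19
```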
--
greg
On Sat, Oct 13, 2012 at 7:00 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
Does Debian create a self-signed certificate? If so, count me as
unimpressed. I'd argue that's worse than doing nothing. Here's what the docs
say (rightly) about such certificates:
Debian will give you a self-signed certificate by default. Protecting
against passive eavesdroppers is not an inconsiderable benefit to get
for "free", and definitely not a marginal attack technique: it's
probably the most common.
For what they can possibly know about the end user, Debian has it right here.
--
fdr
On Sun, Oct 14, 2012 at 5:59 AM, Daniel Farina <daniel@heroku.com> wrote:
On Sat, Oct 13, 2012 at 7:00 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
Does Debian create a self-signed certificate? If so, count me as
unimpressed. I'd argue that's worse than doing nothing. Here's what the docs
say (rightly) about such certificates:
Debian will give you a self-signed certificate by default. Protecting
against passive eavesdroppers is not an inconsiderable benefit to get
for "free", and definitely not a marginal attack technique: it's
probably the most common.
For what they can possibly know about the end user, Debian has it right here.
There's a lot of shades of gray to that one. Way too many to say
they're right *or* wrong, IMHO.
It *does* make people think they have "full ssl security by default",
which they *don't*. They do have partial protection, which helps in
some (fairly common) scenarios. But if you compare it to the
requirements that people *do* have when they use SSL, it usually
*doesn't* protect them the whole way - but they get the illusion that
it does. Sure, they'd have to read up on the details in order to get
secure whether it's on by default or not - that's why I think it's
hard to call it either right or wrong, but it's rather somewhere in
between.
They also enable things like encryption on all localhost connections.
I consider that plain wrong, regardless. Though it provides for some
easy "performance tuning" for consultants...
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Sun, Oct 14, 2012 at 2:04 AM, Magnus Hagander <magnus@hagander.net> wrote:
There's a lot of shades of gray to that one. Way too many to say
they're right *or* wrong, IMHO.
We can agree it is 'sub-ideal', but there is not one doubt in my mind
that it is 'right' given the scope of Debian's task, which does
*not* include pushing applied cryptography beyond its current pitiful
state.
Debian not making self-signed certs available by default will just
result in a huge amount of plaintext database authentication and
traffic available over the internet, especially when you consider the
sslmode=prefer default, and as a result eliminate protection from the
most common class of attack for users with low-value (or just
low-vigilance) use cases. In aggregate, that is important, because
there are a lot of them.
It would be a net disaster for security.
It *does* make people think they have "full ssl security by default",
which they *don't*. They do have partial protection, which helps in
some (fairly common) scenarios. But if you compare it to the
requirements that people *do* have when they use SSL, it usually
*doesn't* protect them the whole way - but they get the illusion that
it does. Sure, they'd have to read up on the details in order to get
secure whether it's on by default or not - that's why I think it's
hard to call it either right or wrong, but it's rather somewhere in
between.
If there is such blame to go around, I place it squarely on
clients. The JDBC library is more secure: it makes you opt in, via
configuration, to logging into a server that has no verified identity.
The problem there is that it's a pain to get signed certs in, say, a
test environment, so "don't check certs" will make its way into the
default configuration, and now you have all pain and no gain.
--
fdr
On 14 October 2012 22:17, Daniel Farina <daniel@heroku.com> wrote:
The problem there is that it's a pain to get signed certs in, say, a
test environment, so "don't check certs" will make its way into the
default configuration, and now you have all pain and no gain.
This is precisely the issue that Debian deals with in providing the
"default Snake Oil" certificate; software development teams -
especially small shops with one or two developers - don't want to
spend time learning about CAs and creating their own, etc, and often
their managers would see this as wasted time for setting up
development environments and staging systems. Not saying they're
right, of course; but it can be an uphill struggle, and as long as you
get a real certificate for your production environment, it's hard to
see what harm this (providing the "snake oil" certificate) actually
causes.
On Mon, Oct 15, 2012 at 1:21 PM, Will Crawford
<billcrawford1970@gmail.com> wrote:
On 14 October 2012 22:17, Daniel Farina <daniel@heroku.com> wrote:
The problem there is that it's a pain to get signed certs in, say, a
test environment, so "don't check certs" will make its way into the
default configuration, and now you have all pain and no gain.
This is precisely the issue that Debian deals with in providing the
"default Snake Oil" certificate; software development teams -
especially small shops with one or two developers - don't want to
spend time learning about CAs and creating their own, etc, and often
their managers would see this as wasted time for setting up
development environments and staging systems. Not saying they're
right, of course; but it can be an uphill struggle, and as long as you
get a real certificate for your production environment, it's hard to
see what harm this (providing the "snake oil" certificate) actually
causes.
I don't see a problem at all with providing the snakeoil cert. In
fact, it's quite useful.
I see a problem with enabling it by default. Because it makes people
think they are more secure than they are.
In a browser, they will get a big fat warning every time, so they will
know it. There is no such warning in psql. Actually, maybe we should
*add* such a warning. We could do it in psql. We can't do it in libpq
for everyone, but we can do it in our own tools... Particularly since
we do print the SSL information already - we could just add a
"warning: cert not verified" or something like that to the same piece
of information.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Sun, Oct 21, 2012 at 09:55:50AM +0200, Magnus Hagander wrote:
I don't see a problem at all with providing the snakeoil cert. In
fact, it's quite useful.
I see a problem with enabling it by default. Because it makes people
think they are more secure than they are.
So, what you're suggesting is that any use of ssl to a remote machine
without the sslrootcert option should generate a warning. Something
along the lines of "remote server not verified"? For completeness it
should also show this for any non-SSL connection.
libpq should export a "serververified" flag which would always be false
unless the connection is SSL and the CA is verified.
In a browser, they will get a big fat warning every time, so they will
know it. There is no such warning in psql. Actually, maybe we should
*add* such a warning. We could do it in psql. We can't do it in libpq
for everyone, but we can do it in our own tools... Particularly since
we do print the SSL information already - we could just add a
"warning: cert not verified" or something like that to the same piece
of information.
It bugs me every time you have to jump through hoops and get red
warnings for an unknown CA, whereas no encryption whatsoever is treated
as fine while being actually even worse.
Transport encryption is a *good thing*, we should be encouraging it
wherever possible. If it weren't for the performance issues I'd suggest
defaulting to SSL everywhere transparently with ephemeral certs. It
would protect against any number of passive attacks.
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
He who writes carelessly confesses thereby at the very outset that he does
not attach much importance to his own thoughts.
-- Arthur Schopenhauer