Should rolpassword be toastable?
Hello hackers,
When playing with oversized tuples, I've found that it's possible to set a
password for a user that is so oversized it cannot be validated afterwards.
For example:
SELECT format('CREATE ROLE test_user LOGIN PASSWORD ''SCRAM-SHA-256$' || repeat('0', 2000000) ||
'4096:NuDacwYSUxeOeFUEf3ivTQ==$Wgvq3OCYrJI6eUfvKlAzn4p/j3mzgWzXbVnWeFK1qhY=:r1qSP0j2QojCjLpFUjI0i6ckInvxJDKoyWnN3zF8WCM='';')
\gexec
-- the password is "pass"
(One can achieve the same result with a large salt size, for example, 2048.)
psql -U "test_user" -c "SELECT 1"
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: cannot read pg_class without having
selected a database
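For anyone reproducing this, here is a quick illustrative check (my own sketch,
run as superuser; pg_toast_1260 is pg_authid's TOAST table):
-- The ~2 MB verifier string is stored in rolpassword and pushed out of line
-- into pg_authid's TOAST table, so it cannot be detoasted while validating the
-- password at connection time, before a database has been selected.
SELECT length(rolpassword) AS verifier_length
FROM pg_authid
WHERE rolname = 'test_user';

SELECT pg_size_pretty(pg_relation_size('pg_toast.pg_toast_1260')) AS pg_authid_toast_size;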
I've tried forcefully setting attstorage = 'p' for the rolpassword attribute
by dirty-hacking genbki.pl, and with that change I get an error on CREATE ROLE:
ERROR: row is too big: size 2000256, maximum size 8160
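(For reference, the storage setting that this hack overrides can be inspected
with a query like the one below; by default rolpassword uses extended storage,
i.e. it is TOAST-able.)
-- 'x' (extended) means the column may be compressed and stored out of line;
-- the genbki.pl hack above forces it to 'p' (plain), so the oversized row
-- fails up front instead.
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'pg_authid'::regclass
  AND attname = 'rolpassword';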
Best regards,
Alexander
Alexander Lakhin <exclusion@gmail.com> writes:
When playing with oversized tuples, I've found that it's possible to set a
password for a user that is so oversized it cannot be validated afterwards.
For example:
...
psql -U "test_user" -c "SELECT 1"
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: cannot read pg_class without having
selected a database
My inclination is to fix this by removing pg_authid's toast table.
I was not in favor of "let's attach a toast table to every catalog"
to begin with, and I think this failure graphically illustrates
why that was not as great an idea as some people thought.
I also don't think it's worth trying to make it really work.
I'm also now more than just slightly skeptical about whether
pg_database should have a toast table. Has anybody tried,
say, storing a daticurules field wide enough to end up
out-of-line?
regards, tom lane
23.09.2023 17:39, Tom Lane wrote:
I'm also now more than just slightly skeptical about whether
pg_database should have a toast table. Has anybody tried,
say, storing a daticurules field wide enough to end up
out-of-line?
I tried, but failed: before MyDatabaseId is assigned in InitPostgres(),
pg_database is accessed only via the function GetDatabaseTupleByOid(),
which doesn't unpack the database tuple.
Another access to a system catalog with unassigned MyDatabaseId might occur
in the has_privs_of_role() call, but pg_auth_members contains no toastable
attributes.
So for now only pg_authid is worthy of condemnation, AFAICS.
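(For anyone following along, an illustrative query of my own, not part of any
patch, to see which of these catalogs currently carry a TOAST table:)
-- pg_authid and pg_database currently have TOAST tables; pg_auth_members has
-- no varlena columns and therefore no TOAST table.
SELECT c.relname AS catalog, t.relname AS toast_table
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relnamespace = 'pg_catalog'::regnamespace
  AND c.relname IN ('pg_authid', 'pg_database', 'pg_auth_members')
ORDER BY c.relname;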
Best regards,
Alexander
Hello Tom and Nathan,
23.09.2023 21:00, Alexander Lakhin wrote:
23.09.2023 17:39, Tom Lane wrote:
I'm also now more than just slightly skeptical about whether
pg_database should have a toast table. Has anybody tried,
say, storing a daticurules field wide enough to end up
out-of-line?
So for now only pg_authid is worthy of condemnation, AFAICS.
Let me remind you of this issue in light of b52c4fc3c.
Yes, it's the opposite change, but maybe it makes sense to fix this now, in the
hope that ~1 year of testing will bring something helpful for both changes.
Best regards,
Alexander
On Thu, Sep 19, 2024 at 06:00:00AM +0300, Alexander Lakhin wrote:
23.09.2023 21:00, Alexander Lakhin wrote:
So for now only pg_authid is worthy of condemnation, AFAICS.
Let me remind you of this issue in light of b52c4fc3c.
Yes, it's the opposite change, but maybe it makes sense to fix this now, in the
hope that ~1 year of testing will bring something helpful for both changes.
Hm. It does seem like there's little point in giving pg_authid a TOAST
table, as rolpassword is the only varlena column, and it obviously has
problems. But wouldn't removing it just trade one unhelpful internal error
when trying to log in for another when trying to add a really long password
hash (which hopefully nobody is really trying to do in practice)? I wonder
if we could make this a little more user-friendly.
--
nathan
Nathan Bossart <nathandbossart@gmail.com> writes:
Hm. It does seem like there's little point in giving pg_authid a TOAST
table, as rolpassword is the only varlena column, and it obviously has
problems. But wouldn't removing it just trade one unhelpful internal error
when trying to log in for another when trying to add a really long password
hash (which hopefully nobody is really trying to do in practice)? I wonder
if we could make this a little more user-friendly.
We could put an arbitrary limit (say, half of BLCKSZ) on the length of
passwords.
regards, tom lane
On Thu, Sep 19, 2024 at 10:31:15AM -0400, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Hm. It does seem like there's little point in giving pg_authid a TOAST
table, as rolpassword is the only varlena column, and it obviously has
problems. But wouldn't removing it just trade one unhelpful internal error
when trying to log in for another when trying to add a really long password
hash (which hopefully nobody is really trying to do in practice)? I wonder
if we could make this a little more user-friendly.
We could put an arbitrary limit (say, half of BLCKSZ) on the length of
passwords.
Something like that could be good enough. I was thinking about actually
validating that the hash had the correct form, but that might be a little
more complex than is warranted here.
--
nathan
On Thu, Sep 19, 2024 at 12:44:32PM -0500, Nathan Bossart wrote:
On Thu, Sep 19, 2024 at 10:31:15AM -0400, Tom Lane wrote:
We could put an arbitrary limit (say, half of BLCKSZ) on the length of
passwords.
Something like that could be good enough. I was thinking about actually
validating that the hash had the correct form, but that might be a little
more complex than is warranted here.
Oh, actually, I see that we are already validating the hash, but you can
create valid SCRAM-SHA-256 hashes that are really long. So putting an
arbitrary limit (patch attached) is probably the correct path forward. I'd
also remove pg_authid's TOAST table while at it.
--
nathan
Attachments:
fail_for_long_scram_hash.patch (text/plain, +13/-0)
Nathan Bossart <nathandbossart@gmail.com> writes:
Oh, actually, I see that we are already validating the hash, but you can
create valid SCRAM-SHA-256 hashes that are really long. So putting an
arbitrary limit (patch attached) is probably the correct path forward. I'd
also remove pg_authid's TOAST table while at it.
Shouldn't we enforce the limit in every case in encrypt_password,
not just this one? (I do agree that encrypt_password is an okay
place to enforce it.)
I think you will get pushback from a limit of 256 bytes --- I seem
to recall discussion of actual use-cases where people were using
strings of a couple of kB. Whatever the limit is, the error message
had better cite it explicitly.
Also, the ereport call needs an errcode.
ERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.
regards, tom lane
On 9/19/24 6:14 PM, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Oh, actually, I see that we are already validating the hash, but you can
create valid SCRAM-SHA-256 hashes that are really long.
You _can_, but it's up to a driver or a very determined user to do this,
as it involves creating a very long salt.
So putting an
arbitrary limit (patch attached) is probably the correct path forward. I'd
also remove pg_authid's TOAST table while at it.
Shouldn't we enforce the limit in every case in encrypt_password,
not just this one? (I do agree that encrypt_password is an okay
place to enforce it.)
+1; if there's any breakage, my guess is it would be on very long
plaintext passwords, but that would be from a very old upgrade?
I think you will get pushback from a limit of 256 bytes --- I seem
to recall discussion of actual use-cases where people were using
strings of a couple of kB. Whatever the limit is, the error message
had better cite it explicitly.
I think it's OK to be a bit generous with the limit. Also, currently our
hashes are 256-bit (I know the above says bytes), but this could increase
should we support larger hashes.
Also, the ereport call needs an errcode.
ERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.
Jonathan
On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:
On 9/19/24 6:14 PM, Tom Lane wrote:
Nathan Bossart <nathandbossart@gmail.com> writes:
Oh, actually, I see that we are already validating the hash, but you can
create valid SCRAM-SHA-256 hashes that are really long.
You _can_, but it's up to a driver or a very determined user to do this, as
it involves creating a very long salt.
I can't think of any reason to support this, unless we want Alexander to
find more bugs.
So putting an
arbitrary limit (patch attached) is probably the correct path forward. I'd
also remove pg_authid's TOAST table while at it.
Shouldn't we enforce the limit in every case in encrypt_password,
not just this one? (I do agree that encrypt_password is an okay
place to enforce it.)
Yeah, that seems like a good idea. I've attached a more fleshed-out patch
set that applies the limit in all cases.
+1; if there's any breakage, my guess is it would be on very long plaintext
passwords, but that would be from a very old upgrade?
IIUC there's zero support for plain-text passwords in newer versions, and
any that remain in older clusters will be silently converted to a hash by
pg_upgrade.
I think you will get pushback from a limit of 256 bytes --- I seem
to recall discussion of actual use-cases where people were using
strings of a couple of kB. Whatever the limit is, the error message
had better cite it explicitly.
I think it's OK to be a bit generous with the limit. Also, currently our
hashes are 256-bit (I know the above says bytes), but this could increase
should we support larger hashes.
Hm. Are you thinking of commit 67a472d? That one removed the password
length restrictions in client-side code and password message packets, which
I think is entirely separate from the lengths of the hashes stored in
rolpassword.
Also, the ereport call needs an errcode.
ERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.
This is added in v2.
--
nathan
On Thu, Sep 19, 2024 at 09:46:00PM -0500, Nathan Bossart wrote:
On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:
Shouldn't we enforce the limit in every case in encrypt_password,
not just this one? (I do agree that encrypt_password is an okay
place to enforce it.)
Yeah, that seems like a good idea. I've attached a more fleshed-out patch
set that applies the limit in all cases.
Not sure. Is this really something we absolutely need? Sure, this
generates a better error when inserting a record too long to
pg_authid, but removing the toast relation is enough to avoid the
problems one would see when authenticating. Not sure if this argument
is enough to count as an objection, just sharing some doubts :)
Removing the toast relation for pg_authid sounds good to me.
 -- These are the toast table and index of pg_authid.
-REINDEX TABLE CONCURRENTLY pg_toast.pg_toast_1260; -- no catalog toast table
+REINDEX TABLE CONCURRENTLY pg_toast.pg_toast_1262; -- no catalog toast table
 ERROR:  cannot reindex system catalogs concurrently
-REINDEX INDEX CONCURRENTLY pg_toast.pg_toast_1260_index; -- no catalog toast index
+REINDEX INDEX CONCURRENTLY pg_toast.pg_toast_1262_index; -- no catalog toast index
This comment should be refreshed as of s/pg_authid/pg_database/.
--
Michael
On 9/20/24 1:23 AM, Michael Paquier wrote:
On Thu, Sep 19, 2024 at 09:46:00PM -0500, Nathan Bossart wrote:
On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:
Shouldn't we enforce the limit in every case in encrypt_password,
not just this one? (I do agree that encrypt_password is an okay
place to enforce it.)
Yeah, that seems like a good idea. I've attached a more fleshed-out patch
set that applies the limit in all cases.
Not sure. Is this really something we absolutely need? Sure, this
generates a better error when inserting a record too long to
pg_authid, but removing the toast relation is enough to avoid the
problems one would see when authenticating. Not sure if this argument
is enough to count as an objection, just sharing some doubts :)
The errors from lack of TOAST are confusing to users. Why can't we have
a user-friendly error here?
Jonathan
On Fri, Sep 20, 2024 at 10:06:28AM -0400, Jonathan S. Katz wrote:
On 9/20/24 1:23 AM, Michael Paquier wrote:
Not sure. Is this really something we absolutely need? Sure, this
generates a better error when inserting a record too long to
pg_authid, but removing the toast relation is enough to avoid the
problems one would see when authenticating. Not sure if this argument
is enough to count as an objection, just sharing some doubts :)
The errors from lack of TOAST are confusing to users. Why can't we have a
user-friendly error here?
If I wanted to argue against adding a user-friendly error, I'd point out
that it's highly unlikely anyone is actually trying to use super long
hashes unless they are trying to break things, and it's just another
arbitrary limit that we'll need to maintain/enforce. But on the off-chance
that someone is building a custom driver that generates long hashes for
whatever reason, I'd imagine that a clear error would be more helpful than
"row is too big."
Here is a v3 patch set that fixes the test comment and a compiler warning
in cfbot.
--
nathan
Nathan Bossart <nathandbossart@gmail.com> writes:
Here is a v3 patch set that fixes the test comment and a compiler warning
in cfbot.
Nitpick: the message should say "%d bytes" not "%d characters",
because we're counting bytes. Passes an eyeball check otherwise.
regards, tom lane
On Fri, Sep 20, 2024 at 12:27:41PM -0400, Tom Lane wrote:
Nitpick: the message should say "%d bytes" not "%d characters",
because we're counting bytes. Passes an eyeball check otherwise.
Thanks for reviewing. I went ahead and committed 0002 since it seems like
there's consensus on that one. I've attached a rebased version of 0001
with s/characters/bytes.
--
nathan
Attachments:
v4-0001-place-limit-on-password-hash-length.patch (text/plain, +63/-19)
On Sat, Sep 21, 2024 at 03:25:54PM -0500, Nathan Bossart wrote:
Thanks for reviewing. I went ahead and committed 0002 since it seems like
there's consensus on that one. I've attached a rebased version of 0001
with s/characters/bytes.
For the reasons discussed upthread [0], I can't bring myself to add an
arbitrary limit to the password hash length. I am going to leave 0001
uncommitted for now.
[0]: /messages/by-id/Zu2eT2H8OT3OXauc@nathan
--
nathan
Nathan Bossart <nathandbossart@gmail.com> writes:
For the reasons discussed upthread [0], I can't bring myself to add an
arbitrary limit to the password hash length. I am going to leave 0001
uncommitted for now.
[0] /messages/by-id/Zu2eT2H8OT3OXauc@nathan
I'm confused, as in [0] you said
... But on the off-chance
that someone is building a custom driver that generates long hashes for
whatever reason, I'd imagine that a clear error would be more helpful than
"row is too big."
I agree with the idea that complaining about the password being too
long is far more intelligible than that. Another problem with
leaving it as it stands in HEAD is that the effective limit is now
platform-specific, if not indeed dependent on the phase of the moon
(or at least, the other contents of the pg_authid row). I fear it
would be very easy to construct cases where a password is accepted
on one machine but fails with "row is too big" on another. A
uniform limit seems much less fraught with surprises.
regards, tom lane
On Thu, Oct 03, 2024 at 05:39:06PM -0400, Tom Lane wrote:
I agree with the idea that complaining about the password being too
long is far more intelligible than that. Another problem with
leaving it as it stands in HEAD is that the effective limit is now
platform-specific, if not indeed dependent on the phase of the moon
(or at least, the other contents of the pg_authid row). I fear it
would be very easy to construct cases where a password is accepted
on one machine but fails with "row is too big" on another. A
uniform limit seems much less fraught with surprises.
I don't mind proceeding with the patch if there is strong support for it.
I wavered only because it's hard to be confident that we are choosing the
right limit. AFAICT 256 bytes ought to be sufficient to avoid "row is too
big" errors independent of BLCKSZ today, but maybe someone will add another
varlena column in the future that breaks it. Or maybe we add a new
password hashing method that produces longer strings. Or maybe someone is
doing something really out there like storing additional information in the
salt. I don't have any reason to believe that any of these things are
happening or are likely to happen anytime soon, but they seem similar in
likelihood to someone building a custom driver that generates ginormous
hashes. But I can also buy the argument that none of this is a strong
enough reason to avoid making the error message nicer...
--
nathan
Nathan Bossart <nathandbossart@gmail.com> writes:
I don't mind proceeding with the patch if there is strong support for it.
I wavered only because it's hard to be confident that we are choosing the
right limit.
I'm not that fussed about it; surely 256 is more than anyone is using?
If not, we'll get push-back and then we can have a discussion about the
correct limit that's informed by more than guesswork.
... But I can also buy the argument that none of this is a strong
enough reason to avoid making the error message nicer...
There's that, and there's also the fact that if you assume someone is
using $sufficiently-long-passwords then we might have broken their
use-case already. We can't have much of a conversation here without
a concrete case to look at.
regards, tom lane