BUG #18080: to_tsvector fails for long text input

Started by PG Bug reporting form · over 2 years ago · 5 messages · pgsql-bugs
#1 PG Bug reporting form
noreply@postgresql.org

The following bug has been logged on the website:

Bug reference: 18080
Logged by: Uwe Binder
Email address: uwe.binder@pass-consulting.com
PostgreSQL version: 13.11
Operating system: Rocky Linux 9
Description:

PostgreSQL 13.11 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 11.3.1
20221121 (Red Hat 11.3.1-4), 64-bit

SELECT to_tsvector('english'::regconfig, (REPEAT('<Long123456789/>'::text,
20000000)));
results in
ERROR: invalid memory alloc request size 2133333320

By contrast, SELECT LENGTH(REPEAT('<Long123456789/>'::text, 20000000));
correctly returns 320000000.

PostgreSQL is running in a Docker container with 4 GB of memory.

#2 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: PG Bug reporting form (#1)
Re: BUG #18080: to_tsvector fails for long text input

On 2023-Sep-04, PG Bug reporting form wrote:

SELECT to_tsvector('english'::regconfig, (REPEAT('<Long123456789/>'::text,
20000000)));
results in
ERROR: invalid memory alloc request size 2133333320

This is because to_tsvector_byid does this:

    prs.lenwords = VARSIZE_ANY_EXHDR(in) / 6;  /* just estimation of word's
                                                * number */
    if (prs.lenwords < 2)
        prs.lenwords = 2;
    prs.curwords = 0;
    prs.pos = 0;
    prs.words = (ParsedWord *) palloc(sizeof(ParsedWord) * prs.lenwords);

where sizeof(ParsedWord) is 40 (on my laptop). So this tries to
allocate more memory than palloc() is willing to give it. The attached
patch fixes just the query you supplied and nothing else.
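
For the record, that arithmetic reproduces the reported request size
exactly. A minimal standalone sketch, using only the constants quoted
above (MaxAllocSize is PostgreSQL's ordinary 1 GB allocation cap):

    #include <stdio.h>

    int
    main(void)
    {
        long long text_len = 320000000LL;         /* LENGTH() of the input text */
        long long lenwords = text_len / 6;        /* crude word-count estimate: 53333333 */
        long long word_sz = 40;                   /* sizeof(ParsedWord) on a 64-bit build */
        long long request = lenwords * word_sz;   /* bytes passed to palloc() */
        long long max_alloc = 0x3fffffff;         /* MaxAllocSize = 1 GB - 1 */

        printf("request %lld > limit %lld\n", request, max_alloc);
        return 0;
    }

This prints "request 2133333320 > limit 1073741823", i.e. the exact
number in the error message.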

I wonder if we want to support this kind of thing; I suspect we don't.
Other parts of text-search would fail in the same way and would also
need to receive similar fixes. However, the real problem comes when we
try to store such huge tsvectors, because that means we end up with
"huge" tuples on disk that need I/O support. Eventually AFAIR you run
into the message-size limit in the FE/BE protocol, and everything
crashes and burns, because that limit cannot be changed without bumping
the protocol version.

So I don't think this patch actually does you any good.

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/

Attachments:

huge_tsvector.patch (text/x-diff; charset=utf-8; +6, -3)
#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#2)
Re: BUG #18080: to_tsvector fails for long text input

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:

On 2023-Sep-04, PG Bug reporting form wrote:

SELECT to_tsvector('english'::regconfig, (REPEAT('<Long123456789/>'::text,
20000000)));
results in
ERROR: invalid memory alloc request size 2133333320

This is because to_tsvector_byid does this:
    prs.lenwords = VARSIZE_ANY_EXHDR(in) / 6;  /* just estimation of word's
                                                * number */
    if (prs.lenwords < 2)
        prs.lenwords = 2;

Yeah. My thought about blocking the error had been to limit
prs.lenwords to MaxAllocSize/sizeof(ParsedWord) in this code.
I doubt that switching over to MCXT_ALLOC_HUGE is a good idea.
(Would we not also have to touch the places that repalloc that
array?)
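
For context, the huge-allocation alternative would look roughly like the
sketch below; palloc_extended() with MCXT_ALLOC_HUGE is the existing
mechanism for requests above MaxAllocSize, and whether to reach for it
here is exactly what is being doubted:

    /* sketch only: allow the initial array to exceed MaxAllocSize */
    prs.words = (ParsedWord *) palloc_extended(sizeof(ParsedWord) * prs.lenwords,
                                               MCXT_ALLOC_HUGE);

And, as the parenthetical points out, every later enlargement of
prs.words would then need the same treatment (e.g. repalloc_huge()).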

regards, tom lane

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#3)
Re: BUG #18080: to_tsvector fails for long text input

I wrote:

Yeah. My thought about blocking the error had been to limit
prs.lenwords to MaxAllocSize/sizeof(ParsedWord) in this code.

Concretely, as attached. This allows the given test case to
complete, since it doesn't actually create very many distinct
words. In other cases we could expect to fail when the array
has to get enlarged, but that's just a normal implementation
limitation.
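
A sketch of what bounding lenwords could look like, assuming the
attached patch is essentially a two-line clamp on the estimate (the
authoritative change is in the attachment):

    prs.lenwords = VARSIZE_ANY_EXHDR(in) / 6;  /* just estimation of word's
                                                * number */
    if (prs.lenwords < 2)
        prs.lenwords = 2;
    else if (prs.lenwords > MaxAllocSize / sizeof(ParsedWord))
        prs.lenwords = MaxAllocSize / sizeof(ParsedWord);   /* keep palloc() happy */

With the estimate capped, the initial palloc() always succeeds; an input
that really does produce more distinct words than the cap fails later,
when the array has to be enlarged, which is the ordinary implementation
limitation described above.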

I looked for other places that might initialize lenwords
to not-sane values, and didn't find any.

BTW, the field order in ParsedWord is such that there's a fair
amount of wasted pad space on 64-bit builds. I doubt we can
get away with rearranging it in released branches; but maybe
it's worth doing something about that in HEAD, to push out
the point at which you hit the 1GB limit.

regards, tom lane

Attachments:

bound-lenwords-in-to_tsvector_byid.patch (text/x-diff; charset=us-ascii; +2, -0)
#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#4)
Re: BUG #18080: to_tsvector fails for long text input

I wrote:

BTW, the field order in ParsedWord is such that there's a fair
amount of wasted pad space on 64-bit builds. I doubt we can
get away with rearranging it in released branches; but maybe
it's worth doing something about that in HEAD, to push out
the point at which you hit the 1GB limit.

I poked at that a little bit. We can reduce 64-bit sizeof(ParsedWord)
from 40 bytes to 24 bytes with the attached patch. The main thing
needed to make this pack tightly is to reduce the "alen" field from
uint32 to uint16. While it's not immediately obvious that that's
a good thing to do, a look at the one place where alen is increased
(uniqueWORD() in to_tsany.c) shows that it cannot get to more than
twice MAXNUMPOS:

    if (res->pos.apos[0] < MAXNUMPOS - 1 && ...)
    {
        if (res->pos.apos[0] + 1 >= res->alen)
        {
            res->alen *= 2;
            res->pos.apos = (uint16 *) repalloc(res->pos.apos,
                                                sizeof(uint16) * res->alen);
        }

MAXNUMPOS is currently 256, and even if it's possible to increase
that it seems unlikely that we'd want to make it more than 32k.
So this limitation seems OK to me.
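
For reference, here is one way the struct could pack down to 24 bytes,
assuming a layout along the lines of ParsedWord in
src/include/tsearch/ts_utils.h; the authoritative field order is in the
attached patch:

    typedef struct
    {
        uint16      len;
        uint16      nvariant;
        uint16      flags;
        uint16      alen;       /* narrowed from uint32; stays under 2 * MAXNUMPOS = 512 */
        union
        {
            uint16      pos;
            uint16     *apos;
        }           pos;
        char       *word;
    } ParsedWord;               /* 8 + 8 + 8 = 24 bytes on 64-bit builds */

Putting the four uint16 fields first packs them into one pointer-sized
unit, so the two pointer members that follow need no alignment padding
at all.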

regards, tom lane

Attachments:

pack-ParsedWord-tightly.patch (text/x-diff; charset=us-ascii; +3, -3)