Avoid memory leaks during ANALYZE's compute_index_stats() ?

Started by Tom Lane about 15 years ago, 4 messages
#1 Tom Lane
tgl@sss.pgh.pa.us

I looked into the out-of-memory problem reported by Jakub Ouhrabka here:
http://archives.postgresql.org/pgsql-general/2010-11/msg00353.php

It's pretty simple to reproduce, even in HEAD; what you need is an index
expression that computes a bulky intermediate result. His example is

md5(array_to_string(f1, ''::text))

where f1 is a bytea array occupying typically 15kB per row. Even
though the final result of md5() is only 32 bytes, evaluation of this
expression will eat about 15kB for the detoasted value of f1, roughly
double that for the results of the per-element output function calls
done inside array_to_string, and another 30k for the final result string
of array_to_string. And *none of that gets freed* until
compute_index_stats() is all done. In my testing, with the default
stats target of 100, this gets repeated for 30k sample rows, requiring
something in excess of 2GB in transient space. Jakub was using stats
target 500 so it'd be closer to 10GB for him.
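(To spell out the arithmetic: roughly 15k + 30k + 30k = 75kB of
transient data per row, times the 300 * 100 = 30,000 rows sampled at
the default target, comes to about 2.2GB.)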

AFAICS the only practical fix for this is to have the inner loop of
compute_index_stats() copy each index expression value out of the
per-tuple memory context and into the per-index "Analyze Index" context.
That would allow it to reset the per-tuple memory context after each
FormIndexDatum call and thus clean up whatever intermediate result trash
the evaluation left behind. The extra copying is a bit annoying, since
it would add cycles while accomplishing nothing useful for index
expressions with no intermediate results, but I'm thinking this is a
must-fix.
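Roughly, I'd envision the per-row loop in compute_index_stats() ending
up something like the untested sketch below (names as in analyze.c; the
loop runs with ind_context, the per-index "Analyze Index" context, as
CurrentMemoryContext, so the copied datums land there and survive the
per-tuple reset):

    for (rowno = 0; rowno < numrows; rowno++)
    {
        HeapTuple   heapTuple = rows[rowno];

        /* release whatever the previous row's evaluation left behind */
        ResetExprContext(econtext);

        /* make the row available to the expression machinery */
        ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);

        /* (partial-index predicate check omitted for brevity) */

        /*
         * Evaluate the index expressions; the results, and any
         * intermediate trash, go into the estate's per-tuple context.
         */
        FormIndexDatum(indexInfo, slot, estate, values, isnull);

        for (i = 0; i < attr_cnt; i++)
        {
            VacAttrStats *stats = thisdata->vacattrstats[i];
            int         attnum = stats->attr->attnum;

            if (isnull[attnum - 1])
            {
                exprvals[tcnt] = (Datum) 0;
                exprnulls[tcnt] = true;
            }
            else
            {
                /*
                 * Copy into ind_context so the value outlives the
                 * next ResetExprContext.
                 */
                exprvals[tcnt] = datumCopy(values[attnum - 1],
                                           stats->attrtype->typbyval,
                                           stats->attrtype->typlen);
                exprnulls[tcnt] = false;
            }
            tcnt++;
        }
    }

The datumCopy() call is the extra copying complained of above; the
per-row ResetExprContext() is what actually reclaims the
intermediate-result trash.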

Comments?

regards, tom lane

#2 Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#1)
Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?

On 11/8/10 5:04 PM, Tom Lane wrote:
> The extra copying is a bit annoying, since
> it would add cycles while accomplishing nothing useful for index
> expressions with no intermediate results, but I'm thinking this is a
> must-fix.

I'd say that performance numbers are what to check on this. How much
does it affect low-memory expressions to do the copying?

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#2)
Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?

Josh Berkus <josh@agliodbs.com> writes:
> On 11/8/10 5:04 PM, Tom Lane wrote:
>> The extra copying is a bit annoying, since
>> it would add cycles while accomplishing nothing useful for index
>> expressions with no intermediate results, but I'm thinking this is a
>> must-fix.

> I'd say that performance numbers are what to check on this. How much
> does it affect low-memory expressions to do the copying?

It's noticeable but not horrible. I tried this test case:

regression=# \d tst
        Table "public.tst"
 Column |       Type       | Modifiers
--------+------------------+-----------
 f1     | double precision |
Indexes:
    "tsti" btree ((f1 + 1.0::double precision))

with 100000 rows on a 32-bit machine (so that float8 is pass-by-ref).
The ANALYZE time went from about 625 msec to about 685. I believe that
this is pretty much the worst case percentage-wise: the table is small
enough to fit in RAM, so no I/O is involved, and the index expression is
about as simple and cheap to evaluate as it could possibly be, and the
amount of work done analyzing the main table is about as small as it
could possibly be. In any other situation those other components of
the ANALYZE cost would grow proportionally more than the copying cost.
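(For the record, that's about 60 msec of added copying overhead on a
625 msec run, i.e. just under 10%, in this worst case.)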

Not-too-well-tested-yet patch attached.

regards, tom lane

#4 Jakub Ouhrabka
kuba@comgate.cz
In reply to: Tom Lane (#1)
Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?

Hi Tom,

thanks for the brilliant analysis - now we know how to avoid the problem.

As a side note: from the user's point of view it would be really nice to
know that the error was caused by auto-ANALYZE - at least on 8.2 that's
not obvious from the server log. It was the first message with the given
backend PID, so it looked to me like a problem during backend startup -
we have log_connections set to on...

Thanks,

Kuba
