max_memory_per_backend GUC to limit backend's memory usage

Started by Vladimir Sitnikov, about 8 years ago, 2 messages, on pgsql-hackers
#1 Vladimir Sitnikov
sitnikov.vladimir@gmail.com

Hi,

I've got a problem with PostgreSQL 9.6.5: a backend gets killed by the OOM
killer, and that shuts the whole DB down.
Of course, the OOM case itself needs investigating (MemoryContextStatsDetail,
etc.), but I wonder if the DB could be more robust.
The sad thing is that a single backend crash results in a DB shutdown, so it
interrupts lots of transactions.

I wonder if a GUC could be implemented, so that only a single backend
fails when its memory use exceeds a limit.
For instance: max_memory_per_backend=100MiB.
The idea is to increase stability by limiting each process. Of course this
would result in "out of memory" when a single query requires 100500MiB
(e.g. it underestimates the hash join). As far as I understand, it
should be safer to terminate just one bad backend than to kill all the
processes.

I did some research, and I have not found prior discussion of this idea.

Vladimir Rusinov> FWIW, lack of per-connection and/or global memory limit
for work_mem is major PITA

/messages/by-id/CAE1wr-ykMDUFMjucDGqU-s98ARk3oiCfhxrHkajnb3f=Up70JA@mail.gmail.com

Vladimir

#2 Andres Freund
andres@anarazel.de
In reply to: Vladimir Sitnikov (#1)
Re: max_memory_per_backend GUC to limit backend's memory usage

Hi,

On 2018-03-23 15:58:55 +0000, Vladimir Sitnikov wrote:

> I've got a problem with PostgreSQL 9.6.5: backend gets killed by OOM
> killer, and it shuts the DB down.
> Of course, the OOM case is to be investigated (MemoryContextStatsDetail,
> etc), however I wonder if DB can be more robust.

Configuring overcommit_memory to not overcommit should do the trick.
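For readers landing here: a minimal sketch of the strict-overcommit configuration Andres refers to, assuming Linux. With overcommit disabled, allocations fail with ENOMEM (which PostgreSQL handles per backend) instead of triggering the OOM killer. The ratio value and file path below are illustrative assumptions, not from the thread:

```shell
# Disable kernel memory overcommit (mode 2 = strict accounting),
# so a backend's failed allocation raises "out of memory" in that
# backend rather than a SIGKILL that restarts the whole cluster.
sysctl -w vm.overcommit_memory=2

# Illustrative: percent of RAM counted toward the commit limit
# (plus swap); tune for your RAM/swap mix.
sysctl -w vm.overcommit_ratio=80

# Persist across reboots (file name is an example):
echo "vm.overcommit_memory=2" >> /etc/sysctl.d/99-postgres.conf
echo "vm.overcommit_ratio=80" >> /etc/sysctl.d/99-postgres.conf
```

Note this protects against the OOM killer globally; it is not a per-backend limit in the sense the original post asks for.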

Greetings,

Andres Freund