invalid memory alloc request size from pg_stat_activity?
Hello,
We're upgrading from PostgreSQL 9.6 to 10.6 and everything is working fine, but any query against pg_stat_activity results in: ERROR: invalid memory alloc request size 1652113408
e.g.:
ourdb=> SELECT pid, state, age(query_start, clock_timestamp()), usename, query FROM pg_stat_activity WHERE query != '<IDLE>' AND state != 'idle' ORDER BY age limit 100;
ERROR: invalid memory alloc request size 1652113408
Maybe this has to do with us setting track_activity_query_size=102400? Is there a known safe maximum for that, or could there be some other problem?
Thanks,
James
On Mon, May 6, 2019 at 20:04, James Tomson <james@pushd.com> wrote:
ourdb=> SELECT pid, state, age(query_start, clock_timestamp()), usename, query FROM pg_stat_activity WHERE query != '<IDLE>' AND state != 'idle' ORDER BY age limit 100;
ERROR: invalid memory alloc request size 1652113408
That is because it is limited to 1GB - 1.
Maybe this has to do with us setting track_activity_query_size=102400? Is there a known safe maximum for that, or could there be some other problem?
Question is: why do you want to return such a big query text? That's
impractical for admin/monitoring tools. We can usually identify a
query with 1024 bytes (that is the default).
--
Euler Taveira Timbira -
http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
On 2019-May-07, Euler Taveira wrote:
On Mon, May 6, 2019 at 20:04, James Tomson <james@pushd.com> wrote:
ourdb=> SELECT pid, state, age(query_start, clock_timestamp()), usename, query FROM pg_stat_activity WHERE query != '<IDLE>' AND state != 'idle' ORDER BY age limit 100;
ERROR: invalid memory alloc request size 1652113408
That is because it is limited to 1GB - 1.
Maybe this has to do with us setting track_activity_query_size=102400? Is there a known safe maximum for that, or could there be some other problem?
Question is: why do you want to return such a big query text? That's
impractical for admin/monitoring tools. We can usually identify a
query with 1024 bytes (that is the default).
Hmm, but 102400 is only 100kB, nowhere near the 1GB-1 limit, so there's
something odd going on there.
I would start investigating by attaching a debugger and setting a
breakpoint on 'errfinish', then running the above query; past the
backtrace here, as it might be illuminating.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
James Tomson <james@pushd.com> writes:
We're upgrading from postgresql 9.6 to 10.6 and everything is working fine, but any query against pg_stat_activity results in ERROR: invalid memory alloc request size 1652113408
e.g:
ourdb=> SELECT pid, state, age(query_start, clock_timestamp()), usename, query FROM pg_stat_activity WHERE query != '<IDLE>' AND state != 'idle' ORDER BY age limit 100;
ERROR: invalid memory alloc request size 1652113408
That seems odd ...
Maybe this has to do with us setting track_activity_query_size=102400?
It wouldn't surprise me if reading pg_stat_activity causes allocation of
track_activity_query_size * max_connections bytes, but unless you also
have max_connections set to some pretty insane value, that wouldn't
be enough to fit this observation.
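(A back-of-the-envelope sketch of that arithmetic, assuming the snapshot buffer is simply max_connections * track_activity_query_size; the real allocation adds per-slot overhead and slots for auxiliary processes, so treat these numbers as approximate:)

```python
# Rough check of when reading pg_stat_activity could hit the
# "invalid memory alloc request size" error. Assumption: the local
# snapshot buffer is roughly max_connections * track_activity_query_size.
MAX_ALLOC = 2**30 - 1  # palloc's 1GB - 1 limit

def activity_buffer_bytes(max_connections, track_activity_query_size):
    return max_connections * track_activity_query_size

# Default-ish settings are nowhere near the limit:
print(activity_buffer_bytes(100, 1024))        # 102400 bytes

# But 100kB query texts with ~16K connections blow past 1GB - 1:
print(activity_buffer_bytes(16384, 102400))    # 1677721600 bytes, ~1.6GB
print(activity_buffer_bytes(16384, 102400) > MAX_ALLOC)  # True
```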
Did you have these parameters set differently in 9.6?
regards, tom lane
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
Hmm, but 102400 is only 100kB, nowhere near the 1GB-1 limit, so there's
something odd going on there.
I can reproduce the described behavior by also setting max_connections
to something around 16K.
Now, it seems pretty silly to me to be burning in excess of 1GB of shmem
just for the current-query strings, and then that much again in every
backend that reads pg_stat_activity. But should we be telling people they
can't do it? I'm working on a patch to use MemoryContextAllocHuge for
the "localactivity" buffer in pgstat_read_current_status. It might seem
dumb now, but perhaps in ten years it'll be common.
regards, tom lane
Thanks all - we did have our max_connections set very high (100k), as we upgraded this db in a load-test environment and did not want to restart it after hitting the lower, saner production limits during load testing.
I am surprised this allocation was taking place even for unused connection slots - I've also verified that lowering max_connections to 10K on this instance fixed the issue for us. Much appreciated!
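(For anyone tuning these two settings together, the same arithmetic in reverse - again a sketch that assumes the buffer is the simple product of the two settings, with no per-slot overhead, so a result that barely fits should be treated with suspicion:)

```python
MAX_ALLOC = 2**30 - 1  # 1GB - 1, the palloc limit behind the error message

def fits_under_limit(max_connections, track_activity_query_size):
    """Rough check: does the pg_stat_activity snapshot buffer stay
    under the 1GB - 1 allocation limit? (Ignores per-slot overhead.)"""
    return max_connections * track_activity_query_size < MAX_ALLOC

print(fits_under_limit(100_000, 102400))  # False: the original 100k-connection setup
print(fits_under_limit(10_000, 102400))   # True: the lowered setting that fixed it
```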
Best,
James
On May 7, 2019, at 11:14 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
Hmm, but 102400 is only 100kB, nowhere near the 1GB-1 limit, so there's
something odd going on there.
I can reproduce the described behavior by also setting max_connections
to something around 16K.
Now, it seems pretty silly to me to be burning in excess of 1GB of shmem
just for the current-query strings, and then that much again in every
backend that reads pg_stat_activity. But should we be telling people they
can't do it? I'm working on a patch to use MemoryContextAllocHuge for
the "localactivity" buffer in pgstat_read_current_status. It might seem
dumb now, but perhaps in ten years it'll be common.
regards, tom lane