Memory Usage

Started by Nathan Barnett over 25 years ago · 5 messages · general
#1 Nathan Barnett
nbarnett@centuries.com

I am having a small issue with PostgreSQL 7.0.3 on FreeBSD 4.2-STABLE.
When I perform an update on a table with roughly 2 million rows, the
postgres process starts eating memory until it eventually exhausts all of
it and exits without finishing. I have also seen the same thing happen
with complex SELECT statements that I have run. The update statement is
as follows:

UPDATE pages SET createdtime = NOW();

Is there a reason why this would take up all of the memory?? Also, the
SELECT statement that caused the issue was a SELECT with a GROUP BY, in
the form of:

SELECT COUNT(*) AS hitcount, date_part('yy', createdtime) AS Year,
date_part('mm', createdtime) AS Month, date_part('dd', createdtime) AS Day
FROM Log GROUP BY Year, Month, Day

I found a better way to do this afterwards, but why wouldn't the database
utilize temporary tables to complete this query without using all the RAM?

++++++++++++++++++++++++
+Nathan Barnett
+Sr. Consultant
+Centuries Consulting, Inc.
++++++++++++++++++++++++
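
Nathan does not say what his better formulation was; one common rewrite in
PostgreSQL (the date_trunc() call here is an illustrative guess, not taken
from the thread) collapses the three date_part() calls into a single
truncated timestamp:

```sql
-- Hypothetical rewrite of the GROUP BY query: group on one truncated
-- timestamp instead of three separate date_part() expressions.
SELECT COUNT(*) AS hitcount,
       date_trunc('day', createdtime) AS day
FROM Log
GROUP BY day;
```

Fewer function invocations per row also means fewer leaked
pass-by-reference results under 7.0, which is the leak Tom describes in
his reply.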
#2Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Barnett (#1)
Re: Memory Usage

Nathan Barnett <nbarnett@centuries.com> writes:

> UPDATE pages SET createdtime = NOW();
> Is there a reason why this would take up all of the memory??

The now() function invocation leaks memory ... only a dozen or so bytes
per invocation, but that adds up over millions of rows :-(. In 7.0.*
the memory isn't recovered until end of statement. 7.1 fixes this by
recovering temporary memory after each tuple.

You'll see the same behavior for functions and operators on any
pass-by-reference datatype, not just timestamp.

regards, tom lane
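
Since 7.0 only reclaims this memory at end of statement, one workaround
(a sketch only; the integer key column "id" and the slice boundaries are
assumptions, not from the thread) is to split the update into several
smaller statements so that each one leaks a bounded amount:

```sql
-- Workaround sketch for 7.0: bound the per-statement leak by updating
-- in slices. Assumes a hypothetical integer key "id" on pages.
UPDATE pages SET createdtime = NOW() WHERE id <   500000;
UPDATE pages SET createdtime = NOW() WHERE id >=  500000 AND id < 1000000;
UPDATE pages SET createdtime = NOW() WHERE id >= 1000000 AND id < 1500000;
UPDATE pages SET createdtime = NOW() WHERE id >= 1500000;
-- Each statement frees its leaked per-call memory when it finishes.
```

Note that now() returns the transaction start time, so if each slice runs
in its own transaction the stored timestamps will differ slightly; wrap
the slices in one transaction if they must match.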

#3 Andrew W. Nosenko
awn@bcs.zp.ua
In reply to: Tom Lane (#2)
Re: Memory Usage

Tom Lane wrote:
: Nathan Barnett <nbarnett@centuries.com> writes:
: > UPDATE pages SET createdtime = NOW();
:
: > Is there a reason why this would take up all of the memory??
:
: The now() function invocation leaks memory ... only a dozen or so bytes
: per invocation, but that adds up over millions of rows :-(. In 7.0.*
: the memory isn't recovered until end of statement. 7.1 fixes this by
: recovering temporary memory after each tuple.

As far as I can see, it is not that simple :-(
For UPDATE -- maybe, but not for SELECT.

While a SELECT is executing, Postgres (7.0.3) allocates as much memory
as is needed to store the full result set of the query. For
select * from some_big_table;
this can sometimes be more than the available physical memory plus swap. :-(

In the general case I cannot forbid users from running this (and similar)
queries.

Question: can I tell the postmaster (or the postgres backend) not to use
more than some amount of memory (per backend or for all running backends
together -- no difference to me), and, when that limit is exceeded, to
switch to temporary files, or simply roll back the transaction and close
the connection if temporary files are impossible? (Yes, I mean that
bringing down one postgres process is cheaper than bringing down or
hanging the whole machine.)

Any ideas/workarounds?

--
Andrew W. Nosenko (awn@bcs.zp.ua)

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew W. Nosenko (#3)
Re: Memory Usage

Andrew Nosenko <awn@bcs.zp.ua> writes:

> While a SELECT is executing, Postgres (7.0.3) allocates as much memory
> as is needed to store the full result set of the query.

The *client* does that (if it uses libpq). Not the server. A user can
only crash his own client if he retrieves an oversize result set.

regards, tom lane
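
Since libpq buffers the entire result set on the client side, the usual
way to keep the client's memory bounded is to pull rows in batches
through a cursor, which 7.0 supports (the cursor name and batch size
below are illustrative):

```sql
-- Fetch a large result in bounded batches instead of all at once.
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM some_big_table;
FETCH 1000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;
```

This keeps only one batch of rows in client memory at a time, at the cost
of extra round trips to the server.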

#5 Andrew W. Nosenko
awn@bcs.zp.ua
In reply to: Tom Lane (#4)
Re: Memory Usage

Tom Lane wrote:
: Andrew Nosenko <awn@bcs.zp.ua> writes:
: > While a SELECT is executing, Postgres (7.0.3) allocates as much memory
: > as is needed to store the full result set of the query.
:
: The *client* does that (if it uses libpq). Not the server. A user can
: only crash his own client if he retrieves an oversize result set.
:

Thanks for explanation.

--
Andrew W. Nosenko (awn@bcs.zp.ua)