out of memory problem in CVS

Started by laser · almost 21 years ago · 5 messages · general
#1laser
laser@toping.com.cn

hi,
we are using PostgreSQL to analyze our web logs; we have a 6M table,
and while running this query:

SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;

the table structure is:

CREATE TABLE ctrraw
(
    cdate date,
    ip inet,
    kw varchar(128),
    prd varchar(6),
    pos int,
    url varchar(512),
    ct int
);

CREATE INDEX ctr_ip_index ON ctrraw (ip);

our box is FreeBSD 4.1 with 2 GB of memory, but our postgresql user has
a 512 MB data segment ulimit.

The query runs out of memory; the log file is attached.

The postgresql was checked out from CVS around June 25.
We are going to test with a newly checked-out CVS tip, but want to report
here first.

regards laser

Attachments:

logfile (text/plain)
#2Neil Conway
neilc@samurai.com
In reply to: laser (#1)
Re: out of memory problem in CVS

laser wrote:

SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;

The query runs out of memory; the log file is attached.

Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.

-Neil
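The diagnostics Neil is asking for can be gathered like this, using the table and query from the original report (run against the same database; whether the plan actually shows a HashAggregate node is what needs confirming):

```sql
-- Refresh planner statistics so row-count and group-count estimates
-- are current
ANALYZE ctrraw;

-- Show the chosen plan; a HashAggregate node over a badly
-- underestimated number of groups is the symptom of the known
-- runaway-memory problem
EXPLAIN
SELECT url, sum(ct) AS ctperkw
FROM ctrraw
GROUP BY url
ORDER BY ctperkw DESC
LIMIT 1000;
```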

#3Tom Lane
tgl@sss.pgh.pa.us
In reply to: Neil Conway (#2)
Re: out of memory problem in CVS

Neil Conway <neilc@samurai.com> writes:

Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.

It would be useful to confirm whether this behavior is any different
from 8.0 ...

regards, tom lane

#4laser
laser@toping.com.cn
In reply to: Neil Conway (#2)
Re: out of memory problem in CVS

Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.

-Neil

yes, I ran VACUUM ANALYZE VERBOSE and then ran the query,
and still got the out-of-memory error.
After that, we dug into the archives and found a workaround:
issue 'set enable_hashagg=off' and re-run the query,
and now the query gives me the result.
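The workaround described above, written out as a full session (SET only affects the current session; RESET restores the default afterwards):

```sql
-- Disable hashed aggregation for this session, forcing the planner
-- to fall back to a sort-based GroupAggregate, which sorts (spilling
-- to disk if needed) instead of growing an in-memory hash table
SET enable_hashagg = off;

SELECT url, sum(ct) AS ctperkw
FROM ctrraw
GROUP BY url
ORDER BY ctperkw DESC
LIMIT 1000;

-- Restore the default once the query is done
RESET enable_hashagg;
```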

regards laser

#5laser
laser@toping.com.cn
In reply to: Tom Lane (#3)
Re: out of memory problem in CVS

Tom Lane wrote:

Neil Conway <neilc@samurai.com> writes:

Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.

It would be useful to confirm whether this behavior is any different
from 8.0 ...

Ok, I reran the query on another 8.0.3 box; it has 1 GB RAM, runs Linux,
and has no 512 MB data segment limit (ulimit -d unlimited). I loaded the
data and ran VACUUM ANALYZE, then ran the query. The query now returns
the result, but through top we found that the postmaster process used
600+ MB of RAM, with enable_hashagg ON.

Hope that helps.

regards laser