out of memory problem in CVS
hi,
we are using postgresql to analyze our web logs; we have a table with 6M rows,
and we get an error while running the query:
SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;
the table structure is:
CREATE TABLE ctrRAW
(
cdate date,
ip inet,
kw varchar(128),
prd varchar(6),
pos int,
url varchar(512),
ct int
);
create INDEX ctr_ip_index on ctrraw(ip);
our box is FreeBSD 4.1 with 2G of memory, but our postgresql user has
a 512M data segment ulimit.
the query ran out of memory; the log file is attached.
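For reference, the data segment limit can be checked from the shell that starts the postmaster (it is inherited by child processes; the values below are illustrative):

```shell
# Show the current data segment limit for this shell
# (in kbytes, or "unlimited"); the postmaster inherits it.
ulimit -d

# To raise it for the session before starting the server
# (only possible up to the hard limit set by the system):
# ulimit -d unlimited
```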
the postgresql was checked out from CVS around June 25.
we are going to test with a newly checked-out CVS tip, but wanted to
report here first.
regards laser
Attachment: logfile (text/plain)
laser wrote:
SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;
and the query run out of memory, the log file attached.
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
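Concretely, that check could look something like this (table and column names taken from the report; whether the plan changes after ANALYZE depends on your data):

```sql
-- Refresh planner statistics, then inspect the plan.
ANALYZE ctrraw;
EXPLAIN SELECT url, sum(ct) AS ctperkw
FROM ctrraw
GROUP BY url
ORDER BY ctperkw DESC
LIMIT 1000;
-- A HashAggregate node in the output means the hashed-aggregation path
-- was chosen; its memory use depends on the planner's estimate of the
-- number of distinct url values, which stale statistics can get badly wrong.
```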
-Neil
Neil Conway <neilc@samurai.com> writes:
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
It would be useful to confirm whether this behavior is any different
from 8.0 ...
regards, tom lane
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
-Neil
yes, I ran VACUUM ANALYZE VERBOSE and then ran the query,
and still got the out of memory error.
After that, we dug in the archives and found a solution:
issue 'set enable_hashagg=off' and re-run the query,
and now the query gives me the result.
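For the archives, the full workaround session looks like this (query copied from the original report):

```sql
-- Session-level workaround: disable hashed aggregation so the planner
-- falls back to sort-based grouping (GroupAggregate), which can spill
-- to disk instead of growing without bound in memory.
SET enable_hashagg = off;

SELECT url, sum(ct) AS ctperkw
FROM ctrraw
GROUP BY url
ORDER BY ctperkw DESC
LIMIT 1000;
```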
regards laser
Tom Lane wrote:
Neil Conway <neilc@samurai.com> writes:
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
It would be useful to confirm whether this behavior is any different
from 8.0 ...
Ok, I re-ran the query on another 8.0.3 box; it has 1G RAM and runs
Linux, without the 512M data segment limit (ulimit -d unlimited). I loaded
the data and ran VACUUM ANALYZE, then ran the query. Now the query returns
the result; via top, we found that the postmaster process used 600+M RAM,
with enable_hashagg ON.
hope that helps.
regards laser