huge backend processes

Started by Jim Mercer · over 26 years ago · 3 messages · general
#1 Jim Mercer
jim@reptiles.org

maybe i'm doing something wrong here:

CREATE TABLE samples
(
mark abstime,
subnet inet,
bytes_in float8,
bytes_out float8
);
CREATE INDEX samples_mark ON samples (mark);

-- fill it with lots and lots of data

BEGIN WORK;
DECLARE mycurs CURSOR FOR
SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out
FROM samples
WHERE mark >= 'epoch or another date'::abstime;

-- LOOP
FETCH FORWARD 1000 IN mycurs;

END WORK;

given the above, actually done using C/libpq, i run my program, which
does a PQclear after each FETCH.
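the loop described above, sketched in minimal libpq C (the conninfo string, error handling, and cutoff date are placeholders, not my original program — compile with -lpq against a running server):

```c
/* Minimal sketch of the cursor loop: BEGIN, DECLARE, then FETCH in
 * batches of 1000, PQclear-ing each client-side result.  Assumes a
 * libpq build environment; "dbname=test" is a placeholder. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN WORK"));
    PQclear(PQexec(conn,
        "DECLARE mycurs CURSOR FOR "
        "SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out "
        "FROM samples WHERE mark >= 'epoch'::abstime"));

    long total = 0;
    for (;;) {
        PGresult *res = PQexec(conn, "FETCH FORWARD 1000 IN mycurs");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "query failed - FETCH: %s",
                    PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);
        total += n;
        /* ... process PQgetvalue(res, i, col) here ... */
        PQclear(res);           /* frees the client-side result */
        if (n == 0)             /* cursor exhausted */
            break;
    }

    PQclear(PQexec(conn, "END WORK"));
    PQfinish(conn);
    printf("%ld records read\n", total);
    return 0;
}
```

note that PQclear only frees memory on the client; the growth shown below is in the backend process.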

after reading 250000 records, top says:

  PID USERNAME PRI NICE   SIZE    RES STATE  TIME   WCPU    CPU COMMAND
13748 postgres 105    0 22724K 20588K RUN    3:05 86.14% 86.14% postgres

at some point, it runs out of memory (or something):

279001 records read (1177 rec/sec)
testprog: query failed - FETCH FORWARD 1000 IN samples;
testprog: (7) pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally before or while
processing the request.

i have had this problem with other processes on other tables.

the problem usually appears when i am selecting a huge number of records
and applying some conversion or function to the selected fields.

if i revert my code to the following (i.e. without the DATE_PART conversion):

DECLARE mycurs CURSOR FOR
SELECT mark, subnet, bytes_in, bytes_out
FROM samples
WHERE mark >= 'epoch or another date'::abstime;

it works fine.

--
[ Jim Mercer Reptilian Research jim@reptiles.org +1 416 410-5633 ]
[ The telephone, for those of you who have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail. ]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]

#2 Jim Mercer
jim@reptiles.org
In reply to: Jim Mercer (#1)
Re: [GENERAL] huge backend processes

> if i revert my code to (note: less the DATE_PART conversion):
>
> DECLARE mycurs CURSOR FOR
> SELECT mark, subnet, bytes_in, bytes_out
> FROM samples
> WHERE mark >= 'epoch or another date'::abstime;
>
> it works fine.

as a followup, if i use:

DECLARE mycurs CURSOR FOR
SELECT mark::int8, subnet, bytes_in, bytes_out
FROM samples
WHERE mark >= 'epoch or another date'::abstime;

it works fine as well.

--
[ Jim Mercer Reptilian Research jim@reptiles.org +1 416 410-5633 ]
[ The telephone, for those of you who have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail. ]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]

#3 Bruce Momjian
bruce@momjian.us
In reply to: Jim Mercer (#1)
Re: [GENERAL] huge backend processes

> maybe i'm doing something wrong here:
>
> CREATE TABLE samples
> (
> mark abstime,
> subnet inet,
> bytes_in float8,
> bytes_out float8
> );
> CREATE INDEX samples_mark ON samples (mark);
>
> --- fill it with lots and lots of data
>
> BEGIN WORK;
> DECLARE mycurs CURSOR FOR
> SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out

This is clearly a known problem. We need a per-tuple memory context:
memory palloc'ed by function calls is not freed after each tuple, so it
accumulates in the backend for the life of the query.

-- 
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026