Out of Memory during Insert
Dear Psqlers,
I encountered an out of memory error while executing an INSERT INTO
table1(v1,v2,v3) SELECT c1,c2,c3 FROM table2 WHERE .....
The result set of the SELECT query is around 30M records, and I got the
following message:
------- ERROR: out of memory
DETAIL: Failed on request of size 40.' in 'insert into -------------------
I found my postgresql process used up 3G of memory. My guess is that
PostgreSQL first fetches the entire result of the SELECT and then inserts it
into the table; when the process can't allocate any more memory for the
SELECT result, I get the OOM error. Can someone verify my guess? Or what
else could be the reason for the OOM?
Is there any other way to insert the same amount of data while avoiding this
OOM error?
Thanks ,
--
Yue
On 24 March 2010 10:57, yue peng <pengyuebupt@gmail.com> wrote:
Is there any other way to insert the same amount of data while avoiding this
OOM error?
I'd expect COPY to be the most effective way of bulk loading data into a
database. http://www.postgresql.org/docs/current/static/sql-copy.html
Or do inserts in smaller batches.
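For example, either approach might look like this (a sketch reusing the table and column names from the original post; the data file path and the integer batching key c0 are assumptions, not something from the thread):

```sql
-- Bulk load with COPY (fastest path; file location is an assumption):
COPY table1 (v1, v2, v3) FROM '/tmp/table2_extract.csv' WITH CSV;

-- Or slice the 30M-row INSERT ... SELECT into smaller batches, assuming
-- table2 has an integer key c0 you can range over:
INSERT INTO table1 (v1, v2, v3)
SELECT c1, c2, c3 FROM table2
WHERE c0 >= 0 AND c0 < 1000000;  -- repeat, shifting the range by 1M each time
```

Committing between batches keeps per-transaction memory (including any pending trigger events) bounded.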
Do you happen to have any triggers or constraints on the table?
Regards
Thom
yue peng <pengyuebupt@gmail.com> writes:
I encountered an out of memory error while executing an INSERT INTO
table1(v1,v2,v3) SELECT c1,c2,c3 FROM table2 WHERE .....
Most likely the OOM is because of growth of the pending-trigger-event
queue --- do you have any foreign key references in that table?
Possible solutions are to insert fewer rows at a time, or to drop the FK
constraint and then re-create it after you do the bulk insertion.
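A sketch of the drop-and-recreate approach (the constraint name, referenced table, and columns are assumptions for illustration; check the real ones with \d table1 in psql):

```sql
-- Drop the FK so the bulk insert queues no trigger events:
ALTER TABLE table1 DROP CONSTRAINT table1_v1_fkey;

INSERT INTO table1 (v1, v2, v3)
SELECT c1, c2, c3 FROM table2 WHERE ...;

-- Re-create it afterwards; this revalidates all rows in one pass:
ALTER TABLE table1 ADD CONSTRAINT table1_v1_fkey
  FOREIGN KEY (v1) REFERENCES parent_table (id);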
You might also try updating to a newer PG version ... 8.4 and later use
only 12 bytes per pending INSERT trigger event, not 40. That's not
necessarily going to be enough to fix this particular case, of course.
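To see the scale of that queue for the 30M-row insert in question, a quick back-of-the-envelope query:

```sql
-- Rough size of the pending-trigger-event queue for 30M inserted rows:
SELECT 30000000::bigint * 40 AS bytes_pre_8_4,   -- 1200000000, ~1.1 GiB
       30000000::bigint * 12 AS bytes_8_4_plus;  -- 360000000,  ~343 MiB
```

Even at 12 bytes per event, a single 30M-row statement still queues a few hundred megabytes, which is why batching or dropping the FK helps regardless of version.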
regards, tom lane