Query Failed, out of memory
I am using the netflix database:
Table "public.ratings"
Column | Type | Modifiers
--------+----------+-----------
item | integer |
client | integer |
day | smallint |
rating | smallint |
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql log file complained about a broken pipe.
Question, is this a bug in psql? It took over 4 hours of run time before
the crash.
dmesg:
Free pages: 13192kB (112kB HighMem)
Active:124664 inactive:124330 dirty:0 writeback:0 unstable:0 free:3298
slab:2188 mapped:248080 pagetables:1939
DMA free:12160kB min:16kB low:32kB high:48kB active:0kB inactive:0kB
present:16384kB pages_scanned:12602 all_unreclaimable? yes
protections[]: 0 0 0
Normal free:920kB min:928kB low:1856kB high:2784kB active:438608kB
inactive:437656kB present:901120kB pages_scanned:978318 all_unreclaimable?
yes
protections[]: 0 0 0
HighMem free:112kB min:128kB low:256kB high:384kB active:60176kB
inactive:59536kB present:131008kB pages_scanned:134673 all_unreclaimable?
yes
protections[]: 0 0 0
DMA: 6*4kB 3*8kB 3*16kB 3*32kB 3*64kB 2*128kB 1*256kB 0*512kB 1*1024kB
1*2048kB 2*4096kB = 12160kB
Normal: 0*4kB 1*8kB 7*16kB 1*32kB 0*64kB 0*128kB 1*256kB 1*512kB 0*1024kB
0*2048kB 0*4096kB = 920kB
HighMem: 0*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB
0*2048kB 0*4096kB = 112kB
Swap cache: add 548633, delete 548633, find 11883/13748, race 0+0
0 bounce buffer pages
Free swap: 0kB
262128 pages of RAM
32752 pages of HIGHMEM
3593 reserved pages
608 pages shared
0 pages swap cached
Out of Memory: Killed process 9143 (psql).
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql log file complained about a broken pipe.
Question, is this a bug in psql? It took over 4 hours of run time before
the crash.
Well, psql tried to store the entire resultset in memory at once, and
failed. I'm not sure how many records you were trying to display, but
try to estimate how much memory that would take to store...
What were you trying to do?
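(A back-of-envelope estimate, for the record: Mark mentions downthread that
the table holds about 100 million rows. The raw column data alone is 12
bytes per row, about 1.2 GB, and libpq keeps each value as text plus
per-field and per-tuple bookkeeping; at an assumed ~40 bytes per row that
is roughly 4 GB. The dmesg above shows 262128 pages of RAM, i.e. about
1 GB, with "Free swap: 0kB", so the kill is unsurprising.)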
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
On Oct 5, 2006, at 11:56 , Mark Woodward wrote:
I am using the netflix database:
Table "public.ratings"
Column | Type | Modifiers
--------+----------+-----------
item | integer |
client | integer |
day | smallint |
rating | smallint |
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql log file complained about a broken pipe.
Question, is this a bug in psql? It took over 4 hours of run time before
the crash.
The OOM killer kills a process based on some heuristics but it is
just a guess. For reliable behavior, configure Linux to 1) disable
the OOM killer 2) stop overcommitting memory. Then, you should be
able to get the results you require.
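(Concretely, on a 2.6-era kernel that means something like the sysctls
below; the values are illustrative, and strictly speaking
vm.overcommit_memory=2 makes allocations fail up front rather than
disabling the killer outright:)

# strict accounting: malloc() reports failure instead of the OOM
# killer picking a victim after the fact
sysctl -w vm.overcommit_memory=2
# optional: percentage of RAM (on top of swap) that may be committed
sysctl -w vm.overcommit_ratio=80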
-M
"Mark Woodward" <pgsql@mohawksoft.com> writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster.
Not too surprising.
Question, is this a bug in psql?
It's really a libpq design issue: since libpq provides random access to
a PGresult, and has no mechanism for handling failures after returning
the PGresult to the client, it has to slurp the whole query result into
memory first.
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
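(For concreteness, the COPY route is a one-liner; COPY streams rows to the
client instead of accumulating a PGresult, though as noted downthread it
cannot impose an ORDER BY before 8.2:)

psql -p 5435 -U pgsql -c "COPY ratings TO STDOUT" netflix > netflix.txt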
regards, tom lane
"Mark Woodward" <pgsql@mohawksoft.com> writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster.
Not too surprising.
Question, is this a bug in psql?
It's really a libpq design issue: since libpq provides random access to
a PGresult, and has no mechanism for handling failures after returning
the PGresult to the client, it has to slurp the whole query result into
memory first.
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
Sigh, I have to write a quick program to use a cursor. :-(
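(That quick program more or less reduces to the loop below; a minimal
sketch, the batch size is illustrative, and it can even be driven by hand
from a psql session:)

BEGIN;
DECLARE c NO SCROLL CURSOR FOR
    SELECT client, item, rating, day FROM ratings ORDER BY client;
FETCH FORWARD 100000 FROM c;   -- repeat this FETCH until it returns no rows
CLOSE c;
COMMIT;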
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql log file complained about a broken pipe.
Question, is this a bug in psql? It took over 4 hours of run time before
the crash.
Well, psql tried to store the entire resultset in memory at once, and
failed. I'm not sure how many records you were trying to display, but
try to estimate how much memory that would take to store...
What were you trying to do?
It's the stupid NetFlix prize thing; I need to dump out the data in a
specific order. This is just *one* such query I want to try. I guess, like
I told Tom, I have to write a small program that uses a cursor. :-(
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
Sigh, I have to write a quick program to use a cursor. :-(
Why don't you try the psql client from 8.2beta1 then? This way you don't
have to write the program yourself and you're helping out with beta
testing as well :-)
See FETCH_COUNT in
http://developer.postgresql.org/pgdocs/postgres/app-psql.html
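(With 8.2beta1 that would look something like the session below; the batch
size is illustrative:)

psql -p 5435 -U pgsql -t -A netflix
netflix=> \set FETCH_COUNT 10000
netflix=> \o netflix.txt
netflix=> select client, item, rating, day from ratings order by client;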
Bye,
Chris.
Tom Lane wrote:
"Mark Woodward" <pgsql@mohawksoft.com> writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
but COPY doesn't guarantee any order.
BTW, I just this morning discovered the hard way that our linux boxes
didn't have strict memory allocation turned on, and then went and set
it. I'd advise Mark to do the same, if he hasn't already.
cheers
andrew
Create table as select ... Order by ...
Copy to ...
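(Spelled out, with a made-up name for the intermediate table; the trick
relies on a freshly built table being scanned back in insertion order,
which holds in practice on releases of this vintage:)

CREATE TABLE ratings_sorted AS
    SELECT client, item, rating, day FROM ratings ORDER BY client;
COPY ratings_sorted TO STDOUT;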
- Luke
Msg is shrt cuz m on ma treo
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
Sigh, I have to write a quick program to use a cursor. :-(
Why don't you try the psql client from 8.2beta1 then? This way you don't
have to write the program yourself and you're helping out with beta
testing as well :-)
See FETCH_COUNT in
http://developer.postgresql.org/pgdocs/postgres/app-psql.html
Well, maybe next time; it only took about 10 minutes to write. It is a
simple program.
Tom Lane wrote:
"Mark Woodward" <pgsql@mohawksoft.com> writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
but COPY doesn't guarantee any order.
BTW, I just this morning discovered the hard way that our linux boxes
didn't have strict memory allocation turned on, and then went and set
it. I'd advise Mark to do the same, if he hasn't already.
Yea, I've been toying with the idea of that setting lately; I can't for
the life of me understand why it isn't the default behavior.
"MW" == Mark Woodward <pgsql@mohawksoft.com> writes:
MW> Yea, I've been toying with the idea of that setting lately, I
MW> can't for the life of me understand why it isn't the default
MW> behavior.
Lots of programs handle malloc() failures very badly. Including
daemons. Often it's better in practice to just keep going a little
longer and see if you can squeeze by -- and then perhaps kill the
memory hog, rather than some daemon getting a NULL and crashing.
/Benny
On Thu, 2006-10-05 at 12:52 -0400, Luke Lonergan wrote:
Create table as select ... Order by ...
Copy to ...
Or in 8.2, "COPY (SELECT ... ORDER BY ...) TO ..." (My, that's a neat feature.)
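(That is, roughly; STDOUT could equally be a server-side file path:)

COPY (SELECT client, item, rating, day FROM ratings ORDER BY client) TO STDOUT;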
-Neil
:-D
Is that in the release notes?
- Luke
Msg is shrt cuz m on ma treo
On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote:
Is that in the release notes?
Yes: "Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak)"
-Neil
On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote:
Is that in the release notes?
Yes: "Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak)"
I remember this discussion, it is cool when great features get added.
On Oct 5, 2006, at 11:15 AM, Mark Woodward wrote:
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day
from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql log file complained about a broken pipe.
Question, is this a bug in psql? It took over 4 hours of run time before
the crash.
Well, psql tried to store the entire resultset in memory at once, and
failed. I'm not sure how many records you were trying to display, but
try to estimate how much memory that would take to store...
What were you trying to do?
It's the stupid NetFlix prize thing; I need to dump out the data in a
specific order. This is just *one* such query I want to try. I
guess, like
I told Tom, I have to write a small program that uses a cursor. :-(
IIRC, 8.2 adds the ability to at least copy from a view, if not a raw
SELECT, so you should probably do that instead. Plus it'd be good to
bang on 8.2 with that data set. :) You'll also likely get better
performance.
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)