out of memory

Started by Mahavir Trivedi about 13 years ago, 4 messages
#1Mahavir Trivedi
mahavir.trivedi@gmail.com

Hi,

I have a SQL file (its size is 1 GB). When I execute it, I get the error
"String of 987098801 bytes is too long for encoding conversion".
Please suggest a solution.

I have XP 64-bit with 8 GB RAM, shared_buffers = 1GB, checkpoint_segments = 34.

with thanks
mahavir

#2Tatsuo Ishii
ishii@postgresql.org
In reply to: Mahavir Trivedi (#1)
Re: out of memory

> I have a SQL file (its size is 1 GB). When I execute it, I get the error
> "String of 987098801 bytes is too long for encoding conversion".
> Please suggest a solution.

You hit the upper limit on internal memory allocation in PostgreSQL. IMO,
there's no way to avoid the error unless you use a client encoding identical
to the backend encoding.
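
For illustration, a minimal libpq sketch of that workaround (hypothetical
client code, not something from this thread): read the server_encoding the
backend reports and set client_encoding to the same value, so no conversion is
applied to the query text. With psql, the same effect comes from the
PGCLIENTENCODING environment variable or the \encoding meta-command.

/*
 * Hypothetical sketch: make client_encoding match server_encoding so the
 * backend performs no encoding conversion on the (very large) SQL text.
 * Build against libpq, e.g.: cc load.c -lpq
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");		/* connection parameters from environment */

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return EXIT_FAILURE;
	}

	/* server_encoding is reported by the backend at connection start */
	const char *server_enc = PQparameterStatus(conn, "server_encoding");

	/* Force client_encoding to the same value; 0 means success */
	if (server_enc == NULL || PQsetClientEncoding(conn, server_enc) != 0)
	{
		fprintf(stderr, "could not set client encoding: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return EXIT_FAILURE;
	}

	printf("client_encoding now matches server_encoding (%s); "
		   "no conversion will be applied to query text\n", server_enc);

	/* ... send the contents of the SQL file here, preferably statement by
	 * statement rather than as one 1 GB string ... */

	PQfinish(conn);
	return EXIT_SUCCESS;
}

This only sidesteps the conversion; the allocation limit on a single query
string still applies, so feeding the file statement by statement (for example
with psql -f) may avoid the huge single conversion in the first place.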

Hackers:
The particular limit seems to have been chosen with TOAST in mind (from
include/utils/memutils.h):

* XXX This is deliberately chosen to correspond to the limiting size
* of varlena objects under TOAST. See VARSIZE_4B() and related macros
* in postgres.h. Many datatypes assume that any allocatable size can
* be represented in a varlena header.
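
For reference, the definitions behind that comment and the check being hit
look roughly like this (abridged from include/utils/memutils.h,
include/mb/pg_wchar.h and backend/utils/mb/mbutils.c): the conversion path
rejects any string of MaxAllocSize / MAX_CONVERSION_GROWTH bytes or more,
i.e. roughly 256 MB, which the 987098801-byte string easily exceeds.

/* include/utils/memutils.h: the allocation ceiling the comment refers to */
#define MaxAllocSize	((Size) 0x3fffffff)		/* 1 gigabyte - 1 */
#define AllocSizeIsValid(size)	((Size) (size) <= MaxAllocSize)

/* include/mb/pg_wchar.h: worst-case expansion assumed for any conversion */
#define MAX_CONVERSION_GROWTH  4

/* backend/utils/mb/mbutils.c, pg_do_encoding_conversion(), abridged:
 * refuse up front if the worst-case result could not be allocated */
if ((Size) len >= (MaxAllocSize / (Size) MAX_CONVERSION_GROWTH))
	ereport(ERROR,
			(errmsg("out of memory"),
			 errdetail("String of %d bytes is too long for encoding conversion.",
					   len)));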

IMO the SQL string size limit is a totally different matter from
TOAST. Shouldn't we have a separate limit for SQL strings?
(MAX_CONVERSION_GROWTH is a different story, of course.)
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

#3Robert Haas
robertmhaas@gmail.com
In reply to: Tatsuo Ishii (#2)
Re: [HACKERS] out of memory

On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:

>> I have a SQL file (its size is 1 GB). When I execute it, I get the error
>> "String of 987098801 bytes is too long for encoding conversion".
>> Please suggest a solution.
>
> You hit the upper limit on internal memory allocation in PostgreSQL. IMO,
> there's no way to avoid the error unless you use a client encoding identical
> to the backend encoding.

We recently had a customer who suffered a failure in pg_dump because
the quadruple-allocation required by COPY OUT for an encoding
conversion exceeded allocatable memory. I wonder whether it would be
possible to rearrange things so that we can do a "streaming" encoding
conversion. That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding on that amount of data, send the data to the
client, lather, rinse, repeat?
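
A standalone sketch of that chunked approach, using POSIX iconv(3) purely as a
stand-in for the backend's conversion machinery (the chunk size and encodings
are arbitrary, and none of this is PostgreSQL code): convert a fixed-size
window at a time, carry any trailing partial multibyte character into the next
round, and ship each converted piece before reading more input.

/*
 * Sketch only: streaming encoding conversion in fixed-size chunks, using
 * POSIX iconv(3) as a stand-in for PostgreSQL's conversion routines.
 * Reads from stdin, writes converted output to stdout.
 * Peak memory is about 5x one CHUNK instead of about 5x the whole input.
 */
#include <errno.h>
#include <iconv.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK  ((size_t) 50 * 1024 * 1024)	/* the "50MB or so" from the mail */
#define OUTSZ  (4 * CHUNK)					/* worst-case growth for one chunk */

static int
stream_convert(const char *from_enc, const char *to_enc)
{
	iconv_t		cd = iconv_open(to_enc, from_enc);	/* note: "to" comes first */
	char	   *inbuf = malloc(CHUNK);
	char	   *outbuf = malloc(OUTSZ);
	size_t		carried = 0;		/* bytes of a split character kept back */
	int			rc = 0;

	if (cd == (iconv_t) -1 || inbuf == NULL || outbuf == NULL)
	{
		perror("setup");
		return -1;
	}

	for (;;)
	{
		size_t		nread = fread(inbuf + carried, 1, CHUNK - carried, stdin);
		size_t		inleft = carried + nread;
		char	   *inptr = inbuf;
		char	   *outptr = outbuf;
		size_t		outleft = OUTSZ;

		if (inleft == 0)
			break;				/* all input consumed and converted */

		/* Convert as much of this chunk as forms complete characters.
		 * EINVAL just means the chunk ends mid-character; anything else
		 * (EILSEQ, E2BIG) is a real failure. */
		if (iconv(cd, &inptr, &inleft, &outptr, &outleft) == (size_t) -1 &&
			errno != EINVAL)
		{
			perror("iconv");
			rc = -1;
			break;
		}

		/* Ship the converted piece immediately ("send to the client"). */
		fwrite(outbuf, 1, OUTSZ - outleft, stdout);

		/* Carry any unconsumed tail (a split character) into the next round. */
		memmove(inbuf, inptr, inleft);
		carried = inleft;

		if (nread == 0)
			break;				/* EOF while holding an incomplete character */
	}

	free(inbuf);
	free(outbuf);
	iconv_close(cd);
	return rc;
}

int
main(void)
{
	/* Encodings chosen arbitrarily for the example. */
	return stream_convert("LATIN1", "UTF-8") == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}

Peak memory per round is then bounded by the chunk size instead of by the full
datum; the awkward part in the backend would be picking chunk boundaries that
fall on character boundaries of the source encoding, which iconv handles here
by returning EINVAL for a trailing partial character.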

Your recent work to increase the maximum possible size of large
objects (for which I thank you) seems like it could make these sorts
of issues more common. As objects get larger, I don't think we can go
on assuming that it's OK for peak memory utilization to keep hitting
5x or more.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#4John R Pierce
pierce@hogranch.com
In reply to: Robert Haas (#3)
Re: [PERFORM] out of memory

On 11/05/12 9:27 AM, Robert Haas wrote:

> That is, if we have a large datum that we're trying to
> send back to the client, could we perhaps chop off the first 50MB or
> so, do the encoding on that amount of data, send the data to the
> client, lather, rinse, repeat?

I'd suggest work_mem-sized chunks for this?

--
john r pierce N 37, W 122
santa cruz ca mid-left coast