pg_dump 2GB limit?

Started by Laurette Cisneros almost 24 years ago. 13 messages.
#1 Laurette Cisneros
laurette@nextbus.com

The archives search is not working on postgresql.org so I need to ask this
question...

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true? Why would there be this limit in
pg_dump? Is it scheduled to be fixed?

Thanks,

--
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?

#2 Doug McNaught
doug@wireboard.com
In reply to: Laurette Cisneros (#1)
Re: pg_dump 2GB limit?

Laurette Cisneros <laurette@nextbus.com> writes:

The archives search is not working on postgresql.org so I need to ask this
question...

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true? Why would there be this limit in
pg_dump? Is it scheduled to be fixed?

This means one of two things:

1) Your ulimits are set too low, or
2) Your pg_dump wasn't compiled against a C library with large file
support (greater than 2GB).

Is this on Linux?
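
For example, on Linux you could rule out the first case from the shell
that will run pg_dump (the dump file name below is just a placeholder):

    # per-process file size limit; "unlimited" is what you want
    ulimit -f

    # see how big the dump got before it was cut off -- stopping right
    # around 2GB points at an OS/libc limit rather than pg_dump itself
    ls -l mydb.dump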

-Doug
--
Doug McNaught Wireboard Industries http://www.wireboard.com/

Custom software development, systems and network consulting.
Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...

#3 Noname
dru-sql@redwoodsoft.com
In reply to: Laurette Cisneros (#1)
Re: pg_dump 2GB limit?

Are you on linux (most likely)? If so, then your pgsql was compiled
without large file support.

Dru Nelson
San Carlos, California

#4 Peter Eisentraut
peter_e@gmx.net
In reply to: Laurette Cisneros (#1)
Re: pg_dump 2GB limit?

Laurette Cisneros writes:

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true?

No, it's your operating system.

http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE

--
Peter Eisentraut peter_e@gmx.net

#5 Laurette Cisneros
laurette@nextbus.com
In reply to: Doug McNaught (#2)
Re: pg_dump 2GB limit?

Hi,

I'm on Red Hat. Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

What do I need to do to "turn on large file support" in the compile?

Thanks,

L.
On 28 Mar 2002, Doug McNaught wrote:

Laurette Cisneros <laurette@nextbus.com> writes:

The archives search is not working on postgresql.org so I need to ask this
question...

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true? Why would there be this limit in
pg_dump? Is it scheduled to be fixed?

This means one of two things:

1) Your ulimits are set too low, or
2) Your pg_dump wasn't compiled against a C library with large file
support (greater than 2GB).

Is this on Linux?

-Doug

--
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?

#6 Doug McNaught
doug@wireboard.com
In reply to: Laurette Cisneros (#5)
Re: pg_dump 2GB limit?

Laurette Cisneros <laurette@nextbus.com> writes:

Hi,

I'm on Red Hat. Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

That's an old and buggy kernel, BTW--you should install the errata
upgrades.

What do I need to do to "turn on large file support" in the compile?

Make sure you are running the latest kernel and libs, and AFAIK
'configure' should set it up for you automatically.
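
If you want to double-check the resulting binary, one rough test (assuming
GNU binutils is installed; adjust the path to wherever your pg_dump lives)
is to look for the 64-bit file calls in its dynamic symbols:

    # an LFS-enabled build on glibc will normally reference the *64 variants
    objdump -T /usr/local/pgsql/bin/pg_dump | grep -E 'open64|fseeko64'

No matches would suggest the build went through without large file support.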

-Doug
--
Doug McNaught Wireboard Industries http://www.wireboard.com/

Custom software development, systems and network consulting.
Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...

#7 Noname
mmc@maruska.dyndns.org
In reply to: Laurette Cisneros (#5)
Re: pg_dump 2GB limit?

Laurette Cisneros <laurette@nextbus.com> writes:

Hi,

I'm on Red Hat. Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

What do I need to do to "turn on large file support" in the compile?

IIRC, the old ReiserFS format (3.5?) has this limit too. The solution is
to reformat with the newer version (kernel & reiserfsprogs). (You can test with dd.)
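
For example, a quick test is to try writing a file just past 2GB on the
filesystem in question (the path is just an example; it needs about 2.1GB free):

    # this stops with "File size limit exceeded" if the filesystem or
    # ulimit caps files at 2GB
    dd if=/dev/zero of=/var/tmp/bigfile bs=1024k count=2100
    ls -l /var/tmp/bigfile
    rm /var/tmp/bigfile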

#8 Laurette Cisneros
laurette@nextbus.com
In reply to: Doug McNaught (#6)
Re: pg_dump 2GB limit?

Oops, I sent the wrong uname; here's the one from the machine we compiled on:
Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown

and has: libc-2.2.2.so

We use ./configure

Still a problem?

We do compress (-Fc) right now, but are working on a backup scheme that
requires an uncompressed dump.

Thanks for the help!

L.

On 28 Mar 2002, Doug McNaught wrote:

Laurette Cisneros <laurette@nextbus.com> writes:

Hi,

I'm on Red Hat. Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

That's an old and buggy kernel, BTW--you should install the errata
upgrades.

What do I need to do to "turn on large file support" in the compile?

Make sure you are running the latest kernel and libs, and AFAIK
'configure' should set it up for you automatically.

-Doug

--
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?

#9 Doug McNaught
doug@wireboard.com
In reply to: Laurette Cisneros (#8)
Re: pg_dump 2GB limit?

Laurette Cisneros <laurette@nextbus.com> writes:

Oops, I sent the wrong uname; here's the one from the machine we compiled on:
Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown

and has: libc-2.2.2.so

We use ./configure

Still a problem?

Might be. Make sure you have an up-to-date kernel and libs on the
compile machine and the one you're running on. Make sure your
filesystem supports files greater than 2GB.

Also, if you are using shell redirection to create the output file,
it's possible the shell isn't using the right open() flags.
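
If the redirection does turn out to be the problem, a simple workaround is
to let pg_dump open the output file itself (names here are just examples):

    # instead of:  pg_dump mydb > mydb.dump
    pg_dump -f mydb.dump mydb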

-Doug
--
Doug McNaught Wireboard Industries http://www.wireboard.com/

Custom software development, systems and network consulting.
Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...

#10 Noname
teg@redhat.com
In reply to: Laurette Cisneros (#5)
Re: pg_dump 2GB limit?

Laurette Cisneros <laurette@nextbus.com> writes:

Hi,

I'm on Red Hat. Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

You should really upgrade (kernel and the rest), but this kernel
supports large files.

--
Trond Eivind Glomsrød
Red Hat, Inc.

#11 Noname
teg@redhat.com
In reply to: Peter Eisentraut (#4)
Re: pg_dump 2GB limit?

Peter Eisentraut <peter_e@gmx.net> writes:

Laurette Cisneros writes:

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true?

No, it's your operating system.

Red Hat Linux 7.x, which he seems to be using, supports this.
--
Trond Eivind Glomsrød
Red Hat, Inc.

#12 Christopher Kings-Lynne
chriskl@familyhealth.com.au
In reply to: Doug McNaught (#2)
Re: pg_dump 2GB limit?

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true? Why would there be this limit in
pg_dump? Is it scheduled to be fixed?

Try piping the output of pg_dump through bzip2 before writing it to disk.
Or else, I think pg_dump has a -Z option or similar for turning
on compression.
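
Something along these lines should work, so no single uncompressed file
ever hits the disk (database and file names made up):

    pg_dump mydb | bzip2 > mydb.dump.bz2

    # and to restore:
    bunzip2 -c mydb.dump.bz2 | psql mydb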

Chris

#13 Jan Wieck
janwieck@yahoo.com
In reply to: Christopher Kings-Lynne (#12)
Re: pg_dump 2GB limit?

Christopher Kings-Lynne wrote:

File size limit exceeded (core dumped)

We suspect pg_dump. Is this true? Why would there be this limit in
pg_dump? Is it scheduled to be fixed?

Try piping the output of pg_dump through bzip2 before writing it to disk.
Or else, I think pg_dump has a -Z option or similar for turning
on compression.

And if that isn't enough, you can pipe the output (compressed
or not) into split(1).
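
For instance, in 1GB chunks (names made up again):

    pg_dump mydb | split -b 1000m - mydb.dump.

    # to restore, reassemble the pieces:
    cat mydb.dump.* | psql mydb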

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck@Yahoo.com #
