pg_dump 2 gig file size limit on ext3

Started by Jeremiah Jahn over 23 years ago | 4 messages | general
#1Jeremiah Jahn
jeremiah@cs.earlham.edu

I have the strangest thing happening: I can't finish a pg_dump of my db.
It says that I have reached the maximum file size at 2 GB. I'm running
this on a system with Red Hat 8.0, because the problem existed on 7.3 as
well, on an ext3 RAID array. The size of the db is +/- 4 GB. I'm using
7.2.2; I tried 7.2.1 earlier today and got the same problem. I don't
think I can really split the data across different tables, since I use
large objects. Anyone out there have any ideas as to why this is happening?
I took the 2 GB dump and copied it onto itself just to see what would
happen, and the resulting 4.2 GB file was fine, so this really does seem
to be a problem with pg_dump. With -Ft, pg_dump just crashes with some
sort of "failed to write: tried to write 221 of 256" error, or something
like that; the resulting file is about 1.2 GB, though. -Fc stops at the
2 GB limit. Do I need to recompile this with some 64-bit setting or
something? I'm currently using the default Red Hat build.
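
For reference, this is roughly the rebuild I have in mind. A sketch only,
assuming a 7.2.x source tree and a glibc with large file support; the
flags are the standard glibc LFS feature macros, not a documented
PostgreSQL build option, so treat it as untested:

# rebuild with 64-bit file offsets so stdio can write and seek past 2 GB
cd postgresql-7.2.2
CFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE" ./configure
make && make install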

thanx for any ideas,
-jj-
--
I hope you're not pretending to be evil while secretly being good.
That would be dishonest.

#2Tommi Maekitalo
t.maekitalo@epgmbh.de
In reply to: Jeremiah Jahn (#1)
Re: pg_dump 2 gig file size limit on ext3

Hi,

How do you use pg_dump? Versions < 7.3 might not have large file support,
but you can use 'pg_dump db >dump.out'. pg_dump then writes to stdout and
does not have to deal with the file itself; that is done by your shell. If
your shell has trouble with large files, you should change your shell or
use split.
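
For example (a sketch; 'db' stands in for your database name, and the
ulimit check is just to confirm the shell itself imposes no cap):

# the shell, not pg_dump, opens the output file here
pg_dump db > dump.out

# should print "unlimited" if the shell sets no file size limit
ulimit -f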

Tommi

--
Dr. Eckhardt + Partner GmbH
http://www.epgmbh.de

#3Shridhar Daithankar
shridhar_daithankar@persistent.co.in
In reply to: Tommi Maekitalo (#2)
Re: pg_dump 2 gig file size limit on ext3

As pointed out in the pg_dump documentation, zipping the dump on the fly
is another possibility, if you have the CPU power.
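
Something like this (a sketch along the lines of the documentation's
examples; substitute your own database name for 'db'):

# compress on the fly; the file on disk never holds the raw 4 GB dump
pg_dump db | gzip > dump.out.gz

# restore by decompressing back into psql
gunzip -c dump.out.gz | psql db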

Bye
Shridhar

--
critic, n.: A person who boasts himself hard to please because nobody tries to
please him. -- Ambrose Bierce, "The Devil's Dictionary"

#4Chris Gamache
cgg007@yahoo.com
In reply to: Shridhar Daithankar (#3)
Re: pg_dump 2 gig file size limit on ext3

An even better idea would be to:

postgres@db:~# pg_dump db | /usr/bin/split -b 1024m - yourprefix_

That would split your dump into 1 GB pieces. Easy to manage.

To get them back in:

postgres@db:~# cat yourprefix_aa yourprefix_ab yourprefix_ac | psql -f -

This works for a compressed dump too, provided gzip runs before split
(split writes to files, not to stdout, so it has to come last in the
pipeline):

postgres@db:~# pg_dump db | gzip | split -b 1024m - yourprefix_
postgres@db:~# cat yourprefix_aa yourprefix_ab yourprefix_ac | gunzip | psql -f -

The pieces are not individually valid gzip files, but concatenating them
reconstructs the single compressed stream.

HTH

CG