Does pg_dumpall do BLOBs too?

Started by Frank Joerdens · 8 messages · general
#1Frank Joerdens
frank@joerdens.de

The man page for pg_dumpall does not mention anything about BLOBs,
whereas the man page for pg_dump does. Does that mean you can't dump out
everything at once if you have databases with BLOBs on your server? (I
need to dump and reload everything cuz my new app wants a NAMEDATALEN of
64).

Regards, Frank

#2Daniel Lundin
daniel@helena-daniel.se
In reply to: Frank Joerdens (#1)
Re: Does pg_dumpall do BLOBs too?

On Fri, Jan 11, 2002 at 07:38:01PM +0100, Frank Joerdens wrote:

The man page for pg_dumpall does not mention anything about BLOBs,
whereas the man page for pg_dump does. Does that mean you can't dump out
everything at once if you have databases with BLOBs on your server? (I
need to dump and reload everything cuz my new app wants a NAMEDATALEN of
64).

As I understand it, pg_dumpall calls pg_dump, so if pg_dump backs up BLOBs,
so would pg_dumpall.

The safest way of finding out whether your data is backed up safely is to
try a restore after a backup. Never trust any backup scheme without testing
it.
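Daniel's advice can be turned into a concrete check. A minimal sketch, assuming a hypothetical database `mydb` and a scratch database for verification; the commands are written to a script here rather than executed, since they need a running server:

```shell
# Hypothetical restore test: dump a database with BLOBs in tar format,
# then prove the backup restores end to end into a scratch database.
cat <<'EOF' > verify_backup.sh
#!/bin/sh
set -e
pg_dump -Ft -b mydb > mydb.tar      # tar-format dump, BLOBs included
createdb mydb_verify                # scratch database for the test
pg_restore -d mydb_verify mydb.tar  # the restore itself is the test
EOF
chmod +x verify_backup.sh
```

If pg_restore finishes without error and the application works against the scratch database, the backup scheme has actually been tried, not just assumed.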


---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org

#3Frank Joerdens
frank@joerdens.de
In reply to: Daniel Lundin (#2)
Re: Does pg_dumpall do BLOBs too?

On Sat, Jan 12, 2002 at 11:37:36AM +0100, Daniel Lundin wrote:

On Fri, Jan 11, 2002 at 07:38:01PM +0100, Frank Joerdens wrote:

The man page for pg_dumpall does not mention anything about BLOBs,
whereas the man page for pg_dump does. Does that mean you can't dump out
everything at once if you have databases with BLOBs on your server? (I
need to dump and reload everything cuz my new app wants a NAMEDATALEN of
64).

As I understand it, pg_dumpall calls pg_dump, so if pg_dump backs up BLOBs,
so would pg_dumpall.

You're right: pg_dumpall is just a shell script that calls pg_dump.
Looks like it wouldn't be too hard to hack, which is apparently what
you'd have to do to get what I want:

--------------------------- begin ---------------------------
frank@limedes:~ > pg_dumpall -Ft -b > everything_13_jan_02.out.tar
pg_dump: BLOB output is not supported for plain text dump files. Use a
different output format.
pg_dump failed on template1, exiting
--------------------------- end ---------------------------

The wrapper script doesn't seem to pass the -Ft option through to pg_dump,
so it doesn't support output formats other than plain text, which means
you can't dump BLOBs.

I might try to fix this . . .
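A rough sketch of the kind of hack being proposed here: enumerate the databases and run a separate tar-format pg_dump for each. This is not pg_dumpall's actual logic (it also dumps users, groups, and other globals, which this skips), and the psql query and file names are assumptions; the script is written to a file rather than run, since it needs a live server:

```shell
# Sketch: per-database tar-format dumps with BLOBs, one file per database.
cat <<'EOF' > dumpall_blobs.sh
#!/bin/sh
set -e
outdir=pg_dumpall_files_$(date +%Y%m%d)   # dated output directory
mkdir -p "$outdir"
# Enumerate databases the way a wrapper might (template0 is not dumpable).
for db in $(psql -At -d template1 \
    -c "SELECT datname FROM pg_database WHERE datname <> 'template0'"); do
    pg_dump -Ft -b "$db" > "$outdir/$db.tar"
done
EOF
chmod +x dumpall_blobs.sh
```

Each resulting .tar file would then be restored with pg_restore rather than fed to psql.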

Regards, Frank

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Frank Joerdens (#3)
Re: Does pg_dumpall do BLOBs too?

Frank Joerdens <frank@joerdens.de> writes:

The wrapper script doesn't seem to pass the -Ft option through to pg_dump,
so it doesn't support output formats other than plain text, which means
you can't dump BLOBs.

The trouble is that pg_dumpall wants to concatenate the output from
several pg_dumps, intermixed with commands issued by itself. Easy to do
with text outputs, not so easy with the non-text formats.

I could see making pg_dumpall emit a script that has the global setup
commands plus pg_restore calls referencing separately-created data
files, one per database. Trouble with that is that the data files
couldn't be sent to pg_dumpall's stdout, which means that pg_dumpall
would have to include options for deciding where to put them. And what
about the case where you have more than 4GB of data and a system that
doesn't do large files? Presently it's easy to pipe pg_dumpall to
"split", and cat the segments together to feed to psql when reloading.
But that method won't work under this scenario.
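The split/cat workflow Tom refers to, demonstrated on a stand-in file (a real run would pipe pg_dumpall output instead of the seq-generated file, and feed the reassembled stream to psql):

```shell
# Break a large plain-text dump into segments, then reassemble it.
seq 1 200000 > dump.sql                  # stand-in for pg_dumpall output
split -b 512k dump.sql dump.sql.part-    # fixed-size segments, safe on
                                         # systems without large-file support
cat dump.sql.part-* > reassembled.sql    # glob sort order restores sequence
cmp -s dump.sql reassembled.sql && echo "segments reassemble cleanly"
```

On reload, `cat dump.sql.part-* | psql template1` does the same concatenation straight into the server, which is exactly the property a non-text, per-database format would lose.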

regards, tom lane

#5Frank Joerdens
frank@joerdens.de
In reply to: Tom Lane (#4)
Re: Does pg_dumpall do BLOBs too?

On Sat, Jan 12, 2002 at 11:28:28AM -0500, Tom Lane wrote:

Frank Joerdens <frank@joerdens.de> writes:

The wrapper script doesn't seem to pass the -Ft option through to pg_dump,
so it doesn't support output formats other than plain text, which means
you can't dump BLOBs.

The trouble is that pg_dumpall wants to concatenate the output from
several pg_dumps, intermixed with commands issued by itself. Easy to do
with text outputs, not so easy with the non-text formats.

True.

I could see making pg_dumpall emit a script that has the global setup
commands plus pg_restore calls referencing separately-created data
files, one per database. Trouble with that is that the data files
couldn't be sent to pg_dumpall's stdout, which means that pg_dumpall
would have to include options for deciding where to put them.

As a default, create a subdir (maybe call it pg_dumpall_files[current
system date]) in the current directory where you'd also put the script
itself?

And what about the case where you have more than 4GB of data and a system
that doesn't do large files? Presently it's easy to pipe pg_dumpall to
"split", and cat the segments together to feed to psql when reloading.
But that method won't work under this scenario.

On starting the script, give a message saying something along the lines
of 'If you have a lot of data, you might end up with files larger than 2
GB which may not work on some systems. In that case you should use the
plaintext option and dump and restore any databases with BLOBs
separately.' Then offer to continue or to quit?

What I just did to dump everything out and reload it was to run pg_dumpall,
plus a separate pg_dump for the one database (out of 26) that contained
BLOBs; then I reloaded everything from the pg_dumpall script, dropped the
BLOB database, recreated it, and restored it from its separate dump. I
suppose that didn't kill me, but an integrated solution would be nicer and
cleaner. I've got a sysadmin intern here next month whom I might put to
the task.
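Frank's manual procedure, written out as a sketch. The database name `blobdb` and all file names are hypothetical, and the commands are saved to a script rather than run, since the steps need a live server on both sides of the rebuild:

```shell
# Sketch of the dump-everything-plus-one-BLOB-database workaround.
cat <<'EOF' > reload_with_blobs.sh
#!/bin/sh
set -e
pg_dumpall > everything.sql          # all databases, no BLOBs
pg_dump -Ft -b blobdb > blobdb.tar   # the one BLOB database, separately
# (rebuild the cluster with the new NAMEDATALEN here, then reload)
psql -f everything.sql template1     # reload everything, BLOB data missing
dropdb blobdb                        # drop the BLOB-less copy
createdb blobdb                      # recreate it empty
pg_restore -d blobdb blobdb.tar      # restore it from its separate dump
EOF
chmod +x reload_with_blobs.sh
```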

Regards, Frank

#6Colm McCartan
colmm@owl.co.uk
In reply to: Frank Joerdens (#1)
OT: anon CVS hassles

Hello all,

When I try to do an anon cvs co I get the following:

cvs server: Updating pgsql/contrib/pgcrypto/expected
cvs server: failed to create lock directory for
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
(/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):
Permission denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
cvs [server aborted]: read lock failed - giving up

I can see this in the archives a lot and it seems to reappear now and
then - what gives? Is this a permission thing on new directories?

Also, I really only want to check out the jdbc driver but find that this
is impossible, the entire codebase has to be co'd. Is there a good
reason for this? Having the jdbc source in a separate tarball would
allow people to debug into the source from java. Perhaps a cvs module
entry for the sql tree?

Anyway, I believe the above error needs to be fixed on the server.

Cheers,
colm

#7Colm McCartan
colmm@owl.co.uk
In reply to: Frank Joerdens (#1)
Re: [GENERAL] OT: anon CVS hassles

Colm McCartan wrote:

Hello all,

When I try to do an anon cvs co I get the following:

cvs server: Updating pgsql/contrib/pgcrypto/expected
cvs server: failed to create lock directory for
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
(/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):
Permission denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
cvs [server aborted]: read lock failed - giving up

Thanks to whoever fixed this - I posted to the general list initially
but I imagine whoever fixed it will see this list as well.

Also, I really only want to check out the jdbc driver but find that this
is impossible, the entire codebase has to be co'd. Is there a good
reason for this? Having the jdbc source in a separate tarball would
allow people to debug into the source from java. Perhaps a cvs module
entry for the sql tree?

Any comments on this? I notice now that there is discussion on the list
of abandoning Ant altogether. FWIW, I think this would discourage Java
people from working on the driver. I personally would much rather be
able to download the JDBC stuff independently even if I will suffer from
the fact that I don't know enough about the server codebase. Also,
what are windoze people supposed to do with a make and configure-based
system? Part of Ant's appeal, surely, is that it is cross-platform.

Strictly IMHO.

Cheers,
colm

#8Bruce Momjian
bruce@momjian.us
In reply to: Colm McCartan (#7)
Re: [GENERAL] OT: anon CVS hassles

cvs server: Updating pgsql/contrib/pgcrypto/expected
cvs server: failed to create lock directory for
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
(/projects/cvsroot/pgsql/contrib/pgcrypto/expected/#cvs.lock):
Permission denied
cvs server: failed to obtain dir lock in repository
`/projects/cvsroot/pgsql/contrib/pgcrypto/expected'
cvs [server aborted]: read lock failed - giving up

Thanks to whoever fixed this - I posted to the general list initially
but I imagine whoever fixed it will see this list as well.

Marc fixed it.

Also, I really only want to check out the jdbc driver but find that this
is impossible, the entire codebase has to be co'd. Is there a good
reason for this? Having the jdbc source in a separate tarball would
allow people to debug into the source from java. Perhaps a cvs module
entry for the sql tree?

Any comments on this? I notice now that there is discussion on the list
of abandoning Ant altogether. FWIW, I think this would discourage Java
people from working on the driver. I personally would much rather be
able to download the JDBC stuff independently even if I will suffer from
the fact that I don't know enough about the server codebase. Also,
what are windoze people supposed to do with a make and configure-based
system? Part of Ant's appeal, surely, is that it is cross-platform.

I don't know of any particular way to do that except maybe checking out
pgsql/src/interfaces/jdbc instead of just 'pgsql'. I just tried it and
it worked fine, checking out only jdbc.
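What Bruce describes, as anonymous-CVS commands. The repository path `/projects/cvsroot` is taken from the error messages earlier in the thread; the pserver host name is an assumption about the anoncvs setup of the time, so the commands are saved to a script rather than run:

```shell
# Check out only the JDBC subtree instead of the whole 'pgsql' module.
cat <<'EOF' > co_jdbc.sh
#!/bin/sh
CVSROOT=:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot
cvs -d "$CVSROOT" login
cvs -d "$CVSROOT" checkout pgsql/src/interfaces/jdbc
EOF
chmod +x co_jdbc.sh
```

CVS checks out whatever directory path you name under the module, so no modules-file entry is strictly required for this.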

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026