directory archive format for pg_dump
This is the first of two patches for parallel pg_dump. In particular, this
patch adds a new pg_dump archive type that can save the dump to a directory,
with each table and each blob stored in a file of its own, so that several
processes can write to different files in parallel.
Since the compression is currently all done in the custom format backup code,
the first thing I did was refactor the compression functions into a separate
file. While at it, I added support for liblzf compression.
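To sketch the resulting API (illustration only, not part of the patch;
_myWriteFunc is a made-up stand-in for a format's write callback, the other
functions are the ones declared in compress_io.h):

    static size_t
    _myWriteFunc(ArchiveHandle *AH, const void *buf, size_t len)
    {
        /* the format decides where the (compressed) bytes end up */
        return fwrite(buf, 1, len, AH->FH);
    }

    static void
    dumpOneStream(ArchiveHandle *AH, const void *data, size_t dLen)
    {
        /* picks zlib, lzf or no compression based on AH->compression */
        CompressorState *cs = AllocateCompressorState(AH);

        InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
        WriteDataToArchive(AH, cs, _myWriteFunc, data, dLen);
        FlushCompressorState(AH, cs, _myWriteFunc);
        FreeCompressorState(cs);
    }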
Writing the backup to a directory has the disadvantage that your backup now
consists of a bunch of files, and you need to make sure not to lose files or
mix files from different backup sets. Therefore, I have added a -k switch that
checks whether a directory backup set is complete. To do this, every backup
gets a distinct id (basically a random md5sum) which is copied into every file
(both the TOC and the data files). The TOC also records the size of each data
file and can detect whether a file has been truncated for some reason.
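To illustrate, a directory dump ends up looking roughly like this (the TOC
name and the .dat suffix are the ones used in the patch, the rest of the
layout shown here is only illustrative):

    backup_dir/
        TOC        <- table of contents, records the backup id and the
                      size of every data file
        *.dat      <- one data file per table, each carrying a header
                      that repeats the backup id
        blobs/     <- one file per large object, plus their own TOC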
Regarding lzf compression, the last discussion was here:
http://archives.postgresql.org/pgsql-hackers/2010-04/msg00442.php
I have included lzf so that there are actually multiple compression algorithms
to build the framework around, and so that people can just compile and run it
and see what they get. In my tests, when I run a backup with lzf compression,
the postgres backend uses 100% of one CPU and pg_dump uses 15% of another CPU.
Running with zlib, however, gives me 100% pg_dump (zlib) and 70% postgres.
Specifying the fastest zlib compression level (1) gives me 50% pg_dump and
100% postgres. The lzf compression can be taken out of the code in about two
minutes, it is all in #ifdefs, so please see lzf just as an optional addition
to the directory patch rather than as a main feature.
I am also submitting a WIP patch that shows the parallel version of pg_dump;
it is a patch on top of this one. It is not completely ready yet, but I am
releasing it as a WIP patch so that you can see the overall picture and can
already play with it. And hopefully I can get some feedback on whether I am
going in the right direction.
There is a small shell script included (test.sh) listing some of the
commands, to give people a quick overview of how to call it.
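The calls look roughly like this (illustrative only, with made-up paths and
database name, and assuming -Fd selects the directory format; test.sh has
the actual commands):

    pg_dump -Fd -f /path/to/backup_dir mydb    # dump into a directory
    pg_restore -k /path/to/backup_dir          # check the set for completeness
    pg_restore -d mydb /path/to/backup_dir     # restore from the directory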
Joachim
Attachment: pg_dump-directory.diff (text/x-patch)
diff --git a/configure.in b/configure.in
index 4bfa459..c3180cf 100644
*** a/configure.in
--- b/configure.in
*************** PGAC_ARG_BOOL(with, zlib, yes,
*** 755,760 ****
--- 755,766 ----
AC_SUBST(with_zlib)
#
+ # libLZF
+ #
+ PGAC_ARG_BOOL(with, lzf, no, [use lzf compression library])
+ AC_SUBST(with_lzf)
+
+ #
# Elf
#
*************** failure. It is possible the compiler is
*** 897,902 ****
--- 903,916 ----
Use --without-zlib to disable zlib support.])])
fi
+ if test "$with_lzf" = yes; then
+ AC_CHECK_LIB(lzf, lzf_compress, [],
+ [AC_MSG_ERROR([lzf library not found
+ If you have lzf already installed, see config.log for details on the
+ failure. It is possible the compiler isn't looking in the proper directory.
+ Use --without-lzf to disable lzf support.])])
+ fi
+
if test "$enable_spinlocks" = yes; then
AC_DEFINE(HAVE_SPINLOCKS, 1, [Define to 1 if you have spinlocks.])
else
diff --git a/src/bin/pg_dump/.gitignore b/src/bin/pg_dump/.gitignore
index c2c8677..c28ddea 100644
*** a/src/bin/pg_dump/.gitignore
--- b/src/bin/pg_dump/.gitignore
***************
*** 1,4 ****
--- 1,5 ----
/kwlookup.c
+ /md5.c
/pg_dump
/pg_dumpall
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 0367466..d012de8 100644
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
*************** override CPPFLAGS := -I$(libpq_srcdir) $
*** 20,32 ****
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
kwlookup.c: % : $(top_srcdir)/src/backend/parser/%
rm -f $@ && $(LN_S) $< .
all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) $(KEYWRDOBJS) | submake-libpq submake-libpgport
--- 20,35 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o pg_backup_directory.o compress_io.o md5.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
kwlookup.c: % : $(top_srcdir)/src/backend/parser/%
rm -f $@ && $(LN_S) $< .
+ md5.c: % : $(top_srcdir)/src/backend/libpq/%
+ rm -f $@ && $(LN_S) $< .
+
all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) $(KEYWRDOBJS) | submake-libpq submake-libpgport
*************** uninstall:
*** 50,53 ****
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean maintainer-clean:
! rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o kwlookup.c $(KEYWRDOBJS)
--- 53,56 ----
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean maintainer-clean:
! rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o md5.c kwlookup.c $(KEYWRDOBJS)
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index ...c1f19a5 .
*** a/src/bin/pg_dump/compress_io.c
--- b/src/bin/pg_dump/compress_io.c
***************
*** 0 ****
--- 1,630 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.c
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "compress_io.h"
+
+ static const char *modulename = gettext_noop("compress_io");
+
+ static void _DoInflate(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ static void _DoDeflate(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF);
+
+ #ifdef HAVE_LIBZ
+ static void _DoInflateZlib(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ static void _DoDeflateZlib(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF);
+ #endif
+
+ #ifdef HAVE_LIBLZF
+ static void _DoInflateLZF(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ static void _DoDeflateLZF(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF);
+ static void DoDeflateBufferLZF(ArchiveHandle *AH, char *in, bool isBase, size_t dLen, char *outBase, WriteFunc writeF);
+ #endif
+
+ /*
+ * If a compression library is in use, then start it up. This is called from
+ * StartData & StartBlob. The buffers are set up in the Init routine.
+ */
+ void
+ InitCompressorState(ArchiveHandle *AH, CompressorState *cs, CompressorAction action)
+ {
+ if (AH->compression == 0 && cs->comprAlg != COMPR_ALG_NONE)
+ AH->compression = -1;
+
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ if (cs->comprAlg == COMPR_ALG_LIBZ)
+ {
+ #ifdef HAVE_LIBZ
+ z_streamp zp = cs->zp;
+
+ if (AH->compression < 0 || AH->compression > 9)
+ AH->compression = Z_DEFAULT_COMPRESSION;
+
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ if (action == COMPRESSOR_DEFLATE)
+ if (deflateInit(zp, AH->compression) != Z_OK)
+ die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
+ if (action == COMPRESSOR_INFLATE)
+ if (inflateInit(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
+
+ /* Just be paranoid - maybe End is called after Start, with no Write */
+ zp->next_out = (void *) cs->comprOut;
+ zp->avail_out = comprOutInitSize;
+ #endif
+ }
+
+ /* Nothing to be done for COMPR_ALG_LIBLZF */
+
+ /* Nothing to be done for COMPR_ALG_NONE */
+ }
+
+ /*
+ * Terminate compression library context and flush its buffers. If no compression
+ * library is in use then just return.
+ */
+ void
+ FlushCompressorState(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ #ifdef HAVE_LIBZ
+ if (cs->comprAlg == COMPR_ALG_LIBZ)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+
+ _DoDeflate(AH, cs, Z_FINISH, writeF);
+
+ if (deflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
+ }
+ #endif
+ #ifdef HAVE_LIBLZF
+ if (cs->comprAlg == COMPR_ALG_LIBLZF)
+ {
+ lzf_streamp lzfp = cs->lzfp;
+
+ lzfp->next_in = NULL;
+ lzfp->avail_in = 0;
+
+ _DoDeflate(AH, cs, 1, writeF);
+ }
+ #endif
+ /* Nothing to be done for COMPR_ALG_NONE */
+ }
+
+ void
+ _DoDeflate(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF)
+ {
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ _DoDeflateZlib(AH, cs, flush, writeF);
+ #endif
+ break;
+ case COMPR_ALG_LIBLZF:
+ #ifdef HAVE_LIBLZF
+ _DoDeflateLZF(AH, cs, flush, writeF);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ Assert(false);
+ break;
+ }
+ }
+
+
+ #ifdef HAVE_LIBLZF
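+ /*
+ * Compress one block and write it out via writeF, preceded by a 6-byte
+ * header: 'Z', then 'C' (compressed) or 'U' (stored uncompressed), then
+ * the uncompressed and the compressed size as two big-endian 16-bit
+ * values (the compressed size is zero for 'U' blocks). If lzf_compress()
+ * cannot shrink the block, the data is stored uncompressed.
+ */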
+ void
+ DoDeflateBufferLZF(ArchiveHandle *AH, char *in, bool isBase, size_t dLen, char *outBase, WriteFunc writeF)
+ {
+ size_t avail;
+ char *header;
+ size_t len;
+ const char *start = in;
+
+ if (isBase)
+ start += LZF_HDR_SIZE;
+
+ avail = lzf_compress(start, dLen, outBase + LZF_HDR_SIZE, dLen - 1);
+
+ if (avail == 0)
+ {
+ /* The output buffer was not large enough. Since the output buffer
+ * is always one byte smaller than the input buffer, we save more
+ * space by just storing the data uncompressed. */
+ if (!isBase)
+ {
+ memcpy(outBase + LZF_HDR_SIZE, in, dLen);
+ header = outBase;
+ }
+ else
+ header = in;
+ header[0] = 'Z';
+ header[1] = 'U'; /* not compressed */
+ header[2] = dLen >> 8;
+ header[3] = dLen & 0xff;
+ header[4] = 0;
+ header[5] = 0;
+ len = dLen + LZF_HDR_SIZE;
+ }
+ else
+ {
+ header = outBase;
+ header[0] = 'Z';
+ header[1] = 'C'; /* compressed */
+ header[2] = dLen >> 8;
+ header[3] = dLen & 0xff;
+ header[4] = avail >> 8;
+ header[5] = avail & 0xff;
+ len = avail + LZF_HDR_SIZE;
+ }
+ writeF(AH, header, len);
+ }
+ #endif
+
+
+ /*
+ * Send compressed data to the output stream (via writeF).
+ */
+ #ifdef HAVE_LIBLZF
+ void
+ _DoDeflateLZF(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF)
+ {
+ lzf_streamp lzfp = cs->lzfp;
+ size_t freeBytes;
+ size_t copyBytes;
+ size_t remainBytes;
+
+ freeBytes = LZF_BLOCKSIZE - lzfp->comprInFill;
+ copyBytes = (freeBytes >= lzfp->avail_in) ? lzfp->avail_in : freeBytes;
+ memcpy(cs->comprIn + lzfp->comprInFill, lzfp->next_in, copyBytes);
+
+ lzfp->comprInFill += copyBytes;
+
+ if (lzfp->comprInFill < LZF_BLOCKSIZE && !flush)
+ return;
+
+ DoDeflateBufferLZF(AH, lzfp->comprInBase, true, lzfp->comprInFill,
+ lzfp->comprOutBase, writeF);
+
+ for(;;)
+ {
+ remainBytes = lzfp->avail_in - copyBytes;
+ if (remainBytes < LZF_BLOCKSIZE)
+ break;
+ DoDeflateBufferLZF(AH, lzfp->next_in + copyBytes, false, LZF_BLOCKSIZE,
+ lzfp->comprOutBase, writeF);
+ copyBytes += LZF_BLOCKSIZE;
+ }
+ /* copy remaining bytes and overwrite "in" buffer */
+ memcpy(cs->comprIn, lzfp->next_in + copyBytes, remainBytes);
+ lzfp->comprInFill = remainBytes;
+ }
+ #endif
+
+ #ifdef HAVE_LIBZ
+ /*
+ * Send compressed data to the output stream (via writeF).
+ */
+ void
+ _DoDeflateZlib(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+
+ Assert(AH->compression != 0);
+
+ while (cs->zp->avail_in != 0 || flush)
+ {
+ res = deflate(zp, flush);
+ if (res == Z_STREAM_ERROR)
+ die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
+ if (((flush == Z_FINISH) && (zp->avail_out < comprOutInitSize))
+ || (zp->avail_out == 0)
+ || (zp->avail_in != 0)
+ )
+ {
+ /*
+ * Extra paranoia: avoid zero-length chunks, since a zero length
+ * chunk is the EOF marker in the custom format. This should never
+ * happen but...
+ */
+ if (zp->avail_out < comprOutInitSize)
+ {
+ /*
+ * Any write function should do its own error checking, but
+ * to be safe we do a check here as well...
+ */
+ size_t len = comprOutInitSize - zp->avail_out;
+ if (writeF(AH, out, len) != len)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+ }
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ if (res == Z_STREAM_END)
+ break;
+ }
+ }
+ #endif
+
+ static void
+ _DoInflate(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ _DoInflateZlib(AH, cs, readF);
+ #endif
+ break;
+ case COMPR_ALG_LIBLZF:
+ #ifdef HAVE_LIBLZF
+ _DoInflateLZF(AH, cs, readF);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ Assert(false);
+ break;
+ }
+ }
+
+ #ifdef HAVE_LIBLZF
+ static void
+ _DoInflateLZF(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ void *in;
+ lzf_streamp lzfp = cs->lzfp;
+ size_t cnt;
+ char *header;
+ char *data;
+ size_t dLen;
+ size_t uncompressedSize, compressedSize, needSize;
+ bool isCompressed;
+
+ /* first we need at least LZF_HDR_SIZE */
+ while ((cnt = readF(AH, &in, LZF_HDR_SIZE)))
+ {
+ /* then we check the header and read the compressed size until we have
+ * LZF_HDR_SIZE + compressed_size. */
+ if (cnt < LZF_HDR_SIZE)
+ die_horribly(AH, modulename, "corrupted archive");
+
+ header = (char *) in;
+
+ if (header[0] != 'Z' || (header[1] != 'C' && header[1] != 'U'))
+ die_horribly(AH, modulename, "corrupted archive");
+
+ uncompressedSize = (unsigned char) header[2] << 8 | (unsigned char) header[3];
+ compressedSize = (unsigned char) header[4] << 8 | (unsigned char) header[5];
+ isCompressed = header[1] == 'C';
+ needSize = isCompressed ? compressedSize : uncompressedSize;
+
+ /*
+ * If we read more data in the beginning, it must match exactly the
+ * required size (because then the archive was written in blocks and
+ * the size of each block got recorded).
+ */
+ if (cnt > LZF_HDR_SIZE)
+ {
+ if (cnt != LZF_HDR_SIZE + needSize)
+ die_horribly(AH, modulename, "corrupted archive");
+
+ lzfp->avail_in = cnt - LZF_HDR_SIZE;
+ lzfp->next_in = (char *) in + LZF_HDR_SIZE;
+ }
+ else
+ {
+ cnt = readF(AH, &in, needSize);
+ if (cnt != needSize)
+ die_horribly(AH, modulename, "corrupted archive");
+
+ lzfp->avail_in = cnt;
+ lzfp->next_in = (char *) in;
+ }
+
+ if (isCompressed)
+ {
+ dLen = lzf_decompress(lzfp->next_in,
+ lzfp->avail_in,
+ cs->comprOut, cs->comprOutSize);
+
+ if (uncompressedSize != dLen)
+ die_horribly(AH, modulename, "corrupted archive");
+
+ data = cs->comprOut;
+ }
+ else
+ {
+ /* uncompressed data */
+ data = lzfp->next_in;
+ dLen = lzfp->avail_in;
+ }
+ data[dLen] = '\0';
+ ahwrite(data, 1, dLen, AH);
+ }
+ }
+ #endif
+
+ #ifdef HAVE_LIBZ
+ /*
+ * This function is void as it either returns successfully or fails via
+ * die_horribly().
+ */
+ static void
+ _DoInflateZlib(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+ size_t cnt;
+ void *in;
+
+ Assert(AH->compression != 0);
+
+ /* no minimal chunk size for zlib */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ zp->next_in = (void *) in;
+ zp->avail_in = cnt;
+
+ while (zp->avail_in > 0)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+ }
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+ while (res != Z_STREAM_END)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+
+ if (inflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
+ }
+ #endif
+
+ void
+ ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ case COMPR_ALG_LIBLZF:
+ _DoInflate(AH, cs, readF);
+ break;
+ case COMPR_ALG_NONE:
+ {
+ size_t cnt;
+ void *in;
+
+ /* no minimal chunk size for uncompressed data */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ ahwrite(in, 1, cnt, AH);
+ }
+ }
+ }
+ }
+
+ size_t
+ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ cs->zp->next_in = (void *) data;
+ cs->zp->avail_in = dLen;
+ _DoDeflate(AH, cs, Z_NO_FLUSH, writeF);
+ #endif
+ break;
+ case COMPR_ALG_LIBLZF:
+ #ifdef HAVE_LIBLZF
+ cs->lzfp->next_in = (char *) data;
+ cs->lzfp->avail_in = dLen;
+ _DoDeflate(AH, cs, 0, writeF);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ /*
+ * Any write function should do its own error checking, but to be safe
+ * we do a check here as well...
+ */
+ if (writeF(AH, data, dLen) != dLen)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+ }
+ /* we have either succeeded in writing dLen bytes or we have called die_horribly() */
+ return dLen;
+ }
+
+ CompressorState *
+ AllocateCompressorState(ArchiveHandle *AH)
+ {
+ CompressorAlgorithm alg = COMPR_ALG_NONE;
+ CompressorState *cs;
+
+ /*
+ * AH->compression is set either on the commandline when creating an archive
+ * or by ReadHead() when restoring an archive.
+ */
+
+ switch (AH->compression)
+ {
+ case Z_DEFAULT_COMPRESSION:
+ alg = COMPR_ALG_LIBZ;
+ break;
+ case 0:
+ alg = COMPR_ALG_NONE;
+ break;
+ case 1:
+ case 2:
+ case 3:
+ case 4:
+ case 5:
+ case 6:
+ case 7:
+ case 8:
+ case 9:
+ alg = COMPR_ALG_LIBZ;
+ break;
+ case COMPR_LZF_CODE:
+ alg = COMPR_ALG_LIBLZF;
+ break;
+ default:
+ die_horribly(AH, modulename, "Invalid compression code: %d\n",
+ AH->compression);
+ }
+
+ #ifndef HAVE_LIBZ
+ if (alg == COMPR_ALG_LIBZ)
+ die_horribly(AH, modulename, "not built with zlib support\n");
+ #endif
+ #ifndef HAVE_LIBLZF
+ if (alg == COMPR_ALG_LIBLZF)
+ die_horribly(AH, modulename, "not built with liblzf support\n");
+ #endif
+
+ cs = (CompressorState *) malloc(sizeof(CompressorState));
+ if (cs == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ cs->comprAlg = alg;
+
+ switch(alg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ cs->zp = (z_streamp) malloc(sizeof(z_stream));
+ if (cs->zp == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ /*
+ * comprOutInitSize is the buffer size we tell zlib it can output
+ * to. We actually allocate one extra byte because some routines
+ * want to append a trailing zero byte to the zlib output. The
+ * input buffer is expansible and is always of size
+ * cs->comprInSize; comprInInitSize is just the initial default
+ * size for it.
+ */
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ #endif
+ break;
+ case COMPR_ALG_LIBLZF:
+ #ifdef HAVE_LIBLZF
+ cs->lzfp = (lzf_streamp) malloc(sizeof(lzf_stream));
+ if (cs->lzfp == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ cs->lzfp->comprOutBase = (char *) malloc(comprOutInitSize + LZF_HDR_SIZE);
+ cs->lzfp->comprInBase = (char *) malloc(comprInInitSize + LZF_HDR_SIZE);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->lzfp->comprOutBase == NULL || cs->lzfp->comprInBase == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ cs->comprIn = cs->lzfp->comprInBase + LZF_HDR_SIZE;
+ cs->comprOut = cs->lzfp->comprOutBase + LZF_HDR_SIZE;
+
+ cs->lzfp->comprInFill = 0;
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ break;
+ }
+
+ return cs;
+ }
+
+ void
+ FreeCompressorState(CompressorState *cs)
+ {
+ free(cs->comprOut);
+ free(cs->comprIn);
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_NONE:
+ break;
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ free(cs->zp);
+ #endif
+ break;
+ case COMPR_ALG_LIBLZF:
+ #ifdef HAVE_LIBLZF
+ free(cs->lzfp);
+ #endif
+ break;
+ }
+ free(cs);
+ }
+
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index ...416cccc .
*** a/src/bin/pg_dump/compress_io.h
--- b/src/bin/pg_dump/compress_io.h
***************
*** 0 ****
--- 1,95 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.h
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "pg_backup_archiver.h"
+
+ #define comprOutInitSize 4096000
+ #define comprInInitSize 4096000
+
+
+ #ifdef HAVE_LIBLZF
+ #include "lzf.h"
+ /* we cannot do more with the current header format */
+ #define LZF_BLOCKSIZE (1024 * 64 - 1)
+ #define LZF_HDR_SIZE 6
+ typedef struct
+ {
+ char *next_in;
+ char *comprInBase;
+ char *comprOutBase;
+ size_t comprInFill; /* how much of comprIn are we using ? */
+ size_t avail_in;
+ } lzf_stream;
+
+ typedef lzf_stream *lzf_streamp;
+ #endif
+
+ typedef enum
+ {
+ COMPRESSOR_INFLATE,
+ COMPRESSOR_DEFLATE
+ } CompressorAction;
+
+ typedef enum
+ {
+ COMPR_ALG_NONE,
+ COMPR_ALG_LIBZ,
+ COMPR_ALG_LIBLZF
+ } CompressorAlgorithm;
+
+ #define COMPR_LZF_CODE 100
+
+ typedef struct
+ {
+ CompressorAlgorithm comprAlg;
+ #ifdef HAVE_LIBZ
+ z_streamp zp;
+ #endif
+ #ifdef HAVE_LIBLZF
+ lzf_streamp lzfp;
+ #endif
+ char *comprOut;
+ char *comprIn;
+ size_t comprInSize;
+ size_t comprOutSize;
+ } CompressorState;
+
+ typedef size_t (*WriteFunc)(ArchiveHandle *AH, const void *buf, size_t len);
+ /*
+ * The sizeHint parameter tells the format how many bytes the algorithm needs.
+ * If the format doesn't know better, it should return that many bytes from the
+ * input. If the format was written in blocks, however, it already knows the
+ * block size and can deliver exactly the size of the next block.
+ *
+ * The custom archive is written in such blocks. The directory archive, on the
+ * other hand, is just a continuous stream of data; with liblzf we get blocks
+ * at the algorithm level, and the algorithm can then tell the format how much
+ * data it is ready to consume next.
+ */
+ typedef size_t (*ReadFunc)(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ void ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ size_t WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF, const void *data, size_t dLen);
+
+ void InitCompressorState(ArchiveHandle *AH, CompressorState *cs, CompressorAction action);
+ void FlushCompressorState(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF);
+
+ void FreeCompressorState(CompressorState *cs);
+ CompressorState *AllocateCompressorState(ArchiveHandle *AH);
+
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 8fa9a57..5def7a7 100644
*** a/src/bin/pg_dump/pg_backup.h
--- b/src/bin/pg_dump/pg_backup.h
*************** typedef enum _archiveFormat
*** 48,56 ****
{
archUnknown = 0,
archCustom = 1,
! archFiles = 2,
! archTar = 3,
! archNull = 4
} ArchiveFormat;
typedef enum _archiveMode
--- 48,58 ----
{
archUnknown = 0,
archCustom = 1,
! archDirectory = 2,
! archFiles = 3,
! archTar = 4,
! archNull = 5,
! archNullAppend = 6
} ArchiveFormat;
typedef enum _archiveMode
*************** typedef struct _restoreOptions
*** 112,117 ****
--- 114,120 ----
int schemaOnly;
int verbose;
int aclsSkip;
+ int checkArchive;
int tocSummary;
char *tocFile;
int format;
*************** extern Archive *CreateArchive(const char
*** 195,200 ****
--- 198,206 ----
/* The --list option */
extern void PrintTOCSummary(Archive *AH, RestoreOptions *ropt);
+ /* Check an existing archive */
+ extern bool CheckArchive(Archive *AH, RestoreOptions *ropt);
+
extern RestoreOptions *NewRestoreOptions(void);
/* Rearrange and filter TOC entries */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index d1a9c54..c5b5fcc 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 22,30 ****
--- 22,32 ----
#include "pg_backup_db.h"
#include "dumputils.h"
+ #include "compress_io.h"
#include <ctype.h>
#include <unistd.h>
+ #include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
*************** static int _discoverArchiveFormat(Archiv
*** 108,113 ****
--- 110,117 ----
static void dump_lo_buf(ArchiveHandle *AH);
static void _write_msg(const char *modulename, const char *fmt, va_list ap);
static void _die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt, va_list ap);
+ static const char *getFmtName(ArchiveFormat fmt);
+ static void outputSummaryHeaderText(Archive *AHX);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static OutputContext SetOutput(ArchiveHandle *AH, char *filename, int compression);
*************** RestoreArchive(Archive *AHX, RestoreOpti
*** 230,242 ****
* Make sure we won't need (de)compression we haven't got
*/
#ifndef HAVE_LIBZ
! if (AH->compression != 0 && AH->PrintTocDataPtr !=NULL)
{
for (te = AH->toc->next; te != AH->toc; te = te->next)
{
reqs = _tocEntryRequired(te, ropt, false);
if (te->hadDumper && (reqs & REQ_DATA) != 0)
! die_horribly(AH, modulename, "cannot restore from compressed archive (compression not supported in this installation)\n");
}
}
#endif
--- 234,258 ----
* Make sure we won't need (de)compression we haven't got
*/
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9 && AH->PrintTocDataPtr !=NULL)
{
for (te = AH->toc->next; te != AH->toc; te = te->next)
{
reqs = _tocEntryRequired(te, ropt, false);
if (te->hadDumper && (reqs & REQ_DATA) != 0)
! die_horribly(AH, modulename, "cannot restore from compressed archive (zlib compression not supported in this installation)\n");
! }
! }
! #endif
! #ifndef HAVE_LIBLZF
! /* XXX are these checks correct?? */
! if (AH->compression == COMPR_LZF_CODE && AH->PrintTocDataPtr !=NULL)
! {
! for (te = AH->toc->next; te != AH->toc; te = te->next)
! {
! reqs = _tocEntryRequired(te, ropt, false);
! if (te->hadDumper && (reqs & REQ_DATA) != 0)
! die_horribly(AH, modulename, "cannot restore from compressed archive (lzf compression not supported in this installation)\n");
}
}
#endif
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 778,817 ****
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
- char *fmtName;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
! ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
! AH->archdbname, AH->tocCount, AH->compression);
!
! switch (AH->format)
! {
! case archFiles:
! fmtName = "FILES";
! break;
! case archCustom:
! fmtName = "CUSTOM";
! break;
! case archTar:
! fmtName = "TAR";
! break;
! default:
! fmtName = "UNKNOWN";
! }
!
! ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
! ahprintf(AH, "; Format: %s\n", fmtName);
! ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
! ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
! if (AH->archiveRemoteVersion)
! ahprintf(AH, "; Dumped from database version: %s\n",
! AH->archiveRemoteVersion);
! if (AH->archiveDumpVersion)
! ahprintf(AH, "; Dumped by pg_dump version: %s\n",
! AH->archiveDumpVersion);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
--- 794,804 ----
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! outputSummaryHeaderText(AHX);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 840,845 ****
--- 827,869 ----
ResetOutput(AH, sav);
}
+ bool
+ CheckArchive(Archive *AHX, RestoreOptions *ropt)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ TocEntry *te;
+ teReqs reqs;
+ bool checkOK;
+
+ outputSummaryHeaderText(AHX);
+
+ checkOK = (*AH->StartCheckArchivePtr)(AH);
+
+ /* this only gets called from the command line, so we write to stdout
+ * as usual */
+ printf(";\n; Performing Checks...\n;\n");
+
+ for (te = AH->toc->next; te != AH->toc; te = te->next)
+ {
+ if (!(reqs = _tocEntryRequired(te, ropt, true)))
+ continue;
+
+ if (!(*AH->CheckTocEntryPtr)(AH, te, reqs))
+ checkOK = false;
+
+ /* do not dump the contents but only the errors */
+ }
+
+ if (!(*AH->EndCheckArchivePtr)(AH))
+ checkOK = false;
+
+ printf("; Check result: %s\n", checkOK ? "OK" : "FAILED");
+
+ return checkOK;
+ }
+
+
+
/***********
* BLOB Archival
***********/
*************** archprintf(Archive *AH, const char *fmt,
*** 1115,1120 ****
--- 1139,1197 ----
* Stuff below here should be 'private' to the archiver routines
*******************************/
+ static const char *
+ getFmtName(ArchiveFormat fmt)
+ {
+ const char *fmtName;
+
+ switch (fmt)
+ {
+ case archCustom:
+ fmtName = "CUSTOM";
+ break;
+ case archDirectory:
+ fmtName = "DIRECTORY";
+ break;
+ case archFiles:
+ fmtName = "FILES";
+ break;
+ case archTar:
+ fmtName = "TAR";
+ break;
+ default:
+ fmtName = "UNKNOWN";
+ }
+
+ return fmtName;
+ }
+
+ static void
+ outputSummaryHeaderText(Archive *AHX)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ const char *fmtName;
+
+ ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
+ ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
+ AH->archdbname, AH->tocCount, AH->compression);
+
+ fmtName = getFmtName(AH->format);
+
+ ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
+ ahprintf(AH, "; Format: %s\n", fmtName);
+ ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
+ ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
+ if (AH->archiveRemoteVersion)
+ ahprintf(AH, "; Dumped from database version: %s\n",
+ AH->archiveRemoteVersion);
+ if (AH->archiveDumpVersion)
+ ahprintf(AH, "; Dumped by pg_dump version: %s\n",
+ AH->archiveDumpVersion);
+
+ if (AH->PrintExtraTocSummaryPtr != NULL)
+ (*AH->PrintExtraTocSummaryPtr) (AH);
+ }
+
static OutputContext
SetOutput(ArchiveHandle *AH, char *filename, int compression)
{
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1720,1725 ****
--- 1797,1804 ----
char sig[6]; /* More than enough */
size_t cnt;
int wantClose = 0;
+ char buf[MAXPGPATH];
+ struct stat st;
#if 0
write_msg(modulename, "attempting to ascertain archive format\n");
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1736,1742 ****
if (AH->fSpec)
{
wantClose = 1;
! fh = fopen(AH->fSpec, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
--- 1815,1836 ----
if (AH->fSpec)
{
wantClose = 1;
! /*
! * Check whether the specified archive is actually a directory. If so,
! * we open its TOC file instead.
! */
! buf[0] = '\0';
! if (stat(AH->fSpec, &st) == 0 && S_ISDIR(st.st_mode))
! {
! if (snprintf(buf, MAXPGPATH, "%s/%s", AH->fSpec, "TOC") >= MAXPGPATH)
! die_horribly(AH, modulename, "directory name too long: \"%s\"\n",
! AH->fSpec);
! }
!
! if (strlen(buf) == 0)
! strcpy(buf, AH->fSpec);
!
! fh = fopen(buf, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
*************** _allocAH(const char *FileSpec, const Arc
*** 1949,1954 ****
--- 2043,2052 ----
InitArchiveFmt_Custom(AH);
break;
+ case archDirectory:
+ InitArchiveFmt_Directory(AH);
+ break;
+
case archFiles:
InitArchiveFmt_Files(AH);
break;
*************** WriteHead(ArchiveHandle *AH)
*** 2974,2984 ****
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
#endif
WriteInt(AH, AH->compression);
--- 3072,3084 ----
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
! {
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
! }
#endif
WriteInt(AH, AH->compression);
*************** ReadHead(ArchiveHandle *AH)
*** 3062,3068 ****
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
--- 3162,3172 ----
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
! write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
! #endif
! #ifndef HAVE_LIBLZF
! if (AH->compression == COMPR_LZF_CODE)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index ae0c6e0..9eb9f6f 100644
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
***************
*** 49,54 ****
--- 49,55 ----
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
+ /* this is just the redefinition of a libz constant */
#define Z_DEFAULT_COMPRESSION (-1)
typedef struct _z_stream
*************** typedef struct _z_stream
*** 61,66 ****
--- 62,76 ----
typedef z_stream *z_streamp;
#endif
+ /* XXX eventually this should be an enum. However if we want something
+ * pluggable in the long run it can get hard to add values to a central
+ * enum from the plugins... */
+ #define COMPRESSION_UNKNOWN (-2)
+ #define COMPRESSION_NONE 0
+
+ /* XXX should we change the archive version for pg_dump with directory support?
+ * XXX We are not actually modifying the existing formats, but on the other hand
+ * XXX a file could now be compressed with liblzf. */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
*************** struct _archiveHandle;
*** 103,108 ****
--- 113,125 ----
struct _tocEntry;
struct _restoreList;
+ typedef enum
+ {
+ REQ_SCHEMA = 1,
+ REQ_DATA = 2,
+ REQ_ALL = REQ_SCHEMA + REQ_DATA
+ } teReqs;
+
typedef void (*ClosePtr) (struct _archiveHandle * AH);
typedef void (*ReopenPtr) (struct _archiveHandle * AH);
typedef void (*ArchiveEntryPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
*************** typedef void (*WriteExtraTocPtr) (struct
*** 125,134 ****
--- 142,156 ----
typedef void (*ReadExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintTocDataPtr) (struct _archiveHandle * AH, struct _tocEntry * te, RestoreOptions *ropt);
+ typedef void (*PrintExtraTocSummaryPtr) (struct _archiveHandle * AH);
typedef void (*ClonePtr) (struct _archiveHandle * AH);
typedef void (*DeClonePtr) (struct _archiveHandle * AH);
+ typedef bool (*StartCheckArchivePtr)(struct _archiveHandle * AH);
+ typedef bool (*CheckTocEntryPtr)(struct _archiveHandle * AH, struct _tocEntry * te, teReqs reqs);
+ typedef bool (*EndCheckArchivePtr)(struct _archiveHandle * AH);
+
typedef size_t (*CustomOutPtr) (struct _archiveHandle * AH, const void *buf, size_t len);
typedef struct _outputContext
*************** typedef enum
*** 167,179 ****
STAGE_FINALIZING
} ArchiverStage;
- typedef enum
- {
- REQ_SCHEMA = 1,
- REQ_DATA = 2,
- REQ_ALL = REQ_SCHEMA + REQ_DATA
- } teReqs;
-
typedef struct _archiveHandle
{
Archive public; /* Public part of archive */
--- 189,194 ----
*************** typedef struct _archiveHandle
*** 229,234 ****
--- 244,250 ----
* archie format */
PrintExtraTocPtr PrintExtraTocPtr; /* Extra TOC info for format */
PrintTocDataPtr PrintTocDataPtr;
+ PrintExtraTocSummaryPtr PrintExtraTocSummaryPtr;
StartBlobsPtr StartBlobsPtr;
EndBlobsPtr EndBlobsPtr;
*************** typedef struct _archiveHandle
*** 238,243 ****
--- 254,263 ----
ClonePtr ClonePtr; /* Clone format-specific fields */
DeClonePtr DeClonePtr; /* Clean up cloned fields */
+ StartCheckArchivePtr StartCheckArchivePtr;
+ CheckTocEntryPtr CheckTocEntryPtr;
+ EndCheckArchivePtr EndCheckArchivePtr;
+
CustomOutPtr CustomOutPtr; /* Alternative script output routine */
/* Stuff for direct DB connection */
*************** typedef struct _archiveHandle
*** 267,272 ****
--- 287,297 ----
struct _tocEntry *currToc; /* Used when dumping data */
int compression; /* Compression requested on open */
+ /* Possible values for compression:
+ 0 no compression
+ 1-9 levels for gzip compression
+ 100 liblzf compression (see COMPR_LZF_CODE)
+ */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
*************** extern void EndRestoreBlob(ArchiveHandle
*** 367,372 ****
--- 392,398 ----
extern void EndRestoreBlobs(ArchiveHandle *AH);
extern void InitArchiveFmt_Custom(ArchiveHandle *AH);
+ extern void InitArchiveFmt_Directory(ArchiveHandle *AH);
extern void InitArchiveFmt_Files(ArchiveHandle *AH);
extern void InitArchiveFmt_Null(ArchiveHandle *AH);
extern void InitArchiveFmt_Tar(ArchiveHandle *AH);
*************** int ahprintf(ArchiveHandle *AH, const
*** 381,384 ****
--- 407,421 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ #ifdef USE_ASSERT_CHECKING
+ #define Assert(condition) \
+ if (!(condition)) \
+ { \
+ write_msg(NULL, "Failed assertion in %s, line %d\n", \
+ __FILE__, __LINE__); \
+ abort();\
+ }
+ #else
+ #define Assert(condition)
+ #endif
#endif
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index 2bc7e8f..ccc9acb 100644
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
***************
*** 25,30 ****
--- 25,31 ----
*/
#include "pg_backup_archiver.h"
+ #include "compress_io.h"
/*--------
* Routines in the format interface
*************** static void _LoadBlobs(ArchiveHandle *AH
*** 58,77 ****
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! /*------------
! * Buffers used in zlib compression and extra data stored in archive and
! * in TOC entries.
! *------------
! */
! #define zlibOutSize 4096
! #define zlibInSize 4096
typedef struct
{
! z_streamp zp;
! char *zlibOut;
! char *zlibIn;
! size_t inSize;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
--- 59,70 ----
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! static size_t _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len);
! static size_t _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint);
typedef struct
{
! CompressorState *cs;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
*************** typedef struct
*** 81,86 ****
--- 74,80 ----
{
int dataState;
pgoff_t dataPos;
+ int restore_status;
} lclTocEntry;
*************** static void _readBlockHeader(ArchiveHand
*** 92,98 ****
static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
static pgoff_t _getFilePos(ArchiveHandle *AH, lclContext *ctx);
- static int _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush);
static const char *modulename = gettext_noop("custom archiver");
--- 86,91 ----
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 128,133 ****
--- 121,127 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 136,141 ****
--- 130,139 ----
AH->ClonePtr = _Clone;
AH->DeClonePtr = _DeClone;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 144,179 ****
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
- ctx->zp = (z_streamp) malloc(sizeof(z_stream));
- if (ctx->zp == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
- /*
- * zlibOutSize is the buffer size we tell zlib it can output to. We
- * actually allocate one extra byte because some routines want to append a
- * trailing zero byte to the zlib output. The input buffer is expansible
- * and is always of size ctx->inSize; zlibInSize is just the initial
- * default size for it.
- */
- ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
- ctx->zlibIn = (char *) malloc(zlibInSize);
- ctx->inSize = zlibInSize;
ctx->filePos = 0;
- if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
--- 142,162 ----
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
ctx->filePos = 0;
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
+ ctx->cs = AllocateCompressorState(AH);
+
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 211,216 ****
--- 194,201 ----
ctx->hasSeek = checkSeek(AH->FH);
ReadHead(AH);
+ ctx->cs = AllocateCompressorState(AH);
+
ReadToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
}
*************** static size_t
*** 340,356 ****
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! zp->next_in = (void *) data;
! zp->avail_in = dLen;
! while (zp->avail_in != 0)
! {
! /* printf("Deflating %lu bytes\n", (unsigned long) dLen); */
! _DoDeflate(AH, ctx, 0);
! }
! return dLen;
}
/*
--- 325,333 ----
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! return WriteDataToArchive(AH, cs, _CustomWriteFunc, data, dLen);
}
/*
*************** static void
*** 533,639 ****
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! size_t blkLen;
! char *in = ctx->zlibIn;
! size_t cnt;
!
! #ifdef HAVE_LIBZ
! int res;
! char *out = ctx->zlibOut;
! #endif
!
! #ifdef HAVE_LIBZ
!
! res = Z_OK;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (inflateInit(zp) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #endif
!
! blkLen = ReadInt(AH);
! while (blkLen != 0)
! {
! if (blkLen + 1 > ctx->inSize)
! {
! free(ctx->zlibIn);
! ctx->zlibIn = NULL;
! ctx->zlibIn = (char *) malloc(blkLen + 1);
! if (!ctx->zlibIn)
! die_horribly(AH, modulename, "out of memory\n");
!
! ctx->inSize = blkLen + 1;
! in = ctx->zlibIn;
! }
!
! cnt = fread(in, 1, blkLen, AH->FH);
! if (cnt != blkLen)
! {
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
! }
!
! ctx->filePos += blkLen;
!
! zp->next_in = (void *) in;
! zp->avail_in = blkLen;
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! while (zp->avail_in != 0)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! }
! else
! #endif
! {
! in[zp->avail_in] = '\0';
! ahwrite(in, 1, zp->avail_in, AH);
! zp->avail_in = 0;
! }
! blkLen = ReadInt(AH);
! }
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
! while (res != Z_STREAM_END)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! if (inflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
! }
! #endif
}
static void
--- 510,519 ----
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! InitCompressorState(AH, cs, COMPRESSOR_INFLATE);
! ReadDataFromArchive(AH, cs, _CustomReadFunction);
}
static void
*************** static void
*** 683,701 ****
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *in = ctx->zlibIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > ctx->inSize)
{
! free(ctx->zlibIn);
! ctx->zlibIn = (char *) malloc(blkLen);
! ctx->inSize = blkLen;
! in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
--- 563,582 ----
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
size_t blkLen;
! char *in = cs->comprIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = (char *) malloc(blkLen);
! cs->comprInSize = blkLen;
! in = cs->comprIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
*************** _readBlockHeader(ArchiveHandle *AH, int
*** 961,1099 ****
}
/*
! * If zlib is available, then startit up. This is called from
! * StartData & StartBlob. The buffers are setup in the Init routine.
*/
static void
_StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! #ifdef HAVE_LIBZ
!
! if (AH->compression < 0 || AH->compression > 9)
! AH->compression = Z_DEFAULT_COMPRESSION;
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
! if (deflateInit(zp, AH->compression) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #else
! AH->compression = 0;
! #endif
! /* Just be paranoid - maybe End is called after Start, with no Write */
! zp->next_out = (void *) ctx->zlibOut;
! zp->avail_out = zlibOutSize;
}
! /*
! * Send compressed data to the output stream (via ahwrite).
! * Each data chunk is preceded by it's length.
! * In the case of Z0, or no zlib, just write the raw data.
! *
! */
! static int
! _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush)
{
! z_streamp zp = ctx->zp;
! #ifdef HAVE_LIBZ
! char *out = ctx->zlibOut;
! int res = Z_OK;
! if (AH->compression != 0)
{
! res = deflate(zp, flush);
! if (res == Z_STREAM_ERROR)
! die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
! if (((flush == Z_FINISH) && (zp->avail_out < zlibOutSize))
! || (zp->avail_out == 0)
! || (zp->avail_in != 0)
! )
! {
! /*
! * Extra paranoia: avoid zero-length chunks since a zero length
! * chunk is the EOF marker. This should never happen but...
! */
! if (zp->avail_out < zlibOutSize)
! {
! /*
! * printf("Wrote %lu byte deflated chunk\n", (unsigned long)
! * (zlibOutSize - zp->avail_out));
! */
! WriteInt(AH, zlibOutSize - zp->avail_out);
! if (fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH) != (zlibOutSize - zp->avail_out))
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zlibOutSize - zp->avail_out;
! }
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! }
}
! else
! #endif
{
! if (zp->avail_in > 0)
! {
! WriteInt(AH, zp->avail_in);
! if (fwrite(zp->next_in, 1, zp->avail_in, AH->FH) != zp->avail_in)
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zp->avail_in;
! zp->avail_in = 0;
! }
else
! {
! #ifdef HAVE_LIBZ
! if (flush == Z_FINISH)
! res = Z_STREAM_END;
! #endif
! }
}
!
! #ifdef HAVE_LIBZ
! return res;
! #else
! return 1;
! #endif
}
/*
! * Terminate zlib context and flush it's buffers. If no zlib
! * then just return.
*/
static void
_EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
! #ifdef HAVE_LIBZ
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! int res;
!
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
!
! do
! {
! /* printf("Ending data output\n"); */
! res = _DoDeflate(AH, ctx, Z_FINISH);
! } while (res != Z_STREAM_END);
!
! if (deflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
! }
! #endif
/* Send the end marker */
WriteInt(AH, 0);
--- 842,924 ----
}
/*
! * If a compression algorithm is available, then start it up. This is called
! * from StartData & StartBlob. The buffers are set up in the Init routine.
*/
static void
_StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
! }
! static size_t
! _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len)
! {
! Assert(len != 0);
! /* never write 0-byte blocks (this should not happen) */
! if (len == 0)
! return 0;
! WriteInt(AH, len);
! return _WriteBuf(AH, buf, len);
}
! static size_t
! _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! size_t blkLen;
! size_t cnt;
! /*
! * We deliberately ignore the sizeHint parameter because we know
! * the exact size of the next compressed block (=blkLen).
! */
! blkLen = ReadInt(AH);
!
! if (blkLen == 0)
! return 0;
!
! if (blkLen + 1 > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = NULL;
! cs->comprIn = (char *) malloc(blkLen + 1);
! if (!cs->comprIn)
! die_horribly(AH, modulename, "out of memory\n");
! cs->comprInSize = blkLen + 1;
}
! cnt = _ReadBuf(AH, cs->comprIn, blkLen);
! if (cnt != blkLen)
{
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
}
! *buf = cs->comprIn;
! return cnt;
}
/*
! * Terminate the compression context and flush its buffers.
*/
static void
_EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
! FlushCompressorState(AH, cs, _CustomWriteFunc);
/* Send the end marker */
WriteInt(AH, 0);
*************** _Clone(ArchiveHandle *AH)
*** 1114,1125 ****
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->zp = (z_streamp) malloc(sizeof(z_stream));
! ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
! ctx->zlibIn = (char *) malloc(ctx->inSize);
!
! if (ctx->zp == NULL || ctx->zlibOut == NULL || ctx->zlibIn == NULL)
! die_horribly(AH, modulename, "out of memory\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
--- 939,945 ----
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->cs = AllocateCompressorState(AH);
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
*************** static void
*** 1133,1141 ****
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- free(ctx->zlibOut);
- free(ctx->zlibIn);
- free(ctx->zp);
free(ctx);
}
--- 953,962 ----
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ FreeCompressorState(cs);
free(ctx);
}
+
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index ...1da57b3 .
*** a/src/bin/pg_dump/pg_backup_directory.c
--- b/src/bin/pg_dump/pg_backup_directory.c
***************
*** 0 ****
--- 1,1496 ----
+ /*-------------------------------------------------------------------------
+ *
+ * pg_backup_directory.c
+ *
+ * This file is copied from the 'files' format file and dumps data into
+ * separate files in a directory.
+ *
+ * See the headers to pg_backup_files & pg_restore for more details.
+ *
+ * Copyright (c) 2000, Philip Warner
+ * Rights are granted to use this software in any way so long
+ * as this notice is not removed.
+ *
+ * The author is not responsible for loss or damages that may
+ * result from its use.
+ *
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include <dirent.h>
+ #include <sys/stat.h>
+
+ #include "compress_io.h"
+ #include "pg_backup_archiver.h"
+ #include "libpq/md5.h"
+ #include "utils/pg_crc.h"
+
+ #ifdef USE_SSL
+ /* for RAND_bytes() */
+ #include <openssl/rand.h>
+ #endif
+
+ #define TOC_FH_ACTIVE (ctx->dataFH == NULL && ctx->blobsTocFH == NULL && AH->FH != NULL)
+ #define BLOBS_TOC_FH_ACTIVE (ctx->dataFH == NULL && ctx->blobsTocFH != NULL)
+ #define DATA_FH_ACTIVE (ctx->dataFH != NULL)
+
+ struct _lclFileHeader;
+ struct _lclContext;
+
+ static void _ArchiveEntry(ArchiveHandle *AH, TocEntry *te);
+ static void _StartData(ArchiveHandle *AH, TocEntry *te);
+ static void _EndData(ArchiveHandle *AH, TocEntry *te);
+ static size_t _WriteData(ArchiveHandle *AH, const void *data, size_t dLen);
+ static int _WriteByte(ArchiveHandle *AH, const int i);
+ static int _ReadByte(ArchiveHandle *);
+ static size_t _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len);
+ static size_t _ReadBuf(ArchiveHandle *AH, void *buf, size_t len);
+ static void _CloseArchive(ArchiveHandle *AH);
+ static void _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
+
+ static void _WriteExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _ReadExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraTocSummary(ArchiveHandle *AH);
+
+ static void _WriteExtraHead(ArchiveHandle *AH);
+ static void _ReadExtraHead(ArchiveHandle *AH);
+
+ static void WriteFileHeader(ArchiveHandle *AH, int type);
+ static int ReadFileHeader(ArchiveHandle *AH, struct _lclFileHeader *fileHeader);
+
+ static void _StartBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt);
+
+ static size_t _DirectoryReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ static bool _StartCheckArchive(ArchiveHandle *AH);
+ static bool _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _CheckFileContents(ArchiveHandle *AH, const char *fname, const char* idStr, bool terminateOnError);
+ static bool _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize, bool terminateOnError);
+ static bool _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size);
+ static bool _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _EndCheckArchive(ArchiveHandle *AH);
+
+ static char *prependDirectory(ArchiveHandle *AH, const char *relativeFilename);
+ static char *prependBlobsDirectory(ArchiveHandle *AH, Oid oid);
+ static void createDirectory(const char *dir, const char *subdir);
+
+ static char *getRandomData(char *s, int len);
+
+ static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
+ static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
+
+ static bool isDirectory(const char *fname);
+ static bool isRegularFile(const char *fname);
+
+ #define K_STD_BUF_SIZE 1024
+ #define FILE_SUFFIX ".dat"
+
+ typedef struct _lclContext
+ {
+ /*
+ * Our archive location. This is basically what the user specified as his
+ * backup file but of course here it is a directory.
+ */
+ char *directory;
+
+ /*
+ * As a directory archive consists of several files, we want to make sure
+ * that we do not mix files of different backups. That's why we
+ * assign a (hopefully) unique ID to every set. This ID is written to the
+ * TOC and to every data file.
+ */
+ char idStr[33];
+
+ /*
+ * In the directory archive format we have three file handles:
+ *
+ * AH->FH points to the TOC
+ * ctx->blobsTocFH points to the TOC for the BLOBs
+ * ctx->dataFH points to data files (both BLOBs and regular)
+ *
+ * Instead of specifying where each I/O operation should go (which would
+ * require their own prototypes anyway and wouldn't be that straightforward
+ * either), we rely on a hierarchy among the file descriptors.
+ *
+ * As a matter of fact we never access any of the TOCs when we are writing
+ * to a data file, only before or after that. Similarly we never access the
+ * general TOC when we have opened the TOC for BLOBs. Given these facts we
+ * can just write our I/O routines such that they access:
+ *
+ * if defined(ctx->dataFH) => access ctx->dataFH
+ * else if defined(ctx->blobsTocFH) => access ctx->blobsTocFH
+ * else => access AH->FH
+ *
+ * To make what is going on more transparent, we use assertions like
+ *
+ * Assert(DATA_FH_ACTIVE); ...
+ *
+ */
+ FILE *dataFH;
+ pgoff_t dataFilePos;
+ FILE *blobsTocFH;
+ pgoff_t blobsTocFilePos;
+ pgoff_t tocFilePos; /* this counts the file position for AH->FH */
+
+ /* these are used for checking a directory archive */
+ DumpId *chkList;
+ int chkListSize;
+
+ CompressorState *cs;
+ } lclContext;
+
+ typedef struct
+ {
+ char *filename; /* filename excluding the directory (basename) */
+ pgoff_t fileSize;
+ } lclTocEntry;
+
+ typedef struct _lclFileHeader
+ {
+ int version;
+ int type; /* BLK_DATA or BLK_BLOBS */
+ char *idStr;
+ } lclFileHeader;
+
+ static const char *modulename = gettext_noop("directory archiver");
+
+ /*
+ * Init routine required by ALL formats. This is a global routine
+ * and should be declared in pg_backup_archiver.h
+ *
+ * Its task is to create any extra archive context (using AH->formatData),
+ * and to initialize the supported function pointers.
+ *
+ * It should also prepare whatever its input source is for reading/writing,
+ * and in the case of a read mode connection, it should load the Header & TOC.
+ */
+ void
+ InitArchiveFmt_Directory(ArchiveHandle *AH)
+ {
+ lclContext *ctx;
+
+ /* Assuming static functions, this can be copied for each format. */
+ AH->ArchiveEntryPtr = _ArchiveEntry;
+ AH->StartDataPtr = _StartData;
+ AH->WriteDataPtr = _WriteData;
+ AH->EndDataPtr = _EndData;
+ AH->WriteBytePtr = _WriteByte;
+ AH->ReadBytePtr = _ReadByte;
+ AH->WriteBufPtr = _WriteBuf;
+ AH->ReadBufPtr = _ReadBuf;
+ AH->ClosePtr = _CloseArchive;
+ AH->ReopenPtr = NULL;
+ AH->PrintTocDataPtr = _PrintTocData;
+ AH->ReadExtraTocPtr = _ReadExtraToc;
+ AH->WriteExtraTocPtr = _WriteExtraToc;
+ AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = _PrintExtraTocSummary;
+
+ AH->StartBlobsPtr = _StartBlobs;
+ AH->StartBlobPtr = _StartBlob;
+ AH->EndBlobPtr = _EndBlob;
+ AH->EndBlobsPtr = _EndBlobs;
+
+ AH->ClonePtr = NULL;
+ AH->DeClonePtr = NULL;
+
+ AH->StartCheckArchivePtr = _StartCheckArchive;
+ AH->CheckTocEntryPtr = _CheckTocEntry;
+ AH->EndCheckArchivePtr = _EndCheckArchive;
+
+ /*
+ * Set up some special context used in compressing data.
+ */
+ ctx = (lclContext *) calloc(1, sizeof(lclContext));
+ if (ctx == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ AH->formatData = (void *) ctx;
+
+ ctx->dataFH = NULL;
+ ctx->blobsTocFH = NULL;
+ ctx->cs = NULL;
+
+ /* Initialize LO buffering */
+ AH->lo_buf_size = LOBBUFSIZE;
+ AH->lo_buf = (void *) malloc(LOBBUFSIZE);
+ if (AH->lo_buf == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ /*
+ * Now open the TOC file
+ */
+
+ if (!AH->fSpec || strcmp(AH->fSpec, "") == 0)
+ die_horribly(AH, modulename, "no directory specified\n");
+
+ ctx->directory = AH->fSpec;
+
+ if (AH->mode == archModeWrite)
+ {
+ char *fname = prependDirectory(AH, "TOC");
+ char buf[256];
+
+ /*
+ * Create the ID string, basically a large random number that prevents
+ * us from mixing files from different backups.
+ */
+ getRandomData(buf, sizeof(buf));
+ if (!pg_md5_hash(buf, strlen(buf), ctx->idStr))
+ die_horribly(AH, modulename, "Error computing checksum");
+
+ /* Create the directory, errors are caught there */
+ createDirectory(ctx->directory, NULL);
+
+ ctx->cs = AllocateCompressorState(AH);
+
+ AH->FH = fopen(fname, PG_BINARY_W);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+ }
+ else
+ { /* Read Mode */
+ char *fname;
+
+ fname = prependDirectory(AH, "TOC");
+
+ AH->FH = fopen(fname, PG_BINARY_R);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename,
+ "could not open input file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(TOC_FH_ACTIVE);
+
+ ReadHead(AH);
+ _ReadExtraHead(AH);
+ ReadToc(AH);
+
+ /*
+ * We get the compression information from the TOC, hence no need to
+ * initialize the compressor earlier. Also, remember that the TOC file is
+ * always uncompressed. Compression is only used for the data files.
+ */
+ ctx->cs = AllocateCompressorState(AH);
+
+ /* Nothing else in the file, so close it again... */
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n", strerror(errno));
+ }
+ }
+
+ /*
+ * Called by the Archiver when the dumper creates a new TOC entry.
+ *
+ * Optional.
+ *
+ * Set up extra format-related TOC data.
+ */
+ static void
+ _ArchiveEntry(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx;
+ char fn[MAXPGPATH];
+
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ if (te->dataDumper)
+ {
+ sprintf(fn, "%d"FILE_SUFFIX, te->dumpId);
+ tctx->filename = strdup(fn);
+ }
+ else if (strcmp(te->desc, "BLOBS") == 0)
+ {
+ tctx->filename = strdup("BLOBS.TOC");
+ }
+ else
+ tctx->filename = NULL;
+
+ tctx->fileSize = 0;
+ te->formatData = (void *) tctx;
+ }
+
+ /*
+ * Called by the Archiver to save any extra format-related TOC entry
+ * data.
+ *
+ * Optional.
+ *
+ * Use the Archiver routines to write data - they are non-endian, and
+ * maintain other important file information.
+ */
+ static void
+ _WriteExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ /*
+ * A dumpable object has set tctx->filename, any other object hasn't.
+ * (see _ArchiveEntry).
+ */
+ if (tctx->filename)
+ {
+ WriteStr(AH, tctx->filename);
+ WriteOffset(AH, tctx->fileSize, K_OFFSET_POS_SET);
+ }
+ else
+ WriteStr(AH, "");
+ }
+
+ /*
+ * Called by the Archiver to read any extra format-related TOC data.
+ *
+ * Optional.
+ *
+ * Needs to match the order defined in _WriteExtraToc, and should also
+ * use the Archiver input routines.
+ */
+ static void
+ _ReadExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (tctx == NULL)
+ {
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ te->formatData = (void *) tctx;
+ }
+
+ tctx->filename = ReadStr(AH);
+ if (strlen(tctx->filename) == 0)
+ {
+ free(tctx->filename);
+ tctx->filename = NULL;
+ }
+ else
+ ReadOffset(AH, &(tctx->fileSize));
+ }
+
+ /*
+ * Called by the Archiver when restoring an archive to output a comment
+ * that includes useful information about the TOC entry.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (AH->public.verbose && tctx->filename)
+ ahprintf(AH, "-- File: %s\n", tctx->filename);
+ }
+
+ /*
+ * Called by the Archiver when listing the contents of an archive to output a
+ * comment that includes useful information about the archive.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraTocSummary(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ ahprintf(AH, "; ID: %s\n", ctx->idStr);
+ }
+
+
+ /*
+ * Called by the archiver when saving TABLE DATA (not schema). This routine
+ * should save whatever format-specific information is needed to read
+ * the archive back.
+ *
+ * It is called just prior to the dumper's 'DataDumper' routine being called.
+ *
+ * Optional, but strongly recommended.
+ *
+ */
+ static void
+ _StartData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_DATA);
+
+ _StartDataCompressor(AH, te);
+ }
+
+ static void
+ WriteFileHeader(ArchiveHandle *AH, int type)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int compression = AH->compression;
+
+ /*
+ * We always write the header uncompressed. If any compression is active,
+ * switch it off for a moment and restore it after writing the header.
+ */
+ AH->compression = 0;
+ (*AH->WriteBufPtr) (AH, "PGDMP", 5); /* Magic code */
+ (*AH->WriteBytePtr) (AH, AH->vmaj);
+ (*AH->WriteBytePtr) (AH, AH->vmin);
+ (*AH->WriteBytePtr) (AH, AH->vrev);
+
+ _WriteByte(AH, type);
+ WriteStr(AH, ctx->idStr);
+
+ AH->compression = compression;
+ }
+
+ static int
+ ReadFileHeader(ArchiveHandle *AH, lclFileHeader *fileHeader)
+ {
+ char tmpMag[7];
+ int vmaj, vmin, vrev;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int compression = AH->compression;
+ bool err = false;
+
+ Assert(ftell(ctx->dataFH ? ctx->dataFH : ctx->blobsTocFH ? ctx->blobsTocFH : AH->FH) == 0);
+
+ /* Read with compression switched off. See WriteFileHeader() */
+ AH->compression = 0;
+ if ((*AH->ReadBufPtr) (AH, tmpMag, 5) != 5)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+ /* verify the magic string; a mismatch means this is no pg_dump file */
+ if (strncmp(tmpMag, "PGDMP", 5) != 0)
+ err = true;
+
+ vmaj = (*AH->ReadBytePtr) (AH);
+ vmin = (*AH->ReadBytePtr) (AH);
+ vrev = (*AH->ReadBytePtr) (AH);
+
+ /* Make a convenient integer <maj><min><rev>00 */
+ fileHeader->version = ((vmaj * 256 + vmin) * 256 + vrev) * 256 + 0;
+ fileHeader->type = _ReadByte(AH);
+ if (fileHeader->type != BLK_BLOBS && fileHeader->type != BLK_DATA)
+ err = true;
+ if (!err)
+ {
+ fileHeader->idStr = ReadStr(AH);
+ if (fileHeader->idStr == NULL)
+ err = true;
+ }
+ if (!err)
+ {
+ if (strcmp(fileHeader->idStr, ctx->idStr) != 0)
+ err = true;
+ }
+ AH->compression = compression;
+
+ return err ? -1 : 0;
+ }
+
+ /*
+ * Called by archiver when dumper calls WriteData. This routine is
+ * called for both BLOB and TABLE data; it is the responsibility of
+ * the format to manage each kind of data using StartBlob/StartData.
+ *
+ * It should only be called from within a DataDumper routine.
+ *
+ * Mandatory.
+ */
+ static size_t
+ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ return WriteDataToArchive(AH, cs, _WriteBuf, data, dLen);
+ }
+
+ /*
+ * Called by the archiver when a dumper's 'DataDumper' routine has
+ * finished.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _EndData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+
+ _EndDataCompressor(AH, te);
+
+ Assert(DATA_FH_ACTIVE);
+
+ /* Close the file */
+ fclose(ctx->dataFH);
+
+ /* the file won't grow anymore. Record the size. */
+ tctx->fileSize = ctx->dataFilePos;
+
+ ctx->dataFH = NULL;
+ }
+
+ /*
+ * Print data for a given file (can be a BLOB as well)
+ */
+ static void
+ _PrintFileData(ArchiveHandle *AH, char *filename, pgoff_t expectedSize, RestoreOptions *ropt)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+ lclFileHeader fileHeader;
+
+ InitCompressorState(AH, cs, COMPRESSOR_INFLATE);
+
+ if (!filename)
+ return;
+
+ _CheckFileSize(AH, filename, expectedSize, true);
+ _CheckFileContents(AH, filename, ctx->idStr, true);
+
+ ctx->dataFH = fopen(filename, PG_BINARY_R);
+ if (!ctx->dataFH)
+ die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
+ filename, strerror(errno));
+
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ die_horribly(AH, modulename, "could not read valid file header from file \"%s\"\n",
+ filename);
+
+ Assert(DATA_FH_ACTIVE);
+
+ ReadDataFromArchive(AH, cs, _DirectoryReadFunction);
+
+ if (fclose(ctx->dataFH) != 0)
+ die_horribly(AH, modulename, "could not close data file: %s\n",
+ strerror(errno));
+ ctx->dataFH = NULL;
+ }
+
+
+ /*
+ * Print data for a given TOC entry
+ */
+ static void
+ _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (!tctx->filename)
+ return;
+
+ if (strcmp(te->desc, "BLOBS") == 0)
+ _LoadBlobs(AH, ropt);
+ else
+ {
+ char *fname = prependDirectory(AH, tctx->filename);
+ _PrintFileData(AH, fname, tctx->fileSize, ropt);
+ }
+ }
+
+ static void
+ _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt)
+ {
+ Oid oid;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclFileHeader fileHeader;
+ char *fname;
+
+ StartRestoreBlobs(AH);
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ ctx->blobsTocFH = fopen(fname, "rb");
+
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open large object TOC file \"%s\" for input: %s\n",
+ fname, strerror(errno));
+
+ ReadFileHeader(AH, &fileHeader);
+
+ /* we cannot test for feof() since EOF only shows up in the low
+ * level read functions. But they would die_horribly() anyway. */
+ while (1)
+ {
+ char *blobFname;
+ pgoff_t blobSize;
+
+ oid = ReadInt(AH);
+ /* oid == 0 is our end marker */
+ if (oid == 0)
+ break;
+ ReadOffset(AH, &blobSize);
+
+ StartRestoreBlob(AH, oid, ropt->dropSchema);
+ blobFname = prependBlobsDirectory(AH, oid);
+ _PrintFileData(AH, blobFname, blobSize, ropt);
+ EndRestoreBlob(AH, oid);
+ }
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ die_horribly(AH, modulename, "could not close large object TOC file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ ctx->blobsTocFH = NULL;
+
+ EndRestoreBlobs(AH);
+ }
+
+
+ /*
+ * Write a byte of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to do integer & byte output to the archive.
+ * These routines are only used to read & write headers & TOC.
+ *
+ */
+ static int
+ _WriteByte(ArchiveHandle *AH, const int i)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ if (fputc(i, stream) == EOF)
+ die_horribly(AH, modulename, "could not write byte\n");
+
+ *filePos += 1;
+
+ return 1;
+ }
+
+ /*
+ * Read a byte of data from the archive.
+ *
+ * Mandatory
+ *
+ * Called by the archiver to read bytes & integers from the archive.
+ * These routines are only used to read & write headers & TOC.
+ * EOF should be treated as a fatal error.
+ */
+ static int
+ _ReadByte(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ int res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = getc(stream);
+ if (res == EOF)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+
+ *filePos += 1;
+
+ return res;
+ }
+
+ /*
+ * Write a buffer of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to write a block of bytes to the TOC and by the
+ * compressor to write compressed data to the data files.
+ *
+ */
+ static size_t
+ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fwrite(buf, 1, len, stream);
+ if (res != len)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Read a block of bytes from the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to read a block of bytes from the archive
+ *
+ */
+ static size_t
+ _ReadBuf(ArchiveHandle *AH, void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fread(buf, 1, len, stream);
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Close the archive.
+ *
+ * Mandatory.
+ *
+ * When writing the archive, this is the routine that actually starts
+ * the process of saving it to files. No data should be written prior
+ * to this point, since the user could sort the TOC after creating it.
+ *
+ * If an archive is to be written, this routine must call:
+ * WriteHead to save the archive header
+ * WriteToc to save the TOC entries
+ * WriteDataChunks to save all DATA & BLOBs.
+ *
+ */
+ static void
+ _CloseArchive(ArchiveHandle *AH)
+ {
+ if (AH->mode == archModeWrite)
+ {
+ #ifdef USE_ASSERT_CHECKING
+ lclContext *ctx = (lclContext *) AH->formatData;
+ #endif
+
+ WriteDataChunks(AH);
+
+ Assert(TOC_FH_ACTIVE);
+
+ WriteHead(AH);
+ _WriteExtraHead(AH);
+ WriteToc(AH);
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n", strerror(errno));
+ }
+ AH->FH = NULL;
+ }
+
+
+
+ /*
+ * BLOB support
+ */
+
+ /*
+ * Called by the archiver when starting to save all BLOB DATA (not schema).
+ * This routine should save whatever format-specific information is needed
+ * to read the BLOBs back into memory.
+ *
+ * It is called just prior to the dumper's DataDumper routine.
+ *
+ * Optional, but strongly recommended.
+ */
+ static void
+ _StartBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+ createDirectory(ctx->directory, "blobs");
+
+ ctx->blobsTocFH = fopen(fname, "ab");
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ctx->blobsTocFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls StartBlob.
+ *
+ * Mandatory.
+ *
+ * Must save the passed OID for retrieval at restore-time.
+ */
+ static void
+ _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependBlobsDirectory(AH, oid);
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+
+ _StartDataCompressor(AH, te);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls EndBlob.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t save_filePos;
+
+ _EndDataCompressor(AH, te);
+
+ Assert(DATA_FH_ACTIVE);
+
+ save_filePos = ctx->dataFilePos;
+
+ /* Close the BLOB data file itself */
+ fclose(ctx->dataFH);
+ ctx->dataFH = NULL;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ /* register the BLOB data file to BLOBS.TOC */
+ WriteInt(AH, oid);
+ WriteOffset(AH, save_filePos, K_OFFSET_POS_NOT_SET);
+ }
+
+ /*
+ * Called by the archiver when finishing saving all BLOB DATA.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ WriteInt(AH, 0);
+
+ fclose(ctx->blobsTocFH);
+ ctx->blobsTocFH = NULL;
+
+ tctx->fileSize = ctx->blobsTocFilePos;
+ }
+
+ /*
+ * The idea for the directory check is as follows: First we make a list of every
+ * file that we find in the directory. We reject filenames that don't fit our
+ * pattern outright. So at this stage we only accept all kinds of TOC data
+ * and our data files.
+ *
+ * If a filename looks good (like nnnnn.dat), we save its dumpId to ctx->chkList.
+ *
+ * Other checks then walk through the TOC and for every file they make sure
+ * that the file is what it is pretending to be. Once it passes the checks we
+ * take out its entry in chkList, i.e. replace its dumpId by InvalidDumpId.
+ *
+ * At the end what is left in chkList must be files that are not referenced
+ * from the TOC.
+ */
+ static bool
+ _StartCheckArchive(ArchiveHandle *AH)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ DIR *dir;
+ char *dname = ctx->directory;
+ struct dirent *entry;
+ int idx = 0;
+ char *suffix;
+ bool tocSeen = false;
+
+ dir = opendir(dname);
+ if (!dir)
+ {
+ printf("Could not open directory \"%s\": %s\n", dname, strerror(errno));
+ return false;
+ }
+
+ /*
+ * Actually we are just avoiding a linked list here by getting an upper
+ * limit on the number of entries in the directory.
+ */
+ while ((entry = readdir(dir)))
+ idx++;
+
+ ctx->chkListSize = idx;
+ ctx->chkList = (DumpId *) malloc(ctx->chkListSize * sizeof(DumpId));
+
+ /* seems that Windows doesn't have a rewinddir() equivalent */
+ closedir(dir);
+ dir = opendir(dname);
+ if (!dir)
+ {
+ printf("Could not open directory \"%s\": %s\n", dname, strerror(errno));
+ return false;
+ }
+
+
+ idx = 0;
+
+ for (;;)
+ {
+ errno = 0;
+ entry = readdir(dir);
+ if (!entry && errno == 0)
+ /* end of directory entries reached */
+ break;
+ if (!entry && errno)
+ {
+ printf("Error reading directory %s: %s\n",
+ entry->d_name, strerror(errno));
+ checkOK = false;
+ break;
+ }
+
+ if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
+ continue;
+ if (strcmp(entry->d_name, "blobs") == 0 &&
+ isDirectory(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "BLOBS.TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ {
+ tocSeen = true;
+ continue;
+ }
+ /* besides the above we only expect nnnn.dat, with nnnn being our numerical dumpID */
+ if ((suffix = strstr(entry->d_name, FILE_SUFFIX)) == NULL)
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n", entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+ else
+ {
+ /* suffix now points into entry->d_name */
+ int dumpId;
+ int scBytes, scItems;
+
+ /* check if FILE_SUFFIX is really a suffix instead of just a
+ * substring. */
+ if (strlen(suffix) != strlen(FILE_SUFFIX))
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* cut off the suffix, now entry->d_name contains the null terminated dumpId,
+ * and we parse it back. */
+ *suffix = '\0';
+ scItems = sscanf(entry->d_name, "%d%n", &dumpId, &scBytes);
+ if (scItems != 1 || scBytes != strlen(entry->d_name))
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* Still here so this entry is good. Add the dumpId to our list. */
+ ctx->chkList[idx++] = (DumpId) dumpId;
+ }
+ }
+ closedir(dir);
+
+ /* we probably counted a few entries too many; just ignore them. */
+ while (idx < ctx->chkListSize)
+ ctx->chkList[idx++] = InvalidDumpId;
+
+ /* also return false if we haven't seen the TOC file */
+ return checkOK && tocSeen;
+ }
+
+ static bool
+ _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize, bool terminateOnError)
+ {
+ bool checkOK = true;
+ FILE *f;
+ unsigned long size = (unsigned long) pgSize;
+ struct stat st;
+
+ /*
+ * If terminateOnError is true, then we don't expect this to fail, and if
+ * it does, we need to terminate. On the other hand, if it is false we are
+ * in checking mode: carry on and present a report of all findings at the
+ * end. Accordingly, write to either stderr or stdout.
+ */
+ if (terminateOnError)
+ f = stderr;
+ else
+ f = stdout;
+
+ if (!fname || fname[0] == '\0')
+ {
+ fprintf(f, "Invalid (empty) filename\n");
+ checkOK = false;
+ }
+ else if (stat(fname, &st) != 0)
+ {
+ fprintf(f, "File not found: \"%s\"\n", fname);
+ checkOK = false;
+ }
+ else if (st.st_size != (off_t) pgSize)
+ {
+ fprintf(f, "Size mismatch for file \"%s\" (expected: %lu bytes, actual: %lu bytes)\n",
+ fname, size, (unsigned long) st.st_size);
+ checkOK = false;
+ }
+
+ if (!checkOK && terminateOnError)
+ {
+ if (AH->connection)
+ PQfinish(AH->connection);
+
+ exit(1);
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckFileContents(ArchiveHandle *AH, const char *fname, const char* idStr, bool terminateOnError)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ FILE *file;
+ FILE *f;
+ lclFileHeader fileHeader;
+
+ Assert(ctx->dataFH == NULL);
+
+ if (terminateOnError)
+ f = stderr;
+ else
+ f = stdout;
+
+ if (!fname || fname[0] == '\0')
+ {
+ fprintf(f, "Invalid (empty) filename\n");
+ return false;
+ }
+
+ if (!(file = fopen(fname, PG_BINARY_R)))
+ {
+ fprintf(f, "Could not open file \"%s\": %s\n", fname, strerror(errno));
+ return false;
+ }
+
+ ctx->dataFH = file;
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ {
+ fprintf(f, "Could not read valid file header from file \"%s\"\n", fname);
+ checkOK = false;
+ }
+ else if (strcmp(fileHeader.idStr, idStr) != 0)
+ {
+ fprintf(f, "File \"%s\" belongs to different backup (expected id: %s, actual id: %s)\n",
+ fname, idStr, fileHeader.idStr);
+ checkOK = false;
+ }
+
+ if (file)
+ fclose(file);
+
+ ctx->dataFH = NULL;
+
+ if (!checkOK && terminateOnError)
+ {
+ if (AH->connection)
+ PQfinish(AH->connection);
+ exit(1);
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname = prependBlobsDirectory(AH, oid);
+ bool checkOK = true;
+
+ if (!_CheckFileSize(AH, fname, size, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+ bool checkOK = true;
+ lclFileHeader fileHeader;
+ pgoff_t size;
+ Oid oid;
+
+ /* check the BLOBS.TOC first */
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ if (!fname)
+ {
+ printf("Could not find BLOBS.TOC. Check the archive!\n");
+ return false;
+ }
+
+ if (!_CheckFileSize(AH, fname, tctx->fileSize, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+
+ /* now check every single BLOB object */
+ ctx->blobsTocFH = fopen(fname, "rb");
+ if (ctx->blobsTocFH == NULL)
+ {
+ printf("could not open large object TOC for input: %s\n",
+ strerror(errno));
+ return false;
+ }
+ ReadFileHeader(AH, &fileHeader);
+
+ /* we cannot test for feof() since EOF only shows up in the low
+ * level read functions. But they would die_horribly() anyway. */
+ while ((oid = ReadInt(AH)))
+ {
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ReadOffset(AH, &size);
+
+ if (!_CheckBlob(AH, oid, size))
+ checkOK = false;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+ }
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ {
+ printf("could not close large object TOC file: %s\n",
+ strerror(errno));
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+
+ static bool
+ _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ int idx;
+ bool checkOK = true;
+
+ /* take out files from chkList as we see them */
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] == te->dumpId && te->section == SECTION_DATA)
+ {
+ ctx->chkList[idx] = InvalidDumpId;
+ break;
+ }
+ }
+
+ /* see comment in _tocEntryRequired() for the special case of SEQUENCE SET */
+ if (reqs & REQ_DATA && strcmp(te->desc, "BLOBS") == 0)
+ {
+ if (!_CheckBlobs(AH, te, reqs))
+ checkOK = false;
+ }
+ else if (reqs & REQ_DATA && strcmp(te->desc, "SEQUENCE SET") != 0
+ && strcmp(te->desc, "BLOB") != 0
+ && strcmp(te->desc, "COMMENT") != 0)
+ {
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+ if (!fname)
+ {
+ printf("Could not find file %s\n", tctx->filename);
+ checkOK = false;
+ }
+ else if (!_CheckFileSize(AH, fname, tctx->fileSize, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _EndCheckArchive(ArchiveHandle *AH)
+ {
+ /* check left over files */
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int idx;
+ bool checkOK = true;
+
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] != InvalidDumpId)
+ {
+ printf("Unexpected file: %d"FILE_SUFFIX"\n", ctx->chkList[idx]);
+ checkOK = false;
+ }
+ }
+
+ return checkOK;
+ }
+
+
+ static void
+ createDirectory(const char *dir, const char *subdir)
+ {
+ struct stat st;
+ char dirname[MAXPGPATH];
+
+ /* The directory must not exist yet; check first whether it does. */
+ if (subdir && strlen(dir) + 1 + strlen(subdir) + 1 > MAXPGPATH)
+ die_horribly(NULL, modulename, "directory name %s too long", dir);
+
+ strcpy(dirname, dir);
+
+ if (subdir)
+ {
+ strcat(dirname, "/");
+ strcat(dirname, subdir);
+ }
+
+ if (stat(dirname, &st) == 0)
+ {
+ if (S_ISDIR(st.st_mode))
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, it exists already\n", dirname);
+ else
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, a file with this name exists already\n", dirname);
+ }
+
+ /*
+ * Now we create the directory. Note that due to a race condition, the
+ * directory could still have been created between our stat() and
+ * mkdir() calls.
+ */
+ if (mkdir(dirname, 0700) < 0)
+ die_horribly(NULL, modulename, "Could not create directory %s: %s",
+ dirname, strerror(errno));
+ }
+
+
+ static char *
+ prependDirectory(ArchiveHandle *AH, const char *relativeFilename)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ static char buf[MAXPGPATH];
+ char *dname;
+
+ dname = ctx->directory;
+
+ if (strlen(dname) + 1 + strlen(relativeFilename) + 1 > MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ strcpy(buf, dname);
+ strcat(buf, "/");
+ strcat(buf, relativeFilename);
+
+ return buf;
+ }
+
+ static char *
+ prependBlobsDirectory(ArchiveHandle *AH, Oid oid)
+ {
+ static char buf[MAXPGPATH];
+ char *dname;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int r;
+
+ dname = ctx->directory;
+
+ r = snprintf(buf, MAXPGPATH, "%s/blobs/%u%s",
+ dname, oid, FILE_SUFFIX);
+
+ if (r < 0 || r >= MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ return buf;
+ }
+
+ static void
+ _StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
+ }
+
+
+ static void
+ _EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ FlushCompressorState(AH, cs, _WriteBuf);
+ }
+
+ static size_t
+ _DirectoryReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ Assert(cs->comprInSize >= comprInInitSize);
+
+ if (sizeHint == 0)
+ sizeHint = comprInInitSize;
+
+ *buf = cs->comprIn;
+ return _ReadBuf(AH, cs->comprIn, sizeHint);
+ }
+
+ static void
+ _WriteExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ WriteStr(AH, ctx->idStr);
+ }
+
+ static void
+ _ReadExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *str = ReadStr(AH);
+
+ if (strlen(str) != 32)
+ die_horribly(AH, modulename, "Invalid ID of the backup set (corrupted TOC file?)\n");
+
+ strcpy(ctx->idStr, str);
+ }
+
+ static char *
+ getRandomData(char *s, int len)
+ {
+ int i;
+
+ #ifdef USE_SSL
+ if (RAND_bytes((unsigned char *)s, len) != 1)
+ #endif
+ for (i = 0; i < len; i++)
+ /* Use a lower strength random number if OpenSSL is not available */
+ s[i] = random() % 255;
+
+ return s;
+ }
+
+ static bool
+ isDirectory(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISDIR(st.st_mode);
+ }
+
+ static bool
+ isRegularFile(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISREG(st.st_mode);
+ }
+
diff --git a/src/bin/pg_dump/pg_backup_files.c b/src/bin/pg_dump/pg_backup_files.c
index abc93b1..825c473 100644
*** a/src/bin/pg_dump/pg_backup_files.c
--- b/src/bin/pg_dump/pg_backup_files.c
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 92,97 ****
--- 92,98 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 100,105 ****
--- 101,110 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index 006f7da..dcc13ee 100644
*** a/src/bin/pg_dump/pg_backup_tar.c
--- b/src/bin/pg_dump/pg_backup_tar.c
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 144,149 ****
--- 144,150 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 152,157 ****
--- 153,162 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 55ea684..39e68d9 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***************
*** 56,61 ****
--- 56,62 ----
#include "pg_backup_archiver.h"
#include "dumputils.h"
+ #include "compress_io.h"
extern char *optarg;
extern int optind,
*************** static int no_security_label = 0;
*** 137,142 ****
--- 138,144 ----
static void help(const char *progname);
+ static ArchiveFormat parseArchiveFormat(const char *format);
static void expand_schema_name_patterns(SimpleStringList *patterns,
SimpleOidList *oids);
static void expand_table_name_patterns(SimpleStringList *patterns,
*************** main(int argc, char **argv)
*** 255,261 ****
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = -1;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
--- 257,263 ----
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = COMPRESSION_UNKNOWN;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
*************** main(int argc, char **argv)
*** 266,275 ****
--- 268,279 ----
int my_version;
int optindex;
RestoreOptions *ropt;
+ ArchiveFormat archiveFormat = archUnknown;
static int disable_triggers = 0;
static int outputNoTablespaces = 0;
static int use_setsessauth = 0;
+ static int compressLZF = 0;
static struct option long_options[] = {
{"data-only", no_argument, NULL, 'a'},
*************** main(int argc, char **argv)
*** 311,316 ****
--- 315,321 ----
{"disable-triggers", no_argument, &disable_triggers, 1},
{"inserts", no_argument, &dump_inserts, 1},
{"lock-wait-timeout", required_argument, NULL, 2},
+ {"compress-lzf", no_argument, &compressLZF, 1},
{"no-tablespaces", no_argument, &outputNoTablespaces, 1},
{"quote-all-identifiers", no_argument, "e_all_identifiers, 1},
{"role", required_argument, NULL, 3},
*************** main(int argc, char **argv)
*** 535,568 ****
exit(1);
}
! /* open the output file */
! if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
! {
! /* This is used by pg_dumpall, and is not documented */
plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! }
! else if (pg_strcasecmp(format, "c") == 0 || pg_strcasecmp(format, "custom") == 0)
! g_fout = CreateArchive(filename, archCustom, compressLevel, archModeWrite);
! else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
{
! /*
! * Dump files into the current directory; for demonstration only, not
! * documented.
! */
! g_fout = CreateArchive(filename, archFiles, compressLevel, archModeWrite);
}
! else if (pg_strcasecmp(format, "p") == 0 || pg_strcasecmp(format, "plain") == 0)
{
! plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
}
! else if (pg_strcasecmp(format, "t") == 0 || pg_strcasecmp(format, "tar") == 0)
! g_fout = CreateArchive(filename, archTar, compressLevel, archModeWrite);
! else
{
! write_msg(NULL, "invalid output format \"%s\" specified\n", format);
! exit(1);
}
if (g_fout == NULL)
--- 540,615 ----
exit(1);
}
! archiveFormat = parseArchiveFormat(format);
!
! /* archiveFormat specific setup */
! if (archiveFormat == archNull || archiveFormat == archNullAppend)
plainText = 1;
!
! if (compressLZF)
{
! if (archiveFormat != archCustom && archiveFormat != archDirectory)
! {
! write_msg(NULL, "LZF compression is currently only supported for the custom "
! " or directory format\n");
! exit(1);
! }
! else
! compressLevel = COMPR_LZF_CODE;
}
!
! /*
! * If AH->compression == UNKNOWN_COMPRESSION then it has not been set to some
! * value explicitly.
! *
! * Fall back to default:
! *
! * zlib with Z_DEFAULT_COMPRESSION for those formats that support it.
! * If either one is not available: use no compression at all.
! */
!
! if (compressLevel == COMPRESSION_UNKNOWN)
{
! #ifdef HAVE_LIBZ
! if (archiveFormat == archCustom || archiveFormat == archDirectory)
! compressLevel = Z_DEFAULT_COMPRESSION;
! else
! compressLevel = 0;
! #else
! compressLevel = 0;
! #endif
}
!
! /* open the output file */
! switch(archiveFormat)
{
! case archCustom:
! g_fout = CreateArchive(filename, archCustom, compressLevel,
! archModeWrite);
! break;
! case archDirectory:
! g_fout = CreateArchive(filename, archDirectory, compressLevel,
! archModeWrite);
! break;
! case archFiles:
! g_fout = CreateArchive(filename, archFiles, compressLevel,
! archModeWrite);
! break;
! case archNull:
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
! break;
! case archNullAppend:
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! break;
! case archTar:
! g_fout = CreateArchive(filename, archTar, compressLevel,
! archModeWrite);
! break;
!
! default:
! /* we never reach here, because we check in parseArchiveFormat()
! * already. */
! break;
}
if (g_fout == NULL)
*************** main(int argc, char **argv)
*** 671,677 ****
*/
do_sql_command(g_conn, "BEGIN");
! do_sql_command(g_conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
/* Select the appropriate subquery to convert user IDs to names */
if (g_fout->remoteVersion >= 80100)
--- 718,724 ----
*/
do_sql_command(g_conn, "BEGIN");
! do_sql_command(g_conn, "SET TRANSACTION READ ONLY ISOLATION LEVEL SERIALIZABLE");
/* Select the appropriate subquery to convert user IDs to names */
if (g_fout->remoteVersion >= 80100)
*************** help(const char *progname)
*** 832,840 ****
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t|p output file format (custom, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level for compressed formats\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 879,888 ----
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level of libz for compressed formats\n"));
! printf(_(" --compress-lzf use liblzf compression instead of zlib\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
*************** exit_nicely(void)
*** 889,894 ****
--- 937,980 ----
exit(1);
}
+ static ArchiveFormat
+ parseArchiveFormat(const char *format)
+ {
+ ArchiveFormat archiveFormat;
+
+ if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
+ /* This is used by pg_dumpall, and is not documented */
+ archiveFormat = archNullAppend;
+ else if (pg_strcasecmp(format, "c") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
+ /*
+ * Dump files into the current directory; for demonstration only, not
+ * documented.
+ */
+ archiveFormat = archFiles;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archiveFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archiveFormat = archTar;
+ else
+ {
+ write_msg(NULL, "invalid output format \"%s\" specified\n", format);
+ exit(1);
+ }
+ return archiveFormat;
+ }
+
/*
* Find the OIDs of all schemas matching the given list of patterns,
* and append them to the given OID list.
*************** dumpBlobs(Archive *AH, void *arg)
*** 2174,2180 ****
exit_nicely();
}
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
--- 2260,2267 ----
exit_nicely();
}
! if (cnt > 0)
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7885535..0f643b9 100644
*** a/src/bin/pg_dump/pg_dump.h
--- b/src/bin/pg_dump/pg_dump.h
*************** typedef struct
*** 39,44 ****
--- 39,45 ----
} CatalogId;
typedef int DumpId;
+ #define InvalidDumpId (-1)
/*
* Data structures for simple lists of OIDs and strings. The support for
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 1ddba72..3fbe264 100644
*** a/src/bin/pg_dump/pg_restore.c
--- b/src/bin/pg_dump/pg_restore.c
*************** main(int argc, char **argv)
*** 79,84 ****
--- 79,85 ----
static int skip_seclabel = 0;
struct option cmdopts[] = {
+ {"check", 0, NULL, 'k'},
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
*************** main(int argc, char **argv)
*** 144,150 ****
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:lL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
--- 145,151 ----
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:klL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
*************** main(int argc, char **argv)
*** 182,188 ****
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
!
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
--- 183,191 ----
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
! case 'k': /* check the archive */
! opts->checkArchive = 1;
! break;
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
*************** main(int argc, char **argv)
*** 352,357 ****
--- 355,365 ----
opts->format = archCustom;
break;
+ case 'd':
+ case 'D':
+ opts->format = archDirectory;
+ break;
+
case 'f':
case 'F':
opts->format = archFiles;
*************** main(int argc, char **argv)
*** 363,369 ****
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\" or \"t\"\n",
opts->formatName);
exit(1);
}
--- 371,377 ----
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\", \"d\" or \"t\"\n",
opts->formatName);
exit(1);
}
*************** main(int argc, char **argv)
*** 392,397 ****
--- 400,413 ----
if (opts->tocSummary)
PrintTOCSummary(AH, opts);
+ else if (opts->checkArchive)
+ {
+ bool checkOK;
+ checkOK = CheckArchive(AH, opts);
+ CloseArchive(AH);
+ if (!checkOK)
+ exit(1);
+ }
else
RestoreArchive(AH, opts);
*************** usage(const char *progname)
*** 418,425 ****
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 434,442 ----
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
+ printf(_(" -k check the directory archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
diff --git a/src/bin/pg_dump/test.sh b/src/bin/pg_dump/test.sh
index ...23547fa .
*** a/src/bin/pg_dump/test.sh
--- b/src/bin/pg_dump/test.sh
***************
*** 0 ****
--- 1,68 ----
+ #!/bin/sh -x
+
+
+ # lzf compression
+ rm -rf out.dir
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ #./pg_dump --column-inserts --compress-lzf -Fd -f out.dir regression || exit 1
+ ./pg_dump --compress-lzf -Fd -f out.dir regression || exit 1
+ ./pg_restore out.dir -d foodb && ./pg_restore -k out.dir || exit 1
+
+ # zlib compression
+ rm -rf out.dir
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ ./pg_dump --compress=4 -Fd -f out.dir regression || exit 1
+ ./pg_restore out.dir -d foodb || exit 1
+ ./pg_restore -k out.dir || exit 1
+
+ rm out.custom
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ #./pg_dump --inserts --compress=8 -Fc -f out.custom regression || exit 1
+ ./pg_dump --compress=8 -Fc -f out.custom regression || exit 1
+ ./pg_restore out.custom -d foodb || exit 1
+
+ # no compression
+ rm -rf out.dir
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ ./pg_dump --disable-dollar-quoting --compress=0 -Fd -f out.dir regression || exit 1
+ ./pg_restore out.dir -d foodb || exit 1
+ ./pg_restore -k out.dir || exit 1
+
+ rm out.custom
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ ./pg_dump --quote-all-identifiers --compress=0 -Fc -f out.custom regression || exit 1
+ ./pg_restore out.custom -d foodb || exit 1
+
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ pg_dump -Ft regression | pg_restore -d foodb || exit 1
+
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ pg_dump regression | psql foodb || exit 1
+
+ # restore 9.0 archives
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ ./pg_restore out.cust.none.90 -d foodb || exit 1
+
+ dropdb foodb
+ createdb --template=template0 foodb --lc-ctype=C
+ psql foodb -c "alter database foodb set lc_monetary to 'C'"
+ ./pg_restore out.cust.z.90 -d foodb || exit 1
+
+
+ echo Success
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index fd169b6..b79d765 100644
*** a/src/include/pg_config.h.in
--- b/src/include/pg_config.h.in
***************
*** 323,328 ****
--- 323,331 ----
/* Define to 1 if you have the `z' library (-lz). */
#undef HAVE_LIBZ
+ /* Define to 1 if you have the `lzf' library (-llzf). */
+ #undef HAVE_LIBLZF
+
/* Define to 1 if constants of type 'long long int' should have the suffix LL.
*/
#undef HAVE_LL_CONSTANTS
Hi,
Sharing some thoughts after a first round of reviewing, where I only had
time to read the patch itself.
Joachim Wieland <joe@mcknight.de> writes:
Since the compression is currently all down in the custom format
backup code,
the first thing I've done was refactoring the compression functions
into a
separate file. While at it, I have added support for liblzf
compression.
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
And it could be about personal preferences, but the way you added the
liblzf support strikes me as odd, with all those #ifdefs everywhere. Is
it possible to have a specific file for each supported compression
format, then some routing code in src/bin/pg_dump/compress_io.c?
The routing code already exists but then the file is full of #ifdef
sections to define the right supporting function, when I think having
compress_io_zlib and compress_io_lzf files would be better.
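To make that concrete, here is a minimal sketch of the kind of routing I
have in mind. This is hypothetical code: CompressorOps and the
per-algorithm ops variables are invented for illustration; only
ArchiveHandle, CompressorState and COMPR_LZF_CODE come from your patch.

typedef struct CompressorOps
{
	void	(*init) (ArchiveHandle *AH, CompressorState *cs, int mode);
	size_t	(*write) (ArchiveHandle *AH, CompressorState *cs,
					  const void *data, size_t dLen);
	void	(*flush) (ArchiveHandle *AH, CompressorState *cs);
	void	(*free) (CompressorState *cs);
} CompressorOps;

extern const CompressorOps zlib_ops;	/* in compress_io_zlib.c */
extern const CompressorOps lzf_ops;		/* in compress_io_lzf.c */
extern const CompressorOps none_ops;	/* uncompressed fallback */

/*
 * Pick the implementation once, at AllocateCompressorState() time. This
 * would be the only place left with compression #ifdefs; every call site
 * then simply goes through the selected ops.
 */
static const CompressorOps *
LookupCompressorOps(int compression)
{
#ifdef HAVE_LIBLZF
	if (compression == COMPR_LZF_CODE)
		return &lzf_ops;
#endif
#ifdef HAVE_LIBZ
	if (compression > 0)
		return &zlib_ops;
#endif
	return &none_ops;
}

That way each algorithm lives in its own file, and adding or removing one
doesn't touch the others.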
Then there's the bulk of the new dump format feature in the other part
of the patch, namely src/bin/pg_dump/pg_backup_directory.c. You have to
update the copyright in the file header there, at least :)
I have yet to devote more time to this part of the patch, but it seems
like it's rewriting the full support without using the existing bits.
That's something I have to check; I didn't have time to read the
existing code for the other archive formats there.
I'm hesitant as far as marking the patch "Waiting on author" to get it
split. Joachim, what do you think?
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Hi Dimitri and Joachim.
I've looked at the patch too, and I want to share some thoughts. I've
used http://wiki.postgresql.org/wiki/Reviewing_a_Patch to guide my
review.
Submission review:
I've applied and compiled the patch successfully using the current master.
Usability review:
The dir format generated 60 files of different sizes in my database,
and it looks very confusing. Is it possible to use the same
trick as pigz and pbzip2, creating a concatenated file of streams?
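To illustrate what I mean: with gzip, independently compressed members
can simply be concatenated and still be read back as one stream. A rough,
hypothetical reader loop using zlib (inflate_concatenated is made up;
error handling omitted -- real code must check every return value):

#include <stdio.h>
#include <zlib.h>

static void
inflate_concatenated(FILE *in, FILE *out)
{
	z_stream	strm = {0};
	unsigned char inbuf[16384], outbuf[16384];
	int			ret;

	inflateInit2(&strm, 15 + 16);		/* 15 + 16: expect a gzip wrapper */
	while ((strm.avail_in = fread(inbuf, 1, sizeof(inbuf), in)) > 0)
	{
		strm.next_in = inbuf;
		do
		{
			strm.next_out = outbuf;
			strm.avail_out = sizeof(outbuf);
			ret = inflate(&strm, Z_NO_FLUSH);
			fwrite(outbuf, 1, sizeof(outbuf) - strm.avail_out, out);
			if (ret == Z_STREAM_END)
				inflateReset(&strm);	/* one member done, next may follow */
		} while (strm.avail_in > 0);
	}
	inflateEnd(&strm);
}

Each writer could then append its own member to the same file.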
Feature test:
Just a partial review. I can dump / restore using lzf, but didn't
stress it hard to check robustness.
Performance review:
Didn't test it hard either, but it looks ok.
Coding review:
Just a shallow review here.
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
Same feeling here, this is the 1st thing that I notice.
The md5.c and kwlookup.c reuse via a symlink doesn't look nice either.
This way you need to compile twice, among other things, but I think
that it's temporary, right?
--
José Arthur Benetasso Villanova
Excerpts from José Arthur Benetasso Villanova's message of vie nov 19 18:28:03 -0300 2010:
The md5.c and kwlookup.c reuse via a symlink doesn't look nice either.
This way you need to compile twice, among other things, but I think
that it's temporary, right?
Not sure what you mean here, but kwlookup.c is a symlink without this
patch too. It's just the way it works; the compilation environments
here and in the backend are different, so there is no other option but
to compile twice. I guess md5.c is a new one (I didn't check), but I
would assume it's the same thing.
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Hi Dimitri,
thanks for reviewing my patch!
On Fri, Nov 19, 2010 at 2:44 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
I guess it wouldn't be a very big deal but I also doubt that it makes
the review that much easier. Basically the compression refactor patch
would just touch pg_backup_custom.c (because this is the place where
the libz compression is currently buried) and the two new
compress_io.(c|h) files. Everything else is pretty much the directory
stuff and is on top of these changes.
And it could be about personal preferences, but the way you added the
liblzf support strikes me at odd, with all those #ifdefs everywhere. Is
it possible to have a specific file for each supported compression
format, then some routing code in src/bin/pg_dump/compress_io.c?
Sure we could. But I wanted to hold off on any fancy function pointer
stuff until we have decided if we want to include the liblzf support
at all. The #ifdefs might be a bit ugly but in case we do not include
liblzf support, it's the easiest way to take it out again. As written
in my introduction, this patch is not really about liblzf, liblzf is
just a proof of concept for factoring out the compression part and I
have included it, so that people can use it and see how much speed
improvement they get.
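To illustrate how cheap it is to take out again: the per-chunk
compression boils down to the following pattern. This is a simplified
sketch, not the literal patch code -- CompressOneChunk and the
uncompressed fallback are made up here; lzf_compress() and compress2()
are the real liblzf and zlib entry points.

static size_t
CompressOneChunk(char *dst, size_t dstLen,
				 const char *src, size_t srcLen, int compression)
{
#ifdef HAVE_LIBLZF
	if (compression == COMPR_LZF_CODE)
		/* lzf_compress() returns 0 if the output buffer is too small */
		return lzf_compress(src, srcLen, dst, dstLen);
#endif
#ifdef HAVE_LIBZ
	if (compression > 0)
	{
		uLongf		outLen = dstLen;

		if (compress2((Bytef *) dst, &outLen,
					  (const Bytef *) src, srcLen, compression) != Z_OK)
			return 0;			/* caller treats 0 as failure */
		return (size_t) outLen;
	}
#endif
	memcpy(dst, src, srcLen);	/* no compression: store data verbatim */
	return srcLen;
}

Deleting the #ifdef HAVE_LIBLZF block (plus the configure check) removes
lzf completely without touching the zlib path.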
The routing code already exists but then the file is full of #ifdef
sections to define the right supporting function when I think having a
compress_io_zlib and a compress_io_lzf files would be better.
Sure! I completely agree...
Then there's the bulk of the new dump format feature in the other part
of the patch, namely src/bin/pg_dump/pg_backup_directory.c. You have to
update the copyright in the file header there, at least :)
Well, not sure if we can just change the copyright notice, because in
the end the structure was copied from one of the other files which all
have the copyright notice in them, so my work is based on those other
files...
I'm hesitant as far as marking the patch "Waiting on author" to get it
split. Joachim, what do you think?
I will see if I can split it.
Joachim
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
That part of the patch is likely to get rejected outright anyway,
so I *strongly* recommend splitting it out. We have generally resisted
adding random compression algorithms to pg_dump because of license and
patent considerations, and I see no reason to suppose this one is going
to pass muster.
regards, tom lane
On Fri, Nov 19, 2010 at 11:53 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
That part of the patch is likely to get rejected outright anyway,
so I *strongly* recommend splitting it out. We have generally resisted
adding random compression algorithms to pg_dump because of license and
patent considerations, and I see no reason to suppose this one is going
to pass muster.
I was already anticipating that possibility and my initial patch description
is along these lines.
However, liblzf is BSD-licensed, so on the license side we should be fine.
Regarding patents, your last comment was that you'd like to see if it's
really worth it and so I have included support for lzf for anybody to go
ahead and find that out.
Will send an updated, split-up patch this weekend (which would actually be
four patches already...).
Joachim
Hi Jose,
2010/11/19 José Arthur Benetasso Villanova <jose.arthur@gmail.com>:
The dir format generated in my database 60 files, with different
sizes, and it looks very confusing. Is it possible to use the same
trick as pigz and pbzip2, creating a concatenated file of streams?
What pigz parallelizes is the actual computation of the compressed
data. The directory archive format, however, is a preparation for a
parallel pg_dump, dumping several tables (especially large tables, of
course) in parallel via multiple database connections and multiple
pg_dump frontends. The idea of multiplexing their output into one file
has been rejected on the grounds that it would probably slow down the
whole process.
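To picture the layout: a directory dump contains the global TOC, a TOC
for large objects if there are any, and one data file per table, named
after the TOC entry's dump ID. The directory name and the numbers here
are made up:

backup.dir/
    TOC          global table of contents (always uncompressed)
    BLOBS.TOC    table of contents for large objects
    3104.dat     one data file per table, named after its dump ID
    3105.dat
    ...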
Nevertheless pigz could be implemented as an alternative compression
algorithm, and that way both the custom and the directory archive
formats could use it; but here as well, license and patent questions
might be in the way, even though it is based on libz.
The md5.c and kwlookup.c reuse via a symlink doesn't look nice either.
This way you need to compile twice, among other things, but I think
that it's temporary, right?
No, it isn't. md5.c is used in the same way by e.g. libpq, and there
are other examples of symlinks in core; check out src/bin/psql, for
example.
Joachim
On Fri, Nov 19, 2010 at 2:44 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
I think I'd like to see a separate patch for the new compression
support. Sorry about that, I realize that's extra work…
Attached are two patches building on top of each other. The first one
factors out the I/O routines (esp. libz) of pg_backup_custom.c into a
new file compress_io.c. This patch is without liblzf support now.
The second patch on top implements the new archive format of a directory.
Regarding the parallel part, I have been playing with Windows support
this weekend but I am still facing some issues (if anybody wants to
help who knows more about Windows programming than me, just let me
know). I will send the parallel patch and the liblzf part as two other
separate patches in the next few days.
Joachim
Attachments:
pg_dump-compression-refactor.diff (text/x-patch; charset=US-ASCII)
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 0367466..efb031a 100644
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
*************** override CPPFLAGS := -I$(libpq_srcdir) $
*** 20,26 ****
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
--- 20,26 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index ...c31f3a9 .
*** a/src/bin/pg_dump/compress_io.c
--- b/src/bin/pg_dump/compress_io.c
***************
*** 0 ****
--- 1,381 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.c
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "compress_io.h"
+
+ static const char *modulename = gettext_noop("compress_io");
+
+ #ifdef HAVE_LIBZ
+ static void _DoInflate(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ static void _DoDeflate(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF);
+ static void _DoInflateZlib(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ static void _DoDeflateZlib(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF);
+ #endif
+
+ /*
+ * If a compression library is in use, then start it up. This is called from
+ * StartData & StartBlob. The buffers are set up in the Init routine.
+ */
+ void
+ InitCompressorState(ArchiveHandle *AH, CompressorState *cs, CompressorAction action)
+ {
+ if (AH->compression == 0 && cs->comprAlg != COMPR_ALG_NONE)
+ AH->compression = -1;
+
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ if (cs->comprAlg == COMPR_ALG_LIBZ)
+ {
+ #ifdef HAVE_LIBZ
+ z_streamp zp = cs->zp;
+
+ if (AH->compression < 0 || AH->compression > 9)
+ AH->compression = Z_DEFAULT_COMPRESSION;
+
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ if (action == COMPRESSOR_DEFLATE)
+ if (deflateInit(zp, AH->compression) != Z_OK)
+ die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
+ if (action == COMPRESSOR_INFLATE)
+ if (inflateInit(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
+
+ /* Just be paranoid - maybe End is called after Start, with no Write */
+ zp->next_out = (void *) cs->comprOut;
+ zp->avail_out = comprOutInitSize;
+ #endif
+ }
+
+ /* Nothing to be done for COMPR_ALG_NONE */
+ }
+
+ /*
+ * Terminate compression library context and flush its buffers. If no compression
+ * library is in use then just return.
+ */
+ void
+ FlushCompressorState(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ #ifdef HAVE_LIBZ
+ if (cs->comprAlg == COMPR_ALG_LIBZ)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+
+ _DoDeflate(AH, cs, Z_FINISH, writeF);
+
+ if (deflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
+ }
+ #endif
+ /* Nothing to be done for COMPR_ALG_NONE */
+ }
+
+ #ifdef HAVE_LIBZ
+ void
+ _DoDeflate(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF)
+ {
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ _DoDeflateZlib(AH, cs, flush, writeF);
+ break;
+ case COMPR_ALG_NONE:
+ Assert(false);
+ break;
+ }
+ }
+
+ /*
+ * Send compressed data to the output stream (via writeF).
+ */
+ void
+ _DoDeflateZlib(ArchiveHandle *AH, CompressorState *cs, int flush, WriteFunc writeF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+
+ Assert(AH->compression != 0);
+
+ while (cs->zp->avail_in != 0 || flush)
+ {
+ res = deflate(zp, flush);
+ if (res == Z_STREAM_ERROR)
+ die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
+ if (((flush == Z_FINISH) && (zp->avail_out < comprOutInitSize))
+ || (zp->avail_out == 0)
+ || (zp->avail_in != 0)
+ )
+ {
+ /*
+ * Extra paranoia: avoid zero-length chunks, since a zero length
+ * chunk is the EOF marker in the custom format. This should never
+ * happen but...
+ */
+ if (zp->avail_out < comprOutInitSize)
+ {
+ /*
+ * Any write function should do its own error checking but
+ * to make sure we do a check here as well...
+ */
+ size_t len = comprOutInitSize - zp->avail_out;
+ if (writeF(AH, out, len) != len)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+ }
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ if (res == Z_STREAM_END)
+ break;
+ }
+ }
+
+ static void
+ _DoInflate(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ _DoInflateZlib(AH, cs, readF);
+ break;
+ case COMPR_ALG_NONE:
+ Assert(false);
+ break;
+ }
+ }
+
+ /*
+ * This function is void as it either returns successfully or fails via
+ * die_horribly().
+ */
+ static void
+ _DoInflateZlib(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+ size_t cnt;
+ void *in;
+
+ Assert(AH->compression != 0);
+
+ /* no minimal chunk size for zlib */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ zp->next_in = (void *) in;
+ zp->avail_in = cnt;
+
+ while (zp->avail_in > 0)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+ }
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+ while (res != Z_STREAM_END)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+
+ if (inflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
+ }
+ #endif
+
+ void
+ ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ _DoInflate(AH, cs, readF);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ {
+ size_t cnt;
+ void *in;
+
+ /* no minimal chunk size for uncompressed data */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ ahwrite(in, 1, cnt, AH);
+ }
+ }
+ }
+ }
+
+ size_t
+ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen)
+ {
+ Assert(AH->compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ cs->zp->next_in = (void *) data;
+ cs->zp->avail_in = dLen;
+ _DoDeflate(AH, cs, Z_NO_FLUSH, writeF);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ /*
+ * Any write function should do its own error checking but to make sure
+ * we do a check here as well...
+ */
+ if (writeF(AH, data, dLen) != dLen)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+ }
+ /* we have either succeeded in writing dLen bytes or we have called die_horribly() */
+ return dLen;
+ }
+
+ CompressorState *
+ AllocateCompressorState(ArchiveHandle *AH)
+ {
+ CompressorAlgorithm alg = COMPR_ALG_NONE;
+ CompressorState *cs;
+
+ /*
+ * AH->compression is set either on the commandline when creating an archive
+ * or by ReadHead() when restoring an archive.
+ */
+
+ if (AH->compression == Z_DEFAULT_COMPRESSION ||
+ (AH->compression > 0 && AH->compression <= 9))
+ alg = COMPR_ALG_LIBZ;
+ else if (AH->compression == COMPRESSION_NONE)
+ alg = COMPR_ALG_NONE;
+ else
+ die_horribly(AH, modulename, "Invalid compression code: %d\n",
+ AH->compression);
+
+ #ifndef HAVE_LIBZ
+ /* if no compression was specified and we are not built with libz support
+ * anyway, fall back to no compression */
+ if (AH->compression == Z_DEFAULT_COMPRESSION)
+ {
+ AH->compression = COMPRESSION_NONE;
+ alg = COMPR_ALG_NONE;
+ }
+ else if (alg == COMPR_ALG_LIBZ)
+ die_horribly(AH, modulename, "not built with zlib support\n");
+ #endif
+
+ cs = (CompressorState *) malloc(sizeof(CompressorState));
+ if (cs == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ cs->comprAlg = alg;
+
+ switch(alg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ cs->zp = (z_streamp) malloc(sizeof(z_stream));
+ if (cs->zp == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ /*
+ * comprOutInitSize is the buffer size we tell zlib it can output
+ * to. We actually allocate one extra byte because some routines
+ * want to append a trailing zero byte to the zlib output. The
+ * input buffer is expansible and is always of size
+ * cs->comprInSize; comprInInitSize is just the initial default
+ * size for it.
+ */
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ break;
+ }
+
+ return cs;
+ }
+
+ void
+ FreeCompressorState(CompressorState *cs)
+ {
+ free(cs->comprOut);
+ free(cs->comprIn);
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ free(cs->zp);
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ break;
+ }
+ free(cs);
+ }
+
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index ...b6b94d3 .
*** a/src/bin/pg_dump/compress_io.h
--- b/src/bin/pg_dump/compress_io.h
***************
*** 0 ****
--- 1,73 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.h
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "pg_backup_archiver.h"
+
+ #define comprOutInitSize 65536
+ #define comprInInitSize 65536
+
+
+ typedef enum
+ {
+ COMPRESSOR_INFLATE,
+ COMPRESSOR_DEFLATE
+ } CompressorAction;
+
+ typedef enum
+ {
+ COMPR_ALG_NONE,
+ COMPR_ALG_LIBZ
+ } CompressorAlgorithm;
+
+ typedef struct
+ {
+ CompressorAlgorithm comprAlg;
+ #ifdef HAVE_LIBZ
+ z_streamp zp;
+ #endif
+ char *comprOut;
+ char *comprIn;
+ size_t comprInSize;
+ size_t comprOutSize;
+ } CompressorState;
+
+ typedef size_t (*WriteFunc)(ArchiveHandle *AH, const void *buf, size_t len);
+ /*
+ * The sizeHint parameter tells the format how much input the algorithm would
+ * like to see. If the format doesn't know better, it should send back that
+ * many bytes of input. If the format is written in blocks, however, it
+ * already knows the block size and can deliver exactly the size of the next
+ * block.
+ *
+ * The custom archive is written in such blocks.
+ * The directory archive, on the other hand, is just a continuous stream of
+ * data. Compression formats other than libz deal with blocks at the
+ * algorithm level, and there the algorithm can tell the format how much
+ * data it is ready to consume next.
+ */
+ typedef size_t (*ReadFunc)(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ void ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF);
+ size_t WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF, const void *data, size_t dLen);
+
+ void InitCompressorState(ArchiveHandle *AH, CompressorState *cs, CompressorAction action);
+ void FlushCompressorState(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF);
+
+ void FreeCompressorState(CompressorState *cs);
+ CompressorState *AllocateCompressorState(ArchiveHandle *AH);
+
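To make the interface above concrete, here is a minimal sketch of how a
format could drive a compressed write (not part of the patch; _mySinkFunc
is a made-up WriteFunc, and the restore path would hook a ReadFunc into
ReadDataFromArchive() in the same fashion):

/* hypothetical sink matching the WriteFunc typedef above */
static size_t
_mySinkFunc(ArchiveHandle *AH, const void *buf, size_t len)
{
    /* a real format would also track its file position here */
    return fwrite(buf, 1, len, AH->FH);
}

static void
writeOneStream(ArchiveHandle *AH, const void *data, size_t dLen)
{
    CompressorState *cs = AllocateCompressorState(AH);

    InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
    /* may be called repeatedly, once per chunk of table data */
    WriteDataToArchive(AH, cs, _mySinkFunc, data, dLen);
    /* terminate the compressed stream and flush pending output */
    FlushCompressorState(AH, cs, _mySinkFunc);
    FreeCompressorState(cs);
}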
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index d1a9c54..a28956b 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 22,27 ****
--- 22,28 ----
#include "pg_backup_db.h"
#include "dumputils.h"
+ #include "compress_io.h"
#include <ctype.h>
#include <unistd.h>
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index ae0c6e0..705b2e4 100644
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
***************
*** 49,54 ****
--- 49,55 ----
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
+ /* this is just the redefinition of a libz constant */
#define Z_DEFAULT_COMPRESSION (-1)
typedef struct _z_stream
*************** typedef struct _z_stream
*** 61,66 ****
--- 62,76 ----
typedef z_stream *z_streamp;
#endif
+ /* XXX eventually this should be an enum. However if we want something
+ * pluggable in the long run it can get hard to add values to a central
+ * enum from the plugins... */
+ #define COMPRESSION_UNKNOWN (-2)
+ #define COMPRESSION_NONE 0
+
+ /* XXX should we change the archive version for pg_dump with directory support?
+ * XXX We are not actually modifying the existing formats, but on the other hand
+ * XXX a file could now be compressed with liblzf. */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
*************** typedef struct _archiveHandle
*** 267,272 ****
--- 277,288 ----
struct _tocEntry *currToc; /* Used when dumping data */
int compression; /* Compression requested on open */
+ /* Possible values for compression:
+ -2 COMPRESSION_UNKNOWN
+ -1 Z_DEFAULT_COMPRESSION
+ 0 COMPRESSION_NONE
+ 1-9 levels for gzip compression
+ */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
*************** int ahprintf(ArchiveHandle *AH, const
*** 381,384 ****
--- 397,411 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ #ifdef USE_ASSERT_CHECKING
+ /* wrapped in do { } while (0) to avoid the dangling-else hazard */
+ #define Assert(condition) \
+ do { \
+ if (!(condition)) \
+ { \
+ write_msg(NULL, "Failed assertion in %s, line %d\n", \
+ __FILE__, __LINE__); \
+ abort(); \
+ } \
+ } while (0)
+ #else
+ #define Assert(condition)
+ #endif
#endif
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index 2bc7e8f..95135b7 100644
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
***************
*** 25,30 ****
--- 25,31 ----
*/
#include "pg_backup_archiver.h"
+ #include "compress_io.h"
/*--------
* Routines in the format interface
*************** static void _LoadBlobs(ArchiveHandle *AH
*** 58,77 ****
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! /*------------
! * Buffers used in zlib compression and extra data stored in archive and
! * in TOC entries.
! *------------
! */
! #define zlibOutSize 4096
! #define zlibInSize 4096
typedef struct
{
! z_streamp zp;
! char *zlibOut;
! char *zlibIn;
! size_t inSize;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
--- 59,70 ----
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! static size_t _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len);
! static size_t _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint);
typedef struct
{
! CompressorState *cs;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
*************** static void _readBlockHeader(ArchiveHand
*** 92,98 ****
static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
static pgoff_t _getFilePos(ArchiveHandle *AH, lclContext *ctx);
- static int _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush);
static const char *modulename = gettext_noop("custom archiver");
--- 85,90 ----
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 144,179 ****
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
- ctx->zp = (z_streamp) malloc(sizeof(z_stream));
- if (ctx->zp == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
- /*
- * zlibOutSize is the buffer size we tell zlib it can output to. We
- * actually allocate one extra byte because some routines want to append a
- * trailing zero byte to the zlib output. The input buffer is expansible
- * and is always of size ctx->inSize; zlibInSize is just the initial
- * default size for it.
- */
- ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
- ctx->zlibIn = (char *) malloc(zlibInSize);
- ctx->inSize = zlibInSize;
ctx->filePos = 0;
- if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
--- 136,156 ----
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
ctx->filePos = 0;
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
+ ctx->cs = AllocateCompressorState(AH);
+
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 211,216 ****
--- 188,195 ----
ctx->hasSeek = checkSeek(AH->FH);
ReadHead(AH);
+ ctx->cs = AllocateCompressorState(AH);
+
ReadToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
}
*************** static size_t
*** 340,356 ****
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! zp->next_in = (void *) data;
! zp->avail_in = dLen;
! while (zp->avail_in != 0)
! {
! /* printf("Deflating %lu bytes\n", (unsigned long) dLen); */
! _DoDeflate(AH, ctx, 0);
! }
! return dLen;
}
/*
--- 319,327 ----
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! return WriteDataToArchive(AH, cs, _CustomWriteFunc, data, dLen);
}
/*
*************** static void
*** 533,639 ****
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! size_t blkLen;
! char *in = ctx->zlibIn;
! size_t cnt;
!
! #ifdef HAVE_LIBZ
! int res;
! char *out = ctx->zlibOut;
! #endif
!
! #ifdef HAVE_LIBZ
!
! res = Z_OK;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (inflateInit(zp) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #endif
!
! blkLen = ReadInt(AH);
! while (blkLen != 0)
! {
! if (blkLen + 1 > ctx->inSize)
! {
! free(ctx->zlibIn);
! ctx->zlibIn = NULL;
! ctx->zlibIn = (char *) malloc(blkLen + 1);
! if (!ctx->zlibIn)
! die_horribly(AH, modulename, "out of memory\n");
!
! ctx->inSize = blkLen + 1;
! in = ctx->zlibIn;
! }
!
! cnt = fread(in, 1, blkLen, AH->FH);
! if (cnt != blkLen)
! {
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
! }
!
! ctx->filePos += blkLen;
!
! zp->next_in = (void *) in;
! zp->avail_in = blkLen;
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! while (zp->avail_in != 0)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! }
! else
! #endif
! {
! in[zp->avail_in] = '\0';
! ahwrite(in, 1, zp->avail_in, AH);
! zp->avail_in = 0;
! }
! blkLen = ReadInt(AH);
! }
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
! while (res != Z_STREAM_END)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! if (inflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
! }
! #endif
}
static void
--- 504,513 ----
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! InitCompressorState(AH, cs, COMPRESSOR_INFLATE);
! ReadDataFromArchive(AH, cs, _CustomReadFunction);
}
static void
*************** static void
*** 683,701 ****
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *in = ctx->zlibIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > ctx->inSize)
{
! free(ctx->zlibIn);
! ctx->zlibIn = (char *) malloc(blkLen);
! ctx->inSize = blkLen;
! in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
--- 557,576 ----
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
size_t blkLen;
! char *in = cs->comprIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = (char *) malloc(blkLen);
! cs->comprInSize = blkLen;
! in = cs->comprIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
*************** _readBlockHeader(ArchiveHandle *AH, int
*** 961,1099 ****
}
/*
! * If zlib is available, then startit up. This is called from
! * StartData & StartBlob. The buffers are setup in the Init routine.
*/
static void
_StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! #ifdef HAVE_LIBZ
!
! if (AH->compression < 0 || AH->compression > 9)
! AH->compression = Z_DEFAULT_COMPRESSION;
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
! if (deflateInit(zp, AH->compression) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #else
! AH->compression = 0;
! #endif
! /* Just be paranoid - maybe End is called after Start, with no Write */
! zp->next_out = (void *) ctx->zlibOut;
! zp->avail_out = zlibOutSize;
}
! /*
! * Send compressed data to the output stream (via ahwrite).
! * Each data chunk is preceded by it's length.
! * In the case of Z0, or no zlib, just write the raw data.
! *
! */
! static int
! _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush)
{
! z_streamp zp = ctx->zp;
! #ifdef HAVE_LIBZ
! char *out = ctx->zlibOut;
! int res = Z_OK;
! if (AH->compression != 0)
{
! res = deflate(zp, flush);
! if (res == Z_STREAM_ERROR)
! die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
! if (((flush == Z_FINISH) && (zp->avail_out < zlibOutSize))
! || (zp->avail_out == 0)
! || (zp->avail_in != 0)
! )
! {
! /*
! * Extra paranoia: avoid zero-length chunks since a zero length
! * chunk is the EOF marker. This should never happen but...
! */
! if (zp->avail_out < zlibOutSize)
! {
! /*
! * printf("Wrote %lu byte deflated chunk\n", (unsigned long)
! * (zlibOutSize - zp->avail_out));
! */
! WriteInt(AH, zlibOutSize - zp->avail_out);
! if (fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH) != (zlibOutSize - zp->avail_out))
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zlibOutSize - zp->avail_out;
! }
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! }
}
! else
! #endif
{
! if (zp->avail_in > 0)
! {
! WriteInt(AH, zp->avail_in);
! if (fwrite(zp->next_in, 1, zp->avail_in, AH->FH) != zp->avail_in)
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zp->avail_in;
! zp->avail_in = 0;
! }
else
! {
! #ifdef HAVE_LIBZ
! if (flush == Z_FINISH)
! res = Z_STREAM_END;
! #endif
! }
}
!
! #ifdef HAVE_LIBZ
! return res;
! #else
! return 1;
! #endif
}
/*
! * Terminate zlib context and flush it's buffers. If no zlib
! * then just return.
*/
static void
_EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
! #ifdef HAVE_LIBZ
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! int res;
!
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
!
! do
! {
! /* printf("Ending data output\n"); */
! res = _DoDeflate(AH, ctx, Z_FINISH);
! } while (res != Z_STREAM_END);
!
! if (deflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
! }
! #endif
/* Send the end marker */
WriteInt(AH, 0);
--- 836,918 ----
}
/*
! * If a compression algorithm is available, then start it up. This is called
! * from StartData & StartBlob. The buffers are set up in the Init routine.
*/
static void
_StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
! }
! static size_t
! _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len)
! {
! Assert(len != 0);
! /* never write 0-byte blocks (this should not happen) */
! if (len == 0)
! return 0;
! WriteInt(AH, len);
! return _WriteBuf(AH, buf, len);
}
! static size_t
! _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! size_t blkLen;
! size_t cnt;
! /*
! * We deliberately ignore the sizeHint parameter because we know
! * the exact size of the next compressed block (=blkLen).
! */
! blkLen = ReadInt(AH);
!
! if (blkLen == 0)
! return 0;
!
! if (blkLen + 1 > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = NULL;
! cs->comprIn = (char *) malloc(blkLen + 1);
! if (!cs->comprIn)
! die_horribly(AH, modulename, "out of memory\n");
! cs->comprInSize = blkLen + 1;
}
! cnt = _ReadBuf(AH, cs->comprIn, blkLen);
! if (cnt != blkLen)
{
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
}
! *buf = cs->comprIn;
! return cnt;
}
/*
! * Terminate the compressor context and flush its buffers.
*/
static void
_EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
! FlushCompressorState(AH, cs, _CustomWriteFunc);
/* Send the end marker */
WriteInt(AH, 0);
*************** _Clone(ArchiveHandle *AH)
*** 1114,1125 ****
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->zp = (z_streamp) malloc(sizeof(z_stream));
! ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
! ctx->zlibIn = (char *) malloc(ctx->inSize);
!
! if (ctx->zp == NULL || ctx->zlibOut == NULL || ctx->zlibIn == NULL)
! die_horribly(AH, modulename, "out of memory\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
--- 933,939 ----
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->cs = AllocateCompressorState(AH);
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
*************** static void
*** 1133,1141 ****
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- free(ctx->zlibOut);
- free(ctx->zlibIn);
- free(ctx->zp);
free(ctx);
}
--- 947,956 ----
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ FreeCompressorState(cs);
free(ctx);
}
+
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 55ea684..04ded33 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***************
*** 56,61 ****
--- 56,62 ----
#include "pg_backup_archiver.h"
#include "dumputils.h"
+ #include "compress_io.h"
extern char *optarg;
extern int optind,
*************** main(int argc, char **argv)
*** 255,261 ****
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = -1;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
--- 256,262 ----
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = COMPRESSION_UNKNOWN;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
*************** main(int argc, char **argv)
*** 535,540 ****
--- 536,547 ----
exit(1);
}
+ /* actually we are using a zlib constant here, but formats that don't
+ * support compression won't care, and if we are not compiled with zlib
+ * support we will be forced to no compression anyway. */
+ if (compressLevel == COMPRESSION_UNKNOWN)
+ compressLevel = Z_DEFAULT_COMPRESSION;
+
/* open the output file */
if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
{
*************** dumpBlobs(Archive *AH, void *arg)
*** 2174,2180 ****
exit_nicely();
}
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
--- 2181,2189 ----
exit_nicely();
}
! /* we try to avoid writing empty chunks */
! if (cnt > 0)
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
pg_dump-directory.diff (text/x-patch; charset=US-ASCII)
diff --git a/src/bin/pg_dump/.gitignore b/src/bin/pg_dump/.gitignore
index c2c8677..c28ddea 100644
*** a/src/bin/pg_dump/.gitignore
--- b/src/bin/pg_dump/.gitignore
***************
*** 1,4 ****
--- 1,5 ----
/kwlookup.c
+ /md5.c
/pg_dump
/pg_dumpall
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index efb031a..1f4a5b8 100644
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
*************** override CPPFLAGS := -I$(libpq_srcdir) $
*** 20,32 ****
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
kwlookup.c: % : $(top_srcdir)/src/backend/parser/%
rm -f $@ && $(LN_S) $< .
all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) $(KEYWRDOBJS) | submake-libpq submake-libpgport
--- 20,35 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o pg_backup_directory.o md5.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
kwlookup.c: % : $(top_srcdir)/src/backend/parser/%
rm -f $@ && $(LN_S) $< .
+ md5.c: % : $(top_srcdir)/src/backend/libpq/%
+ rm -f $@ && $(LN_S) $< .
+
all: pg_dump pg_restore pg_dumpall
pg_dump: pg_dump.o common.o pg_dump_sort.o $(OBJS) $(KEYWRDOBJS) | submake-libpq submake-libpgport
*************** uninstall:
*** 50,53 ****
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean maintainer-clean:
! rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o kwlookup.c $(KEYWRDOBJS)
--- 53,56 ----
rm -f $(addprefix '$(DESTDIR)$(bindir)'/, pg_dump$(X) pg_restore$(X) pg_dumpall$(X))
clean distclean maintainer-clean:
! rm -f pg_dump$(X) pg_restore$(X) pg_dumpall$(X) $(OBJS) pg_dump.o common.o pg_dump_sort.o pg_restore.o pg_dumpall.o md5.c kwlookup.c $(KEYWRDOBJS)
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 8fa9a57..08fb910 100644
*** a/src/bin/pg_dump/pg_backup.h
--- b/src/bin/pg_dump/pg_backup.h
*************** typedef enum _archiveFormat
*** 48,56 ****
{
archUnknown = 0,
archCustom = 1,
! archFiles = 2,
! archTar = 3,
! archNull = 4
} ArchiveFormat;
typedef enum _archiveMode
--- 48,58 ----
{
archUnknown = 0,
archCustom = 1,
! archDirectory = 2,
! archFiles = 3,
! archTar = 4,
! archNull = 5,
! archNullAppend = 6
} ArchiveFormat;
typedef enum _archiveMode
*************** typedef struct _restoreOptions
*** 112,117 ****
--- 114,120 ----
int schemaOnly;
int verbose;
int aclsSkip;
+ int checkArchive;
int tocSummary;
char *tocFile;
int format;
*************** extern Archive *CreateArchive(const char
*** 195,200 ****
--- 198,206 ----
/* The --list option */
extern void PrintTOCSummary(Archive *AH, RestoreOptions *ropt);
+ /* Check an existing archive */
+ extern bool CheckArchive(Archive *AH, RestoreOptions *ropt);
+
extern RestoreOptions *NewRestoreOptions(void);
/* Rearrange and filter TOC entries */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index a28956b..1cb1926 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 26,31 ****
--- 26,32 ----
#include <ctype.h>
#include <unistd.h>
+ #include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
*************** static int _discoverArchiveFormat(Archiv
*** 109,114 ****
--- 110,116 ----
static void dump_lo_buf(ArchiveHandle *AH);
static void _write_msg(const char *modulename, const char *fmt, va_list ap);
static void _die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt, va_list ap);
+ static void outputSummaryHeaderText(Archive *AHX);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static OutputContext SetOutput(ArchiveHandle *AH, char *filename, int compression);
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 779,818 ****
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
- char *fmtName;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
! ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
! AH->archdbname, AH->tocCount, AH->compression);
!
! switch (AH->format)
! {
! case archFiles:
! fmtName = "FILES";
! break;
! case archCustom:
! fmtName = "CUSTOM";
! break;
! case archTar:
! fmtName = "TAR";
! break;
! default:
! fmtName = "UNKNOWN";
! }
!
! ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
! ahprintf(AH, "; Format: %s\n", fmtName);
! ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
! ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
! if (AH->archiveRemoteVersion)
! ahprintf(AH, "; Dumped from database version: %s\n",
! AH->archiveRemoteVersion);
! if (AH->archiveDumpVersion)
! ahprintf(AH, "; Dumped by pg_dump version: %s\n",
! AH->archiveDumpVersion);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
--- 781,791 ----
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! outputSummaryHeaderText(AHX);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 841,846 ****
--- 814,856 ----
ResetOutput(AH, sav);
}
+ bool
+ CheckArchive(Archive *AHX, RestoreOptions *ropt)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ TocEntry *te;
+ teReqs reqs;
+ bool checkOK;
+
+ outputSummaryHeaderText(AHX);
+
+ checkOK = (*AH->StartCheckArchivePtr)(AH);
+
+ /* this only gets called from the command line, so we write to stdout
+ * as usual */
+ printf(";\n; Performing Checks...\n;\n");
+
+ for (te = AH->toc->next; te != AH->toc; te = te->next)
+ {
+ if (!(reqs = _tocEntryRequired(te, ropt, true)))
+ continue;
+
+ if (!(*AH->CheckTocEntryPtr)(AH, te, reqs))
+ checkOK = false;
+
+ /* do not dump the contents but only the errors */
+ }
+
+ if (!(*AH->EndCheckArchivePtr)(AH))
+ checkOK = false;
+
+ printf("; Check result: %s\n", checkOK ? "OK" : "FAILED");
+
+ return checkOK;
+ }
+
+
+
/***********
* BLOB Archival
***********/
*************** archprintf(Archive *AH, const char *fmt,
*** 1116,1121 ****
--- 1126,1174 ----
* Stuff below here should be 'private' to the archiver routines
*******************************/
+ static void
+ outputSummaryHeaderText(Archive *AHX)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ const char *fmtName;
+
+ ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
+ ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
+ AH->archdbname, AH->tocCount, AH->compression);
+
+ switch (AH->format)
+ {
+ case archCustom:
+ fmtName = "CUSTOM";
+ break;
+ case archDirectory:
+ fmtName = "DIRECTORY";
+ break;
+ case archFiles:
+ fmtName = "FILES";
+ break;
+ case archTar:
+ fmtName = "TAR";
+ break;
+ default:
+ fmtName = "UNKNOWN";
+ }
+
+ ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
+ ahprintf(AH, "; Format: %s\n", fmtName);
+ ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
+ ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
+ if (AH->archiveRemoteVersion)
+ ahprintf(AH, "; Dumped from database version: %s\n",
+ AH->archiveRemoteVersion);
+ if (AH->archiveDumpVersion)
+ ahprintf(AH, "; Dumped by pg_dump version: %s\n",
+ AH->archiveDumpVersion);
+
+ if (AH->PrintExtraTocSummaryPtr != NULL)
+ (*AH->PrintExtraTocSummaryPtr) (AH);
+ }
+
static OutputContext
SetOutput(ArchiveHandle *AH, char *filename, int compression)
{
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1721,1726 ****
--- 1774,1781 ----
char sig[6]; /* More than enough */
size_t cnt;
int wantClose = 0;
+ char buf[MAXPGPATH];
+ struct stat st;
#if 0
write_msg(modulename, "attempting to ascertain archive format\n");
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1737,1743 ****
if (AH->fSpec)
{
wantClose = 1;
! fh = fopen(AH->fSpec, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
--- 1792,1813 ----
if (AH->fSpec)
{
wantClose = 1;
! /*
! * Check if the specified archive is actually a directory. If so, we open
! * the TOC file inside it instead.
! */
! buf[0] = '\0';
! if (stat(AH->fSpec, &st) == 0 && S_ISDIR(st.st_mode))
! {
! if (snprintf(buf, MAXPGPATH, "%s/%s", AH->fSpec, "TOC") >= MAXPGPATH)
! die_horribly(AH, modulename, "directory name too long: \"%s\"\n",
! AH->fSpec);
! }
!
! if (strlen(buf) == 0)
! strcpy(buf, AH->fSpec);
!
! fh = fopen(buf, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
*************** _allocAH(const char *FileSpec, const Arc
*** 1950,1955 ****
--- 2020,2029 ----
InitArchiveFmt_Custom(AH);
break;
+ case archDirectory:
+ InitArchiveFmt_Directory(AH);
+ break;
+
case archFiles:
InitArchiveFmt_Files(AH);
break;
*************** WriteHead(ArchiveHandle *AH)
*** 2975,2985 ****
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
#endif
WriteInt(AH, AH->compression);
--- 3049,3061 ----
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
! {
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
! }
#endif
WriteInt(AH, AH->compression);
*************** ReadHead(ArchiveHandle *AH)
*** 3063,3069 ****
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
--- 3139,3145 ----
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 705b2e4..3d15841 100644
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
*************** struct _archiveHandle;
*** 113,118 ****
--- 113,125 ----
struct _tocEntry;
struct _restoreList;
+ typedef enum
+ {
+ REQ_SCHEMA = 1,
+ REQ_DATA = 2,
+ REQ_ALL = REQ_SCHEMA + REQ_DATA
+ } teReqs;
+
typedef void (*ClosePtr) (struct _archiveHandle * AH);
typedef void (*ReopenPtr) (struct _archiveHandle * AH);
typedef void (*ArchiveEntryPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
*************** typedef void (*WriteExtraTocPtr) (struct
*** 135,144 ****
--- 142,156 ----
typedef void (*ReadExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintTocDataPtr) (struct _archiveHandle * AH, struct _tocEntry * te, RestoreOptions *ropt);
+ typedef void (*PrintExtraTocSummaryPtr) (struct _archiveHandle * AH);
typedef void (*ClonePtr) (struct _archiveHandle * AH);
typedef void (*DeClonePtr) (struct _archiveHandle * AH);
+ typedef bool (*StartCheckArchivePtr)(struct _archiveHandle * AH);
+ typedef bool (*CheckTocEntryPtr)(struct _archiveHandle * AH, struct _tocEntry * te, teReqs reqs);
+ typedef bool (*EndCheckArchivePtr)(struct _archiveHandle * AH);
+
typedef size_t (*CustomOutPtr) (struct _archiveHandle * AH, const void *buf, size_t len);
typedef struct _outputContext
*************** typedef enum
*** 177,189 ****
STAGE_FINALIZING
} ArchiverStage;
- typedef enum
- {
- REQ_SCHEMA = 1,
- REQ_DATA = 2,
- REQ_ALL = REQ_SCHEMA + REQ_DATA
- } teReqs;
-
typedef struct _archiveHandle
{
Archive public; /* Public part of archive */
--- 189,194 ----
*************** typedef struct _archiveHandle
*** 239,244 ****
--- 244,250 ----
* archie format */
PrintExtraTocPtr PrintExtraTocPtr; /* Extra TOC info for format */
PrintTocDataPtr PrintTocDataPtr;
+ PrintExtraTocSummaryPtr PrintExtraTocSummaryPtr;
StartBlobsPtr StartBlobsPtr;
EndBlobsPtr EndBlobsPtr;
*************** typedef struct _archiveHandle
*** 248,253 ****
--- 254,263 ----
ClonePtr ClonePtr; /* Clone format-specific fields */
DeClonePtr DeClonePtr; /* Clean up cloned fields */
+ StartCheckArchivePtr StartCheckArchivePtr;
+ CheckTocEntryPtr CheckTocEntryPtr;
+ EndCheckArchivePtr EndCheckArchivePtr;
+
CustomOutPtr CustomOutPtr; /* Alternative script output routine */
/* Stuff for direct DB connection */
*************** extern void EndRestoreBlob(ArchiveHandle
*** 383,388 ****
--- 393,399 ----
extern void EndRestoreBlobs(ArchiveHandle *AH);
extern void InitArchiveFmt_Custom(ArchiveHandle *AH);
+ extern void InitArchiveFmt_Directory(ArchiveHandle *AH);
extern void InitArchiveFmt_Files(ArchiveHandle *AH);
extern void InitArchiveFmt_Null(ArchiveHandle *AH);
extern void InitArchiveFmt_Tar(ArchiveHandle *AH);
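The calling contract of the three new check hooks is perhaps easiest to
see in a sketch (the names below are invented; the directory format's
real implementations, _StartCheckArchive and friends, follow in the next
file):

/* sketch of a format's check hooks; they return false instead of dying
 * so that CheckArchive() can report every problem before finishing */
static bool
_myStartCheckArchive(ArchiveHandle *AH)
{
    /* verify archive-wide preconditions, e.g. that the TOC itself is sane */
    return true;
}

static bool
_myCheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
{
    /* compare the entry's recorded backup ID and file size against what
     * is actually on disk; reqs tells whether schema and/or data matter */
    return true;
}

static bool
_myEndCheckArchive(ArchiveHandle *AH)
{
    /* e.g. warn about stray files that belong to no TOC entry */
    return true;
}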
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index 95135b7..e1f0daa 100644
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 120,125 ****
--- 120,126 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 128,133 ****
--- 129,138 ----
AH->ClonePtr = _Clone;
AH->DeClonePtr = _DeClone;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index ...b2d9edd .
*** a/src/bin/pg_dump/pg_backup_directory.c
--- b/src/bin/pg_dump/pg_backup_directory.c
***************
*** 0 ****
--- 1,1494 ----
+ /*-------------------------------------------------------------------------
+ *
+ * pg_backup_directory.c
+ *
+ * This file is copied from the 'files' format file and dumps data into
+ * separate files in a directory.
+ *
+ * See the headers to pg_backup_files & pg_restore for more details.
+ *
+ * Copyright (c) 2000, Philip Warner
+ * Rights are granted to use this software in any way so long
+ * as this notice is not removed.
+ *
+ * The author is not responsible for loss or damages that may
+ * result from it's use.
+ *
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include <dirent.h>
+ #include <sys/stat.h>
+
+ #include "compress_io.h"
+ #include "pg_backup_archiver.h"
+ #include "libpq/md5.h"
+ #include "utils/pg_crc.h"
+
+ #ifdef USE_SSL
+ /* for RAND_bytes() */
+ #include <openssl/rand.h>
+ #endif
+
+ #define TOC_FH_ACTIVE (ctx->dataFH == NULL && ctx->blobsTocFH == NULL && AH->FH != NULL)
+ #define BLOBS_TOC_FH_ACTIVE (ctx->dataFH == NULL && ctx->blobsTocFH != NULL)
+ #define DATA_FH_ACTIVE (ctx->dataFH != NULL)
+
+ struct _lclFileHeader;
+ struct _lclContext;
+
+ static void _ArchiveEntry(ArchiveHandle *AH, TocEntry *te);
+ static void _StartData(ArchiveHandle *AH, TocEntry *te);
+ static void _EndData(ArchiveHandle *AH, TocEntry *te);
+ static size_t _WriteData(ArchiveHandle *AH, const void *data, size_t dLen);
+ static int _WriteByte(ArchiveHandle *AH, const int i);
+ static int _ReadByte(ArchiveHandle *);
+ static size_t _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len);
+ static size_t _ReadBuf(ArchiveHandle *AH, void *buf, size_t len);
+ static void _CloseArchive(ArchiveHandle *AH);
+ static void _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
+
+ static void _WriteExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _ReadExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraTocSummary(ArchiveHandle *AH);
+
+ static void _WriteExtraHead(ArchiveHandle *AH);
+ static void _ReadExtraHead(ArchiveHandle *AH);
+
+ static void WriteFileHeader(ArchiveHandle *AH, int type);
+ static int ReadFileHeader(ArchiveHandle *AH, struct _lclFileHeader *fileHeader);
+
+ static void _StartBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt);
+
+ static size_t _DirectoryReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ static bool _StartCheckArchive(ArchiveHandle *AH);
+ static bool _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _CheckFileContents(ArchiveHandle *AH, const char *fname, const char* idStr, bool terminateOnError);
+ static bool _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize, bool terminateOnError);
+ static bool _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size);
+ static bool _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _EndCheckArchive(ArchiveHandle *AH);
+
+ static char *prependDirectory(ArchiveHandle *AH, const char *relativeFilename);
+ static char *prependBlobsDirectory(ArchiveHandle *AH, Oid oid);
+ static void createDirectory(const char *dir, const char *subdir);
+
+ static char *getRandomData(char *s, int len);
+
+ static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
+ static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
+
+ static bool isDirectory(const char *fname);
+ static bool isRegularFile(const char *fname);
+
+ #define K_STD_BUF_SIZE 1024
+ #define FILE_SUFFIX ".dat"
+
+ typedef struct _lclContext
+ {
+ /*
+ * Our archive location. This is basically what the user specified as his
+ * backup file but of course here it is a directory.
+ */
+ char *directory;
+
+ /*
+ * As a directory archive consists of several files, we want to make sure
+ * that we do not mix up files of different backups. That's why we
+ * assign a (hopefully) unique ID to every set. This ID is written to the
+ * TOC and to every data file.
+ */
+ char idStr[33];
+
+ /*
+ * In the directory archive format we have three file handles:
+ *
+ * AH->FH points to the TOC
+ * ctx->blobsTocFH points to the TOC for the BLOBs
+ * ctx->dataFH points to data files (both BLOBs and regular)
+ *
+ * Instead of specifying where each I/O operation should go (which would
+ * require their own prototypes anyway and wouldn't be that straightforward
+ * either), we rely on a hierarchy among the file descriptors.
+ *
+ * As a matter of fact we never access any of the TOCs when we are writing
+ * to a data file, only before or after that. Similarly we never access the
+ * general TOC when we have opened the TOC for BLOBs. Given these facts we
+ * can just write our I/O routines such that they access:
+ *
+ * if defined(ctx->dataFH) => access ctx->dataFH
+ * else if defined(ctx->blobsTocFH) => access ctx->blobsTocFH
+ * else => access AH->FH
+ *
+ * To make it more transparent what is going on, we use assertions like
+ *
+ * Assert(DATA_FH_ACTIVE); ...
+ *
+ */
+ FILE *dataFH;
+ pgoff_t dataFilePos;
+ FILE *blobsTocFH;
+ pgoff_t blobsTocFilePos;
+ pgoff_t tocFilePos; /* this counts the file position for AH->FH */
+
+ /* these are used for checking a directory archive */
+ DumpId *chkList;
+ int chkListSize;
+
+ CompressorState *cs;
+ } lclContext;
+
+ typedef struct
+ {
+ char *filename; /* filename excluding the directory (basename) */
+ pgoff_t fileSize;
+ } lclTocEntry;
+
+ typedef struct _lclFileHeader
+ {
+ int version;
+ int type; /* BLK_DATA or BLK_BLOB */
+ char *idStr;
+ } lclFileHeader;
+
+ static const char *modulename = gettext_noop("directory archiver");
+
+ /*
+ * Init routine required by ALL formats. This is a global routine
+ * and should be declared in pg_backup_archiver.h
+ *
+ * Its task is to create any extra archive context (using AH->formatData),
+ * and to initialize the supported function pointers.
+ *
+ * It should also prepare whatever its input source is for reading/writing,
+ * and in the case of a read mode connection, it should load the Header & TOC.
+ */
+ void
+ InitArchiveFmt_Directory(ArchiveHandle *AH)
+ {
+ lclContext *ctx;
+
+ /* Assuming static functions, this can be copied for each format. */
+ AH->ArchiveEntryPtr = _ArchiveEntry;
+ AH->StartDataPtr = _StartData;
+ AH->WriteDataPtr = _WriteData;
+ AH->EndDataPtr = _EndData;
+ AH->WriteBytePtr = _WriteByte;
+ AH->ReadBytePtr = _ReadByte;
+ AH->WriteBufPtr = _WriteBuf;
+ AH->ReadBufPtr = _ReadBuf;
+ AH->ClosePtr = _CloseArchive;
+ AH->ReopenPtr = NULL;
+ AH->PrintTocDataPtr = _PrintTocData;
+ AH->ReadExtraTocPtr = _ReadExtraToc;
+ AH->WriteExtraTocPtr = _WriteExtraToc;
+ AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = _PrintExtraTocSummary;
+
+ AH->StartBlobsPtr = _StartBlobs;
+ AH->StartBlobPtr = _StartBlob;
+ AH->EndBlobPtr = _EndBlob;
+ AH->EndBlobsPtr = _EndBlobs;
+
+ AH->ClonePtr = NULL;
+ AH->DeClonePtr = NULL;
+
+ AH->StartCheckArchivePtr = _StartCheckArchive;
+ AH->CheckTocEntryPtr = _CheckTocEntry;
+ AH->EndCheckArchivePtr = _EndCheckArchive;
+
+ /*
+ * Set up some special context used in compressing data.
+ */
+ ctx = (lclContext *) calloc(1, sizeof(lclContext));
+ if (ctx == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ AH->formatData = (void *) ctx;
+
+ ctx->dataFH = NULL;
+ ctx->blobsTocFH = NULL;
+ ctx->cs = NULL;
+
+ /* Initialize LO buffering */
+ AH->lo_buf_size = LOBBUFSIZE;
+ AH->lo_buf = (void *) malloc(LOBBUFSIZE);
+ if (AH->lo_buf == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ /*
+ * Now open the TOC file
+ */
+
+ if (!AH->fSpec || strcmp(AH->fSpec, "") == 0)
+ die_horribly(AH, modulename, "no directory specified\n");
+
+ ctx->directory = AH->fSpec;
+
+ if (AH->mode == archModeWrite)
+ {
+ char *fname = prependDirectory(AH, "TOC");
+ char buf[256];
+
+ /*
+ * Create the ID string, basically a large random number that prevents us
+ * from mixing files from different backups
+ */
+ getRandomData(buf, sizeof(buf));
+ if (!pg_md5_hash(buf, strlen(buf), ctx->idStr))
+ die_horribly(AH, modulename, "Error computing checksum");
+
+ /* Create the directory, errors are caught there */
+ createDirectory(ctx->directory, NULL);
+
+ ctx->cs = AllocateCompressorState(AH);
+
+ AH->FH = fopen(fname, PG_BINARY_W);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+ }
+ else
+ { /* Read Mode */
+ char *fname;
+
+ fname = prependDirectory(AH, "TOC");
+
+ AH->FH = fopen(fname, PG_BINARY_R);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename,
+ "could not open input file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(TOC_FH_ACTIVE);
+
+ ReadHead(AH);
+ _ReadExtraHead(AH);
+ ReadToc(AH);
+
+ /*
+ * We get the compression information from the TOC, hence no need to
+ * initialize the compressor earlier. Also, remember that the TOC file is
+ * always uncompressed. Compression is only used for the data files.
+ */
+ ctx->cs = AllocateCompressorState(AH);
+
+ /* Nothing else in the file, so close it again... */
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n", strerror(errno));
+ }
+ }
+
+ /*
+ * Called by the Archiver when the dumper creates a new TOC entry.
+ *
+ * Optional.
+ *
+ * Set up extra format-related TOC data.
+ */
+ static void
+ _ArchiveEntry(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx;
+ char fn[MAXPGPATH];
+
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ if (te->dataDumper)
+ {
+ sprintf(fn, "%d"FILE_SUFFIX, te->dumpId);
+ tctx->filename = strdup(fn);
+ }
+ else if (strcmp(te->desc, "BLOBS") == 0)
+ {
+ tctx->filename = strdup("BLOBS.TOC");
+ }
+ else
+ tctx->filename = NULL;
+
+ tctx->fileSize = 0;
+ te->formatData = (void *) tctx;
+ }
+
+ /*
+ * Called by the Archiver to save any extra format-related TOC entry
+ * data.
+ *
+ * Optional.
+ *
+ * Use the Archiver routines to write data - they are non-endian, and
+ * maintain other important file information.
+ */
+ static void
+ _WriteExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ /*
+ * A dumpable object has tctx->filename set; any other object hasn't
+ * (see _ArchiveEntry).
+ */
+ if (tctx->filename)
+ {
+ WriteStr(AH, tctx->filename);
+ WriteOffset(AH, tctx->fileSize, K_OFFSET_POS_SET);
+ }
+ else
+ WriteStr(AH, "");
+ }
+
+ /*
+ * Called by the Archiver to read any extra format-related TOC data.
+ *
+ * Optional.
+ *
+ * Needs to match the order defined in _WriteExtraToc, and should also
+ * use the Archiver input routines.
+ */
+ static void
+ _ReadExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (tctx == NULL)
+ {
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ te->formatData = (void *) tctx;
+ }
+
+ tctx->filename = ReadStr(AH);
+ if (strlen(tctx->filename) == 0)
+ {
+ free(tctx->filename);
+ tctx->filename = NULL;
+ }
+ else
+ ReadOffset(AH, &(tctx->fileSize));
+ }
+
+ /*
+ * Called by the Archiver when restoring an archive to output a comment
+ * that includes useful information about the TOC entry.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (AH->public.verbose && tctx->filename)
+ ahprintf(AH, "-- File: %s\n", tctx->filename);
+ }
+
+ /*
+ * Called by the Archiver when listing the contents of an archive to output a
+ * comment that includes useful information about the archive.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraTocSummary(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ ahprintf(AH, "; ID: %s\n", ctx->idStr);
+ }
+
+
+ /*
+ * Called by the archiver when saving TABLE DATA (not schema). This routine
+ * should save whatever format-specific information is needed to read
+ * the archive back.
+ *
+ * It is called just prior to the dumper's 'DataDumper' routine being called.
+ *
+ * Optional, but strongly recommended.
+ *
+ */
+ static void
+ _StartData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_DATA);
+
+ _StartDataCompressor(AH, te);
+ }
+
+ static void
+ WriteFileHeader(ArchiveHandle *AH, int type)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int compression = AH->compression;
+
+ /*
+ * We always write the header uncompressed. If any compression is active,
+ * switch it off for a moment and restore it after writing the header.
+ */
+ AH->compression = 0;
+ (*AH->WriteBufPtr) (AH, "PGDMP", 5); /* Magic code */
+ (*AH->WriteBytePtr) (AH, AH->vmaj);
+ (*AH->WriteBytePtr) (AH, AH->vmin);
+ (*AH->WriteBytePtr) (AH, AH->vrev);
+
+ _WriteByte(AH, type);
+ WriteStr(AH, ctx->idStr);
+
+ AH->compression = compression;
+ }
+
+ static int
+ ReadFileHeader(ArchiveHandle *AH, lclFileHeader *fileHeader)
+ {
+ char tmpMag[7];
+ int vmaj, vmin, vrev;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int compression = AH->compression;
+ bool err = false;
+
+ Assert(ftell(ctx->dataFH ? ctx->dataFH : ctx->blobsTocFH ? ctx->blobsTocFH : AH->FH) == 0);
+
+ /* Read with compression switched off. See WriteFileHeader() */
+ AH->compression = 0;
+ if ((*AH->ReadBufPtr) (AH, tmpMag, 5) != 5)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+
+ vmaj = (*AH->ReadBytePtr) (AH);
+ vmin = (*AH->ReadBytePtr) (AH);
+ vrev = (*AH->ReadBytePtr) (AH);
+
+ /* Make a convenient integer <maj><min><rev>00 */
+ fileHeader->version = ((vmaj * 256 + vmin) * 256 + vrev) * 256 + 0;
+ fileHeader->type = _ReadByte(AH);
+ if (fileHeader->type != BLK_BLOBS && fileHeader->type != BLK_DATA)
+ err = true;
+ if (!err)
+ {
+ fileHeader->idStr = ReadStr(AH);
+ if (fileHeader->idStr == NULL)
+ err = true;
+ }
+ if (!err)
+ {
+ if (strcmp(fileHeader->idStr, ctx->idStr) != 0)
+ err = true;
+ }
+ AH->compression = compression;
+
+ return err ? -1 : 0;
+ }
+
+ /*
+ * Called by archiver when dumper calls WriteData. This routine is
+ * called for both BLOB and TABLE data; it is the responsibility of
+ * the format to manage each kind of data using StartBlob/StartData.
+ *
+ * It should only be called from within a DataDumper routine.
+ *
+ * Mandatory.
+ */
+ static size_t
+ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ return WriteDataToArchive(AH, cs, _WriteBuf, data, dLen);
+ }
+
+ /*
+ * Called by the archiver when a dumper's 'DataDumper' routine has
+ * finished.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _EndData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+
+ _EndDataCompressor(AH, te);
+
+ Assert(DATA_FH_ACTIVE);
+
+ /* Close the file */
+ fclose(ctx->dataFH);
+
+ /* the file won't grow anymore. Record the size. */
+ tctx->fileSize = ctx->dataFilePos;
+
+ ctx->dataFH = NULL;
+ }
+
+ /*
+ * Print data for a given file (can be a BLOB as well)
+ */
+ static void
+ _PrintFileData(ArchiveHandle *AH, char *filename, pgoff_t expectedSize, RestoreOptions *ropt)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+ lclFileHeader fileHeader;
+
+ InitCompressorState(AH, cs, COMPRESSOR_INFLATE);
+
+ if (!filename)
+ return;
+
+ _CheckFileSize(AH, filename, expectedSize, true);
+ _CheckFileContents(AH, filename, ctx->idStr, true);
+
+ ctx->dataFH = fopen(filename, PG_BINARY_R);
+ if (!ctx->dataFH)
+ die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
+ filename, strerror(errno));
+
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ die_horribly(AH, modulename, "could not read valid file header from file \"%s\"\n",
+ filename);
+
+ Assert(DATA_FH_ACTIVE);
+
+ ReadDataFromArchive(AH, cs, _DirectoryReadFunction);
+
+ /* close the file again, it was opened in this routine */
+ if (fclose(ctx->dataFH) != 0)
+ die_horribly(AH, modulename, "could not close data file: %s\n",
+ strerror(errno));
+ ctx->dataFH = NULL;
+ }
+
+
+ /*
+ * Print data for a given TOC entry
+ */
+ static void
+ _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (!tctx->filename)
+ return;
+
+ if (strcmp(te->desc, "BLOBS") == 0)
+ _LoadBlobs(AH, ropt);
+ else
+ {
+ char *fname = prependDirectory(AH, tctx->filename);
+ _PrintFileData(AH, fname, tctx->fileSize, ropt);
+ }
+ }
+
+ static void
+ _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt)
+ {
+ Oid oid;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclFileHeader fileHeader;
+ char *fname;
+
+ StartRestoreBlobs(AH);
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ ctx->blobsTocFH = fopen(fname, "rb");
+
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open large object TOC file \"%s\" for input: %s\n",
+ fname, strerror(errno));
+
+ ReadFileHeader(AH, &fileHeader);
+
+ /* we cannot test for feof() since EOF only shows up in the low-level
+ * read functions. But they would die_horribly() anyway. */
+ while (1)
+ {
+ char *blobFname;
+ pgoff_t blobSize;
+
+ oid = ReadInt(AH);
+ /* oid == 0 is our end marker */
+ if (oid == 0)
+ break;
+ ReadOffset(AH, &blobSize);
+
+ StartRestoreBlob(AH, oid, ropt->dropSchema);
+ blobFname = prependBlobsDirectory(AH, oid);
+ _PrintFileData(AH, blobFname, blobSize, ropt);
+ EndRestoreBlob(AH, oid);
+ }
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ die_horribly(AH, modulename, "could not close large object TOC file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ ctx->blobsTocFH = NULL;
+
+ EndRestoreBlobs(AH);
+ }
+
+
+ /*
+ * Write a byte of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to do integer & byte output to the archive.
+ * These routines are only used to read & write headers & TOC.
+ *
+ */
+ static int
+ _WriteByte(ArchiveHandle *AH, const int i)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ if (fputc(i, stream) == EOF)
+ die_horribly(AH, modulename, "could not write byte\n");
+
+ *filePos += 1;
+
+ return 1;
+ }
+
+ /*
+ * Read a byte of data from the archive.
+ *
+ * Mandatory
+ *
+ * Called by the archiver to read bytes & integers from the archive.
+ * These routines are only used to read & write headers & TOC.
+ * EOF should be treated as a fatal error.
+ */
+ static int
+ _ReadByte(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ int res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = getc(stream);
+ if (res == EOF)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+
+ *filePos += 1;
+
+ return res;
+ }
+
+ /*
+ * Write a buffer of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to write a block of bytes to the TOC and by the
+ * compressor to write compressed data to the data files.
+ *
+ */
+ static size_t
+ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fwrite(buf, 1, len, stream);
+ if (res != len)
+ die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Read a block of bytes from the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to read a block of bytes from the archive
+ *
+ */
+ static size_t
+ _ReadBuf(ArchiveHandle *AH, void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fread(buf, 1, len, stream);
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Close the archive.
+ *
+ * Mandatory.
+ *
+ * When writing the archive, this is the routine that actually starts
+ * the process of saving it to files. No data should be written prior
+ * to this point, since the user could sort the TOC after creating it.
+ *
+ * If an archive is to be written, this routine must call:
+ * WriteHead to save the archive header
+ * WriteToc to save the TOC entries
+ * WriteDataChunks to save all DATA & BLOBs.
+ *
+ */
+ static void
+ _CloseArchive(ArchiveHandle *AH)
+ {
+ if (AH->mode == archModeWrite)
+ {
+ #ifdef USE_ASSERT_CHECKING
+ lclContext *ctx = (lclContext *) AH->formatData;
+ #endif
+
+ WriteDataChunks(AH);
+
+ Assert(TOC_FH_ACTIVE);
+
+ WriteHead(AH);
+ _WriteExtraHead(AH);
+ WriteToc(AH);
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n", strerror(errno));
+ }
+ AH->FH = NULL;
+ }
+
+
+
+ /*
+ * BLOB support
+ */
+
+ /*
+ * Called by the archiver when starting to save all BLOB DATA (not schema).
+ * This routine should save whatever format-specific information is needed
+ * to read the BLOBs back into memory.
+ *
+ * It is called just prior to the dumper's DataDumper routine.
+ *
+ * Optional, but strongly recommended.
+ */
+ static void
+ _StartBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+ createDirectory(ctx->directory, "blobs");
+
+ ctx->blobsTocFH = fopen(fname, "ab");
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ctx->blobsTocFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls StartBlob.
+ *
+ * Mandatory.
+ *
+ * Must save the passed OID for retrieval at restore-time.
+ */
+ static void
+ _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependBlobsDirectory(AH, oid);
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+
+ _StartDataCompressor(AH, te);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls EndBlob.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t save_filePos;
+
+ _EndDataCompressor(AH, te);
+
+ Assert(DATA_FH_ACTIVE);
+
+ save_filePos = ctx->dataFilePos;
+
+ /* Close the BLOB data file itself */
+ fclose(ctx->dataFH);
+ ctx->dataFH = NULL;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ /* register the BLOB data file to BLOBS.TOC */
+ WriteInt(AH, oid);
+ WriteOffset(AH, save_filePos, K_OFFSET_POS_NOT_SET);
+ }
+
+ /*
+ * Called by the archiver when finishing saving all BLOB DATA.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ WriteInt(AH, 0);
+
+ fclose(ctx->blobsTocFH);
+ ctx->blobsTocFH = NULL;
+
+ tctx->fileSize = ctx->blobsTocFilePos;
+ }
+
+ /*
+ * The idea for the directory check is as follows: First we do a list of every
+ * file that we find in the directory. We reject filenames that don't fit our
+ * pattern outright. So at this stage we only accept all kinds of TOC data
+ * and our data files.
+ *
+ * If a filename looks good (like nnnnn.dat), we save its dumpId to ctx->chkList.
+ *
+ * Other checks then walk through the TOC and for every file they make sure
+ * that the file is what it is pretending to be. Once it passes the checks we
+ * take out its entry in chkList, i.e. replace its dumpId by InvalidDumpId.
+ *
+ * At the end what is left in chkList must be files that are not referenced
+ * from the TOC.
+ */
+ static bool
+ _StartCheckArchive(ArchiveHandle *AH)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ DIR *dir;
+ char *dname = ctx->directory;
+ struct dirent *entry;
+ int idx = 0;
+ char *suffix;
+ bool tocSeen = false;
+
+ dir = opendir(dname);
+ if (!dir)
+ {
+ printf("Could not open directory \"%s\": %s\n", dname, strerror(errno));
+ return false;
+ }
+
+ /*
+ * Actually we are just avoiding a linked list here by getting an upper
+ * limit of the number of elements in the directory.
+ */
+ while ((entry = readdir(dir)))
+ idx++;
+
+ ctx->chkListSize = idx;
+ ctx->chkList = (DumpId *) malloc(ctx->chkListSize * sizeof(DumpId));
+
+ /* seems that Windows doesn't have a rewinddir() equivalent */
+ closedir(dir);
+ dir = opendir(dname);
+ if (!dir)
+ {
+ printf("Could not open directory \"%s\": %s\n", dname, strerror(errno));
+ return false;
+ }
+
+
+ idx = 0;
+
+ for (;;)
+ {
+ errno = 0;
+ entry = readdir(dir);
+ if (!entry && errno == 0)
+ /* end of directory entries reached */
+ break;
+ if (!entry && errno)
+ {
+ printf("Error reading directory %s: %s\n",
+ entry->d_name, strerror(errno));
+ checkOK = false;
+ break;
+ }
+
+ if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
+ continue;
+ if (strcmp(entry->d_name, "blobs") == 0 &&
+ isDirectory(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "BLOBS.TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ {
+ tocSeen = true;
+ continue;
+ }
+ /* besides the above we only expect nnnn.dat, with nnnn being our
+ * numerical dumpId */
+ if ((suffix = strstr(entry->d_name, FILE_SUFFIX)) == NULL)
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n", entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+ else
+ {
+ /* suffix now points into entry->d_name */
+ int dumpId;
+ int scBytes, scItems;
+
+ /* check if FILE_SUFFIX is really a suffix instead of just a
+ * substring. */
+ if (strlen(suffix) != strlen(FILE_SUFFIX))
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* cut off the suffix; now entry->d_name contains the null-terminated
+ * dumpId, which we parse back. */
+ *suffix = '\0';
+ scItems = sscanf(entry->d_name, "%d%n", &dumpId, &scBytes);
+ if (scItems != 1 || scBytes != strlen(entry->d_name))
+ {
+ printf("Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* Still here so this entry is good. Add the dumpId to our list. */
+ ctx->chkList[idx++] = (DumpId) dumpId;
+ }
+ }
+ closedir(dir);
+
+ /* we probably counted a few entries too much, just ignore them. */
+ while (idx < ctx->chkListSize)
+ ctx->chkList[idx++] = InvalidDumpId;
+
+ /* also return false if we haven't seen the TOC file */
+ return checkOK && tocSeen;
+ }
+
+ static bool
+ _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize, bool terminateOnError)
+ {
+ bool checkOK = true;
+ FILE *f;
+ unsigned long size = (unsigned long) pgSize;
+ struct stat st;
+
+ /*
+ * If terminateOnError is true, we don't expect this check to fail, and
+ * if it does, we must terminate. If it is false, we are in check mode:
+ * carry on and present a report of all findings at the end.
+ * Accordingly, write to either stderr or stdout.
+ */
+ if (terminateOnError)
+ f = stderr;
+ else
+ f = stdout;
+
+ if (!fname || fname[0] == '\0')
+ {
+ fprintf(f, "Invalid (empty) filename\n");
+ checkOK = false;
+ }
+ else if (stat(fname, &st) != 0)
+ {
+ fprintf(f, "File not found: \"%s\"\n", fname);
+ checkOK = false;
+ }
+ else if (st.st_size != (off_t) pgSize)
+ {
+ fprintf(f, "Size mismatch for file \"%s\" (expected: %lu bytes, actual %lu bytes)\n",
+ fname, size, (unsigned long) st.st_size);
+ checkOK = false;
+ }
+
+ if (!checkOK && terminateOnError)
+ {
+ if (AH->connection)
+ PQfinish(AH->connection);
+
+ exit(1);
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckFileContents(ArchiveHandle *AH, const char *fname, const char* idStr, bool terminateOnError)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ FILE *file;
+ FILE *f;
+ lclFileHeader fileHeader;
+
+ Assert(ctx->dataFH == NULL);
+
+ if (terminateOnError)
+ f = stderr;
+ else
+ f = stdout;
+
+ if (!fname || fname[0] == '\0')
+ {
+ fprintf(f, "Invalid (empty) filename\n");
+ return false;
+ }
+
+ if (!(file = fopen(fname, PG_BINARY_R)))
+ {
+ fprintf(f, "Could not open file \"%s\": %s\n", fname, strerror(errno));
+ return false;
+ }
+
+ ctx->dataFH = file;
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ {
+ fprintf(f, "Could not read valid file header from file \"%s\"\n", fname);
+ checkOK = false;
+ }
+ else if (strcmp(fileHeader.idStr, idStr) != 0)
+ {
+ fprintf(f, "File \"%s\" belongs to different backup (expected id: %s, actual id: %s)\n",
+ fname, idStr, fileHeader.idStr);
+ checkOK = false;
+ }
+
+ if (file)
+ fclose(file);
+
+ ctx->dataFH = NULL;
+
+ if (!checkOK && terminateOnError)
+ {
+ if (AH->connection)
+ PQfinish(AH->connection);
+ exit(1);
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname = prependBlobsDirectory(AH, oid);
+ bool checkOK = true;
+
+ if (!_CheckFileSize(AH, fname, size, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+ bool checkOK = true;
+ lclFileHeader fileHeader;
+ pgoff_t size;
+ Oid oid;
+
+ /* check the BLOBS.TOC first */
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ if (!fname)
+ {
+ printf("Could not find BLOBS.TOC. Check the archive!\n");
+ return false;
+ }
+
+ if (!_CheckFileSize(AH, fname, tctx->fileSize, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+
+ /* now check every single BLOB object */
+ ctx->blobsTocFH = fopen(fname, "rb");
+ if (ctx->blobsTocFH == NULL)
+ {
+ printf("could not open large object TOC for input: %s\n",
+ strerror(errno));
+ return false;
+ }
+ ReadFileHeader(AH, &fileHeader);
+
+ /* we cannot test for feof() since EOF only shows up in the low-level
+ * read functions. But they would die_horribly() anyway. */
+ while ((oid = ReadInt(AH)))
+ {
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ReadOffset(AH, &size);
+
+ if (!_CheckBlob(AH, oid, size))
+ checkOK = false;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+ }
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ {
+ printf("could not close large object TOC file: %s\n",
+ strerror(errno));
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+
+ static bool
+ _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ int idx;
+ bool checkOK = true;
+
+ /* take out files from chkList as we see them */
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] == te->dumpId && te->section == SECTION_DATA)
+ {
+ ctx->chkList[idx] = InvalidDumpId;
+ break;
+ }
+ }
+
+ /* see comment in _tocEntryRequired() for the special case of SEQUENCE SET */
+ if (reqs & REQ_DATA && strcmp(te->desc, "BLOBS") == 0)
+ {
+ if (!_CheckBlobs(AH, te, reqs))
+ checkOK = false;
+ }
+ else if (reqs & REQ_DATA && strcmp(te->desc, "SEQUENCE SET") != 0
+ && strcmp(te->desc, "BLOB") != 0
+ && strcmp(te->desc, "COMMENT") != 0)
+ {
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+ if (!fname)
+ {
+ printf("Could not find file %s\n", tctx->filename);
+ checkOK = false;
+ }
+ else if (!_CheckFileSize(AH, fname, tctx->fileSize, false))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, ctx->idStr, false))
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _EndCheckArchive(ArchiveHandle *AH)
+ {
+ /* check left over files */
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int idx;
+ bool checkOK = true;
+
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] != InvalidDumpId)
+ {
+ printf("Unexpected file: %d"FILE_SUFFIX"\n", ctx->chkList[idx]);
+ checkOK = false;
+ }
+ }
+
+ return checkOK;
+ }
+
+
+ static void
+ createDirectory(const char *dir, const char *subdir)
+ {
+ struct stat st;
+ char dirname[MAXPGPATH];
+
+ /* the directory must not exist yet; first check whether it does */
+ if (subdir && strlen(dir) + 1 + strlen(subdir) + 1 > MAXPGPATH)
+ die_horribly(NULL, modulename, "directory name %s too long", dir);
+
+ strcpy(dirname, dir);
+
+ if (subdir)
+ {
+ strcat(dirname, "/");
+ strcat(dirname, subdir);
+ }
+
+ if (stat(dirname, &st) == 0)
+ {
+ if (S_ISDIR(st.st_mode))
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, it exists already\n", dirname);
+ else
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, a file with this name exists already\n", dirname);
+ }
+
+ /*
+ * Now we create the directory. Note that due to a race condition, the
+ * directory could have been created between our stat() and mkdir()
+ * calls.
+ */
+ if (mkdir(dirname, 0700) < 0)
+ die_horribly(NULL, modulename, "Could not create directory %s: %s",
+ dirname, strerror(errno));
+ }
+
+
+ static char *
+ prependDirectory(ArchiveHandle *AH, const char *relativeFilename)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ static char buf[MAXPGPATH];
+ char *dname;
+
+ dname = ctx->directory;
+
+ if (strlen(dname) + 1 + strlen(relativeFilename) + 1 > MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ strcpy(buf, dname);
+ strcat(buf, "/");
+ strcat(buf, relativeFilename);
+
+ return buf;
+ }
+
+ static char *
+ prependBlobsDirectory(ArchiveHandle *AH, Oid oid)
+ {
+ static char buf[MAXPGPATH];
+ char *dname;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int r;
+
+ dname = ctx->directory;
+
+ r = snprintf(buf, MAXPGPATH, "%s/blobs/%d%s",
+ dname, oid, FILE_SUFFIX);
+
+ if (r < 0 || r >= MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ return buf;
+ }
+
+ static void
+ _StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ InitCompressorState(AH, cs, COMPRESSOR_DEFLATE);
+ }
+
+
+ static void
+ _EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ FlushCompressorState(AH, cs, _WriteBuf);
+ }
+
+ static size_t
+ _DirectoryReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ Assert(cs->comprInSize >= comprInInitSize);
+
+ if (sizeHint == 0)
+ sizeHint = comprInInitSize;
+
+ *buf = cs->comprIn;
+ return _ReadBuf(AH, cs->comprIn, sizeHint);
+ }
+
+ static void
+ _WriteExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ WriteStr(AH, ctx->idStr);
+ }
+
+ static void
+ _ReadExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *str = ReadStr(AH);
+
+ if (strlen(str) != 32)
+ die_horribly(AH, modulename, "Invalid ID of the backup set (corrupted TOC file?)\n");
+
+ strcpy(ctx->idStr, str);
+ }
+
+ static char *
+ getRandomData(char *s, int len)
+ {
+ int i;
+
+ #ifdef USE_SSL
+ if (RAND_bytes((unsigned char *) s, len - 1) != 1)
+ #endif
+ for (i = 0; i < len - 1; i++)
+ /* Use a lower strength random number if OpenSSL is not available */
+ s[i] = random() % 255;
+
+ /* NUL-terminate: the caller applies strlen() to this buffer */
+ s[len - 1] = '\0';
+
+ return s;
+ }
+
+ static bool
+ isDirectory(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISDIR(st.st_mode);
+ }
+
+ static bool
+ isRegularFile(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISREG(st.st_mode);
+ }
+
diff --git a/src/bin/pg_dump/pg_backup_files.c b/src/bin/pg_dump/pg_backup_files.c
index abc93b1..825c473 100644
*** a/src/bin/pg_dump/pg_backup_files.c
--- b/src/bin/pg_dump/pg_backup_files.c
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 92,97 ****
--- 92,98 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 100,105 ****
--- 101,110 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index 006f7da..dcc13ee 100644
*** a/src/bin/pg_dump/pg_backup_tar.c
--- b/src/bin/pg_dump/pg_backup_tar.c
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 144,149 ****
--- 144,150 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 152,157 ****
--- 153,162 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 04ded33..bea97ea 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*************** static int no_security_label = 0;
*** 138,143 ****
--- 138,144 ----
static void help(const char *progname);
+ static ArchiveFormat parseArchiveFormat(const char *format);
static void expand_schema_name_patterns(SimpleStringList *patterns,
SimpleOidList *oids);
static void expand_table_name_patterns(SimpleStringList *patterns,
*************** main(int argc, char **argv)
*** 267,272 ****
--- 268,274 ----
int my_version;
int optindex;
RestoreOptions *ropt;
+ ArchiveFormat archiveFormat = archUnknown;
static int disable_triggers = 0;
static int outputNoTablespaces = 0;
*************** main(int argc, char **argv)
*** 542,575 ****
if (compressLevel == COMPRESSION_UNKNOWN)
compressLevel = Z_DEFAULT_COMPRESSION;
! /* open the output file */
! if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
! {
! /* This is used by pg_dumpall, and is not documented */
plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! }
! else if (pg_strcasecmp(format, "c") == 0 || pg_strcasecmp(format, "custom") == 0)
! g_fout = CreateArchive(filename, archCustom, compressLevel, archModeWrite);
! else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
! {
! /*
! * Dump files into the current directory; for demonstration only, not
! * documented.
! */
! g_fout = CreateArchive(filename, archFiles, compressLevel, archModeWrite);
! }
! else if (pg_strcasecmp(format, "p") == 0 || pg_strcasecmp(format, "plain") == 0)
{
! plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
}
! else if (pg_strcasecmp(format, "t") == 0 || pg_strcasecmp(format, "tar") == 0)
! g_fout = CreateArchive(filename, archTar, compressLevel, archModeWrite);
! else
{
! write_msg(NULL, "invalid output format \"%s\" specified\n", format);
! exit(1);
}
if (g_fout == NULL)
--- 544,607 ----
if (compressLevel == COMPRESSION_UNKNOWN)
compressLevel = Z_DEFAULT_COMPRESSION;
! archiveFormat = parseArchiveFormat(format);
!
! /* archiveFormat specific setup */
! if (archiveFormat == archNull || archiveFormat == archNullAppend)
plainText = 1;
!
! /*
! * If compressLevel == COMPRESSION_UNKNOWN then it has not been set to
! * some value explicitly.
! *
! * Fall back to default:
! *
! * zlib with Z_DEFAULT_COMPRESSION for those formats that support it.
! * If zlib is not available, or the format does not support compression,
! * use no compression at all.
! */
!
! if (compressLevel == COMPRESSION_UNKNOWN)
{
! #ifdef HAVE_LIBZ
! if (archiveFormat == archCustom || archiveFormat == archDirectory)
! compressLevel = Z_DEFAULT_COMPRESSION;
! else
! compressLevel = 0;
! #else
! compressLevel = 0;
! #endif
}
!
! /* open the output file */
! switch(archiveFormat)
{
! case archCustom:
! g_fout = CreateArchive(filename, archCustom, compressLevel,
! archModeWrite);
! break;
! case archDirectory:
! g_fout = CreateArchive(filename, archDirectory, compressLevel,
! archModeWrite);
! break;
! case archFiles:
! g_fout = CreateArchive(filename, archFiles, compressLevel,
! archModeWrite);
! break;
! case archNull:
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
! break;
! case archNullAppend:
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! break;
! case archTar:
! g_fout = CreateArchive(filename, archTar, compressLevel,
! archModeWrite);
! break;
!
! default:
! /* cannot happen, parseArchiveFormat() has already
! * rejected unknown formats. */
! break;
}
if (g_fout == NULL)
*************** main(int argc, char **argv)
*** 678,684 ****
*/
do_sql_command(g_conn, "BEGIN");
! do_sql_command(g_conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
/* Select the appropriate subquery to convert user IDs to names */
if (g_fout->remoteVersion >= 80100)
--- 710,716 ----
*/
do_sql_command(g_conn, "BEGIN");
! do_sql_command(g_conn, "SET TRANSACTION READ ONLY ISOLATION LEVEL SERIALIZABLE");
/* Select the appropriate subquery to convert user IDs to names */
if (g_fout->remoteVersion >= 80100)
*************** help(const char *progname)
*** 839,847 ****
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t|p output file format (custom, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level for compressed formats\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 871,879 ----
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level of libz for compressed formats\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
*************** exit_nicely(void)
*** 896,901 ****
--- 928,971 ----
exit(1);
}
+ static ArchiveFormat
+ parseArchiveFormat(const char *format)
+ {
+ ArchiveFormat archiveFormat;
+
+ if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
+ /* This is used by pg_dumpall, and is not documented */
+ archiveFormat = archNullAppend;
+ else if (pg_strcasecmp(format, "c") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
+ /*
+ * Dump files into the current directory; for demonstration only, not
+ * documented.
+ */
+ archiveFormat = archFiles;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archiveFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archiveFormat = archTar;
+ else
+ {
+ write_msg(NULL, "invalid output format \"%s\" specified\n", format);
+ exit(1);
+ }
+ return archiveFormat;
+ }
+
/*
* Find the OIDs of all schemas matching the given list of patterns,
* and append them to the given OID list.
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7885535..0f643b9 100644
*** a/src/bin/pg_dump/pg_dump.h
--- b/src/bin/pg_dump/pg_dump.h
*************** typedef struct
*** 39,44 ****
--- 39,45 ----
} CatalogId;
typedef int DumpId;
+ #define InvalidDumpId (-1)
/*
* Data structures for simple lists of OIDs and strings. The support for
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 1ddba72..3fbe264 100644
*** a/src/bin/pg_dump/pg_restore.c
--- b/src/bin/pg_dump/pg_restore.c
*************** main(int argc, char **argv)
*** 79,84 ****
--- 79,85 ----
static int skip_seclabel = 0;
struct option cmdopts[] = {
+ {"check", 0, NULL, 'k'},
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
*************** main(int argc, char **argv)
*** 144,150 ****
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:lL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
--- 145,151 ----
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:klL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
*************** main(int argc, char **argv)
*** 182,188 ****
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
!
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
--- 183,191 ----
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
! case 'k': /* check the archive */
! opts->checkArchive = 1;
! break;
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
*************** main(int argc, char **argv)
*** 352,357 ****
--- 355,365 ----
opts->format = archCustom;
break;
+ case 'd':
+ case 'D':
+ opts->format = archDirectory;
+ break;
+
case 'f':
case 'F':
opts->format = archFiles;
*************** main(int argc, char **argv)
*** 363,369 ****
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\" or \"t\"\n",
opts->formatName);
exit(1);
}
--- 371,377 ----
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\", \"d\" or \"t\"\n",
opts->formatName);
exit(1);
}
*************** main(int argc, char **argv)
*** 392,397 ****
--- 400,413 ----
if (opts->tocSummary)
PrintTOCSummary(AH, opts);
+ else if (opts->checkArchive)
+ {
+ bool checkOK;
+ checkOK = CheckArchive(AH, opts);
+ CloseArchive(AH);
+ if (!checkOK)
+ exit(1);
+ }
else
RestoreArchive(AH, opts);
*************** usage(const char *progname)
*** 418,425 ****
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 434,442 ----
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
+ printf(_(" -k check the directory archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
On 20.11.2010 06:10, Joachim Wieland wrote:
> 2010/11/19 José Arthur Benetasso Villanova <jose.arthur@gmail.com>:
>> The md5.c and kwlookup.c reuse using a link doesn't look nice either.
>> This way you need to compile twice, among other things, but I think
>> that it's temporary, right?
>
> No, it isn't. md5.c is used in the same way by e.g. libpq and there
> are other examples for links in core, check out src/bin/psql for
> example.
It seems like overkill to include md5 just for hashing the random bytes
that getRandomData() generates. And if random() doesn't produce unique
values, it's not going to get better by hashing it. How about using a
timestamp instead of the hash?
If you don't initialize random() with srandom(), BTW, it will always
return the same value.
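For illustration, a timestamp-based ID could look roughly like this (a
hypothetical sketch, not part of the patch; makeBackupId is a made-up name):

/*
 * Hypothetical sketch: build a backup-set ID from the current time and
 * the process ID instead of md5-hashing random() output. Assumes the ID
 * only needs to distinguish backup sets, not be unpredictable.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void
makeBackupId(char *buf, size_t buflen)
{
	time_t		now = time(NULL);
	struct tm  *tm = localtime(&now);

	/* e.g. "20101122-190700-12345"; a collision would require two dumps
	 * started in the same second with the same PID. */
	snprintf(buf, buflen, "%04d%02d%02d-%02d%02d%02d-%d",
			 tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
			 tm->tm_hour, tm->tm_min, tm->tm_sec, (int) getpid());
}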
But I'm not actually sure we should be preventing mix & match of files
from different dumps. It might be very useful to do just that sometimes,
like restoring a recent backup, with the contents of one table replaced
with older data. A warning would be ok, though.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> But I'm not actually sure we should be preventing mix & match of files
> from different dumps. It might be very useful to do just that sometimes,
> like restoring a recent backup, with the contents of one table replaced
> with older data. A warning would be ok, though.
+1. This mechanism seems like a solution in search of a problem.
Just lose the whole thing, and instead fix pg_dump to complain if
the target directory isn't empty. That should be sufficient to guard
against accidental mixing of different dumps, and as Heikki says
there's not a good reason to prevent intentional mixing.
regards, tom lane
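To illustrate Tom's suggestion, a minimal sketch of such an emptiness check
(a hypothetical helper, not part of the submitted patch):

/*
 * Hypothetical sketch: refuse to dump into a non-empty target directory.
 * Returns true only if the directory exists and contains no entries
 * besides "." and "..".
 */
#include <dirent.h>
#include <stdbool.h>
#include <string.h>

static bool
directoryIsEmpty(const char *dname)
{
	DIR		   *dir = opendir(dname);
	struct dirent *entry;
	bool		empty = true;

	if (!dir)
		return false;			/* does not exist or is not readable */

	while ((entry = readdir(dir)) != NULL)
	{
		if (strcmp(entry->d_name, ".") != 0 &&
			strcmp(entry->d_name, "..") != 0)
		{
			empty = false;
			break;
		}
	}
	closedir(dir);
	return empty;
}

pg_dump could then complain and exit when this returns false for an
existing target directory.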
On 22.11.2010 19:07, Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
>> But I'm not actually sure we should be preventing mix & match of files
>> from different dumps. It might be very useful to do just that sometimes,
>> like restoring a recent backup, with the contents of one table replaced
>> with older data. A warning would be ok, though.
>
> +1. This mechanism seems like a solution in search of a problem.
> Just lose the whole thing, and instead fix pg_dump to complain if
> the target directory isn't empty. That should be sufficient to guard
> against accidental mixing of different dumps, and as Heikki says
> there's not a good reason to prevent intentional mixing.
Extending that thought a bit, it would be nice if the per-file header
carried the information whether the file is compressed or not, instead
of just one such flag in the TOC. You could then also mix & match files
from compressed and non-compressed archives, as sketched below.
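For illustration, one way the per-file header could carry such a flag (a
sketch only; the compression field and its encoding are invented here, the
actual patch stores the compression setting only in the TOC):

/*
 * Hypothetical header layout: a compression byte after the block type
 * would make every data file self-describing.
 */
typedef struct
{
	int		version;		/* <maj><min><rev> packed as in ReadFileHeader() */
	int		type;			/* BLK_DATA or BLK_BLOBS */
	int		compression;	/* 0 = none, 1..9 = zlib level */
	char   *idStr;
} lclFileHeaderSketch;

/*
 * The writer would emit one extra byte per file, e.g.:
 *   (*AH->WriteBytePtr) (AH, type);
 *   (*AH->WriteBytePtr) (AH, compression);
 * and the reader would pick the matching decompressor per file instead
 * of relying on the archive-wide setting from the TOC.
 */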
Other than the md5 thing, the patch looks fine to me. There are quite many
levels of indirection; it took me a while to get my head around the call
chains like DataDumper->_WriteData->WriteDataToArchive->_WriteBuf, but I
don't have any ideas on how to improve that.
However, docs are missing, so I'm marking this as "Waiting on Author".
There's some cosmetic changes I'd like to have fixed or do myself before
committing:
* wrap long lines
* use extern in function prototypes in header files
* "inline" some functions like _StartDataCompressor, _EndDataCompressor,
_DoInflate/_DoDeflate that aren't doing anything but call some other
function.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> * wrap long lines
> * use extern in function prototypes in header files
> * "inline" some functions like _StartDataCompressor, _EndDataCompressor,
> _DoInflate/_DoDeflate that aren't doing anything but call some other
> function.
So here is a new round of patches. It turned out that the feature that
allows restoring files from a different dump with a different
compression required some changes in the compressor API. And in the
end I didn't like all the #ifdefs either and made a less #ifdef-rich
version using function pointers. The downside now is that I have
created quite a few one-line functions that Heikki doesn't like all
that much, but I assume that they are okay in this case on the grounds
that the public compressor interface is calling the private
implementation of a certain compressor.
Joachim
Attachments:
pg_dump-compression-refactor.difftext/x-patch; charset=US-ASCII; name=pg_dump-compression-refactor.diffDownload
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 0367466..efb031a 100644
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
*************** override CPPFLAGS := -I$(libpq_srcdir) $
*** 20,26 ****
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
--- 20,26 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index ...9fb4438 .
*** a/src/bin/pg_dump/compress_io.c
--- b/src/bin/pg_dump/compress_io.c
***************
*** 0 ****
--- 1,451 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.c
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "compress_io.h"
+
+ static const char *modulename = gettext_noop("compress_io");
+
+ /*
+ * Routines that are called from other parts of the code (non-static functions)
+ * are not declared here but in the header.
+ */
+
+ /* Routines that are private to this file (static functions) */
+ static void SetCompressorAlgorithm(CompressorState *cs, int compression);
+
+ /* Routines that are private to a specific compressor (static functions) */
+ #ifdef HAVE_LIBZ
+ /* Routines that support zlib compressed data I/O */
+ static void AllocateCompressorZlib(CompressorState *cs);
+ static void InitCompressorZlib(CompressorState *cs, int compression);
+ static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ bool flush, WriteFunc writeF);
+ static void ReadDataFromArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ ReadFunc readF);
+ static size_t WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ WriteFunc writeF, const void *data, size_t dLen);
+ static void FlushCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ WriteFunc writeF);
+ static void FreeCompressorZlib(CompressorState *cs);
+ #endif
+
+ /* Routines that support uncompressed data I/O */
+ static void AllocateCompressorNone(CompressorState *cs);
+ static void ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ ReadFunc readF);
+ static size_t WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ WriteFunc writeF, const void *data, size_t dLen);
+ static void FreeCompressorNone(CompressorState *cs);
+
+
+ static void
+ SetCompressorAlgorithm(CompressorState *cs, int compression)
+ {
+ CompressorAlgorithm alg;
+
+ if (compression == Z_DEFAULT_COMPRESSION ||
+ (compression > 0 && compression <= 9))
+ alg = COMPR_ALG_LIBZ;
+ else if (compression == COMPRESSION_NONE)
+ alg = COMPR_ALG_NONE;
+ else
+ die_horribly(NULL, modulename, "Invalid compression code: %d\n",
+ compression);
+
+ #ifndef HAVE_LIBZ
+ /*
+ * We are not built with libz support. If compression was requested
+ * for a dump, issue a warning and fall back to no compression.
+ */
+ if ((compression > 0 && compression <= 9)
+ || compression == Z_DEFAULT_COMPRESSION)
+ if (cs->action == COMPRESSOR_DEFLATE)
+ {
+ write_msg(modulename, "WARNING: requested compression not available in "
+ "this installation -- archive will be uncompressed\n");
+ compression = 0;
+ alg = COMPR_ALG_NONE;
+ }
+
+ if (alg == COMPR_ALG_LIBZ)
+ die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+
+ if (alg != cs->comprAlg)
+ {
+ /*
+ * A change of the algorithm has been requested (including
+ * initialization which is a change from COMPR_ALG_UNKNOWN to some
+ * other algorithm).
+ */
+ cs->comprAlg = alg;
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ {
+ CompressorFuncs cfs = {
+ AllocateCompressorZlib,
+ InitCompressorZlib,
+ ReadDataFromArchiveZlib,
+ WriteDataToArchiveZlib,
+ FlushCompressorZlib,
+ FreeCompressorZlib
+ };
+ cs->funcs = cfs;
+ }
+ #endif
+ break;
+ case COMPR_ALG_NONE:
+ {
+ CompressorFuncs cfs = {
+ AllocateCompressorNone,
+ NULL,
+ ReadDataFromArchiveNone,
+ WriteDataToArchiveNone,
+ NULL,
+ FreeCompressorNone
+ };
+ cs->funcs = cfs;
+ }
+ break;
+ default:
+ break;
+ }
+ cs->funcs.allocateCompressor(cs);
+ }
+
+ Assert(compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+ }
+
+ CompressorState *
+ AllocateCompressorState(CompressorAction action)
+ {
+ CompressorState *cs;
+
+ cs = (CompressorState *) malloc(sizeof(CompressorState));
+ if (cs == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ memset(cs, 0, sizeof(CompressorState));
+
+ cs->comprAlg = COMPR_ALG_UNKNOWN;
+ cs->action = action;
+
+ return cs;
+ }
+
+ /*
+ * If a compression library is in use, then start it up. This is called
+ * from StartData & StartBlob. The buffers are set up in the Init routine.
+ */
+ void
+ InitCompressorState(CompressorState *cs, int compression)
+ {
+ /*
+ * The compression is set either on the commandline when creating
+ * an archive or by ReadHead() when restoring an archive. It can also be
+ * set on a per-data item basis in the directory archive format.
+ */
+ SetCompressorAlgorithm(cs, compression);
+
+ if (cs->funcs.initCompressor)
+ cs->funcs.initCompressor(cs, compression);
+ }
+
+ /*
+ * Read compressed data from the input stream (via readF).
+ */
+ void
+ ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ cs->funcs.readDataFromArchive(AH, cs, readF);
+ }
+
+ /*
+ * Send compressed data to the output stream (via writeF).
+ */
+ size_t
+ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen)
+ {
+ return cs->funcs.writeDataToArchive(AH, cs, writeF, data, dLen);
+ }
+
+ /*
+ * Terminate compression library context and flush its buffers. If no
+ * compression library is in use then just return.
+ */
+ void
+ FlushCompressorState(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF)
+ {
+ if (cs->funcs.flushCompressor)
+ cs->funcs.flushCompressor(AH, cs, writeF);
+ }
+
+ void
+ FreeCompressorState(CompressorState *cs)
+ {
+ if (cs->funcs.freeCompressor)
+ cs->funcs.freeCompressor(cs);
+
+ free(cs);
+ }
+
+
+ #ifdef HAVE_LIBZ
+ /*
+ * Functions for zlib compressed output.
+ */
+
+ static void
+ AllocateCompressorZlib(CompressorState *cs)
+ {
+ cs->zp = (z_streamp) malloc(sizeof(z_stream));
+ if (cs->zp == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+
+ /*
+ * comprOutInitSize is the buffer size we tell zlib it can output
+ * to. We actually allocate one extra byte because some routines
+ * want to append a trailing zero byte to the zlib output. The
+ * input buffer is expansible and is always of size
+ * cs->comprInSize; comprInInitSize is just the initial default
+ * size for it.
+ */
+ cs->comprOut = (char *) realloc(cs->comprOut, comprOutInitSize + 1);
+ cs->comprIn = (char *) realloc(cs->comprIn, comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ }
+
+ static void
+ InitCompressorZlib(CompressorState *cs, int compression)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ if (cs->action == COMPRESSOR_DEFLATE)
+ if (deflateInit(zp, compression) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+ if (cs->action == COMPRESSOR_INFLATE)
+ if (inflateInit(zp) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+
+ /* Just be paranoid - maybe End is called after Start, with no Write */
+ zp->next_out = (void *) cs->comprOut;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ static void
+ FlushCompressorZlib(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+
+ DeflateCompressorZlib(AH, cs, true, writeF);
+
+ if (deflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression stream: %s\n", zp->msg);
+ }
+
+ static void
+ DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ bool flush, WriteFunc writeF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+
+ while (cs->zp->avail_in != 0 || flush)
+ {
+ res = deflate(zp, flush ? Z_FINISH : Z_NO_FLUSH);
+ if (res == Z_STREAM_ERROR)
+ die_horribly(AH, modulename,
+ "could not compress data: %s\n", zp->msg);
+ if ((flush && (zp->avail_out < comprOutInitSize))
+ || (zp->avail_out == 0)
+ || (zp->avail_in != 0)
+ )
+ {
+ /*
+ * Extra paranoia: avoid zero-length chunks, since a zero length
+ * chunk is the EOF marker in the custom format. This should never
+ * happen but...
+ */
+ if (zp->avail_out < comprOutInitSize)
+ {
+ /*
+ * Any write function should do its own error checking but
+ * to make sure we do a check here as well...
+ */
+ size_t len = comprOutInitSize - zp->avail_out;
+ if (writeF(AH, out, len) != len)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ }
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ if (res == Z_STREAM_END)
+ break;
+ }
+ }
+
+ static void
+ ReadDataFromArchiveZlib(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+ size_t cnt;
+ void *in;
+
+ /* no minimal chunk size for zlib */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ zp->next_in = (void *) in;
+ zp->avail_in = cnt;
+
+ while (zp->avail_in > 0)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+ }
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+ while (res != Z_STREAM_END)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+
+ if (inflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression library: %s\n", zp->msg);
+ }
+
+ static size_t
+ WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen)
+ {
+ cs->zp->next_in = (void *) data;
+ cs->zp->avail_in = dLen;
+ DeflateCompressorZlib(AH, cs, false, writeF);
+ /* we have either succeeded in writing dLen bytes or we have called
+ * die_horribly() */
+ return dLen;
+ }
+
+ static void
+ FreeCompressorZlib(CompressorState *cs)
+ {
+ free(cs->comprOut);
+ free(cs->comprIn);
+
+ free(cs->zp);
+ }
+ #endif /* HAVE_LIBZ */
+
+
+ /*
+ * Functions for uncompressed output.
+ */
+ static void
+ AllocateCompressorNone(CompressorState *cs)
+ {
+ cs->comprOut = (char *) realloc(cs->comprOut, comprOutInitSize + 1);
+ cs->comprIn = (char *) realloc(cs->comprIn, comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ }
+
+ static void
+ ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs, ReadFunc readF)
+ {
+ size_t cnt;
+ void *in;
+
+ /* no minimum chunk size for uncompressed data */
+ while ((cnt = readF(AH, &in, 0)))
+ {
+ ahwrite(in, 1, cnt, AH);
+ }
+ }
+
+ static size_t
+ WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen)
+ {
+ /*
+ * Any write function should do its own error checking, but to be
+ * safe we do a check here as well...
+ */
+ if (writeF(AH, data, dLen) != dLen)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ return dLen;
+ }
+
+ static void
+ FreeCompressorNone(CompressorState *cs)
+ {
+ free(cs->comprOut);
+ free(cs->comprIn);
+ }
+
+
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index ...6574cfa .
*** a/src/bin/pg_dump/compress_io.h
--- b/src/bin/pg_dump/compress_io.h
***************
*** 0 ****
--- 1,99 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.h
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * pg_dump will read the system catalogs in a database and dump out a
+ * script that reproduces the schema in terms of SQL that is understood
+ * by PostgreSQL
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "pg_backup_archiver.h"
+
+ #define comprOutInitSize 65536
+ #define comprInInitSize 65536
+
+ struct _CompressorState;
+
+ typedef enum
+ {
+ COMPRESSOR_INFLATE,
+ COMPRESSOR_DEFLATE
+ } CompressorAction;
+
+ typedef enum
+ {
+ COMPR_ALG_UNKNOWN,
+ COMPR_ALG_NONE,
+ COMPR_ALG_LIBZ
+ } CompressorAlgorithm;
+
+ typedef size_t (*WriteFunc)(ArchiveHandle *AH, const void *buf, size_t len);
+ /*
+ * The sizeHint parameter tells the format how much input the algorithm
+ * would like to get next. If the format doesn't know better, it should
+ * send back that many bytes of input. If the format is written in
+ * blocks, however, then it already knows the block size and can deliver
+ * exactly the size of the next block.
+ *
+ * The custom archive is written in such blocks, whereas the directory
+ * archive is just a continuous stream of data. Compression algorithms
+ * other than libz deal with blocks at the algorithm level, and there the
+ * algorithm can tell the format how much data it is ready to consume
+ * next.
+ */
+ typedef size_t (*ReadFunc)(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ typedef void (*AllocateCompressorPtr)(struct _CompressorState *cs);
+ typedef void (*InitCompressorPtr)(struct _CompressorState *cs, int compression);
+ typedef void (*ReadDataFromArchivePtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs, ReadFunc readF);
+ typedef size_t (*WriteDataToArchivePtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs, WriteFunc writeF,
+ const void *data, size_t dLen);
+ typedef void (*FlushCompressorPtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs, WriteFunc writeF);
+ typedef void (*FreeCompressorPtr)(struct _CompressorState *cs);
+
+ typedef struct
+ {
+ AllocateCompressorPtr allocateCompressor;
+ InitCompressorPtr initCompressor;
+ ReadDataFromArchivePtr readDataFromArchive;
+ WriteDataToArchivePtr writeDataToArchive;
+ FlushCompressorPtr flushCompressor;
+ FreeCompressorPtr freeCompressor;
+ } CompressorFuncs;
+
+ typedef struct _CompressorState
+ {
+ CompressorAlgorithm comprAlg;
+ #ifdef HAVE_LIBZ
+ z_streamp zp;
+ #endif
+ char *comprOut;
+ char *comprIn;
+ size_t comprInSize;
+ size_t comprOutSize;
+ CompressorAction action;
+ CompressorFuncs funcs;
+ } CompressorState;
+
+ extern CompressorState *AllocateCompressorState(CompressorAction action);
+ extern void InitCompressorState(CompressorState *cs, int compression);
+ extern void ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs,
+ ReadFunc readF);
+ extern size_t WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
+ WriteFunc writeF, const void *data, size_t dLen);
+ extern void FlushCompressorState(ArchiveHandle *AH, CompressorState *cs,
+ WriteFunc writeF);
+ extern void FreeCompressorState(CompressorState *cs);
+
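
To illustrate how a format is expected to drive this API, here is a
minimal sketch (illustration only, not part of the patch; _MyWriteFunc
stands in for a format's WriteFunc):

    /*
     * Sketch: compress one object's data through the compress_io API.
     * Errors are handled inside the API via die_horribly().
     */
    static void
    ExampleDumpObject(ArchiveHandle *AH, const void *data, size_t dLen)
    {
        CompressorState *cs = AllocateCompressorState(COMPRESSOR_DEFLATE);

        /* choose algorithm/level from AH->compression */
        InitCompressorState(cs, AH->compression);

        /* may be called repeatedly as the dumper produces data */
        WriteDataToArchive(AH, cs, _MyWriteFunc, data, dLen);

        /* flush whatever the algorithm still has buffered */
        FlushCompressorState(AH, cs, _MyWriteFunc);

        FreeCompressorState(cs);
    }

For reading, the format supplies a ReadFunc; a format without block
framing would typically just honor sizeHint, along these lines
(hypothetical sketch, myFormatContext is a made-up name; compare
_CustomReadFunction below, which ignores the hint because the custom
format knows its exact block sizes):

    static size_t
    _MyStreamReadFunc(ArchiveHandle *AH, void **buf, size_t sizeHint)
    {
        /* assume the format keeps its CompressorState in its
         * AH->formatData struct, as the custom and directory formats do */
        CompressorState *cs = ((myFormatContext *) AH->formatData)->cs;
        size_t          want;
        size_t          cnt;

        /* honor the hint if there is one, otherwise fill the buffer */
        want = (sizeHint != 0 && sizeHint <= cs->comprInSize)
               ? sizeHint : cs->comprInSize;
        cnt = fread(cs->comprIn, 1, want, AH->FH);

        *buf = cs->comprIn;
        return cnt;               /* returning 0 ends the read loop */
    }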
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index d1a9c54..a28956b 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 22,27 ****
--- 22,28 ----
#include "pg_backup_db.h"
#include "dumputils.h"
+ #include "compress_io.h"
#include <ctype.h>
#include <unistd.h>
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index ae0c6e0..705b2e4 100644
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
***************
*** 49,54 ****
--- 49,55 ----
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
+ /* this is just the redefinition of a libz constant */
#define Z_DEFAULT_COMPRESSION (-1)
typedef struct _z_stream
*************** typedef struct _z_stream
*** 61,66 ****
--- 62,76 ----
typedef z_stream *z_streamp;
#endif
+ /* XXX eventually this should be an enum. However, if we want something
+ * pluggable in the long run, it could be hard for plugins to add values
+ * to a central enum... */
+ #define COMPRESSION_UNKNOWN (-2)
+ #define COMPRESSION_NONE 0
+
+ /* XXX should we change the archive version for pg_dump with directory support?
+ * XXX We are not actually modifying the existing formats, but on the other hand
+ * XXX a file could now be compressed with liblzf. */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
*************** typedef struct _archiveHandle
*** 267,272 ****
--- 277,288 ----
struct _tocEntry *currToc; /* Used when dumping data */
int compression; /* Compression requested on open */
+ /* Possible values for compression:
+ -2 COMPRESSION_UNKNOWN
+ -1 Z_DEFAULT_COMPRESSION
+ 0 COMPRESSION_NONE
+ 1-9 levels for gzip compression
+ */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
*************** int ahprintf(ArchiveHandle *AH, const
*** 381,384 ****
--- 397,411 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ #ifdef USE_ASSERT_CHECKING
+ #define Assert(condition) \
+ if (!(condition)) \
+ { \
+ write_msg(NULL, "Failed assertion in %s, line %d\n", \
+ __FILE__, __LINE__); \
+ abort();\
+ }
+ #else
+ #define Assert(condition)
+ #endif
#endif
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index 2bc7e8f..f3f41b5 100644
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
***************
*** 25,30 ****
--- 25,31 ----
*/
#include "pg_backup_archiver.h"
+ #include "compress_io.h"
/*--------
* Routines in the format interface
*************** static void _LoadBlobs(ArchiveHandle *AH
*** 58,77 ****
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! /*------------
! * Buffers used in zlib compression and extra data stored in archive and
! * in TOC entries.
! *------------
! */
! #define zlibOutSize 4096
! #define zlibInSize 4096
typedef struct
{
! z_streamp zp;
! char *zlibOut;
! char *zlibIn;
! size_t inSize;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
--- 59,70 ----
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! static size_t _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len);
! static size_t _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint);
typedef struct
{
! CompressorState *cs;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
*************** typedef struct
*** 89,98 ****
*------
*/
static void _readBlockHeader(ArchiveHandle *AH, int *type, int *id);
- static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
- static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
static pgoff_t _getFilePos(ArchiveHandle *AH, lclContext *ctx);
- static int _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush);
static const char *modulename = gettext_noop("custom archiver");
--- 82,88 ----
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 144,179 ****
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
- ctx->zp = (z_streamp) malloc(sizeof(z_stream));
- if (ctx->zp == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
- /*
- * zlibOutSize is the buffer size we tell zlib it can output to. We
- * actually allocate one extra byte because some routines want to append a
- * trailing zero byte to the zlib output. The input buffer is expansible
- * and is always of size ctx->inSize; zlibInSize is just the initial
- * default size for it.
- */
- ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
- ctx->zlibIn = (char *) malloc(zlibInSize);
- ctx->inSize = zlibInSize;
ctx->filePos = 0;
- if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
--- 134,154 ----
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
ctx->filePos = 0;
/*
* Now open the file
*/
if (AH->mode == archModeWrite)
{
+ ctx->cs = AllocateCompressorState(COMPRESSOR_DEFLATE);
+
if (AH->fSpec && strcmp(AH->fSpec, "") != 0)
{
AH->FH = fopen(AH->fSpec, PG_BINARY_W);
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 211,216 ****
--- 186,193 ----
ctx->hasSeek = checkSeek(AH->FH);
ReadHead(AH);
+ ctx->cs = AllocateCompressorState(COMPRESSOR_INFLATE);
+
ReadToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
}
*************** _StartData(ArchiveHandle *AH, TocEntry *
*** 324,330 ****
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! _StartDataCompressor(AH, te);
}
/*
--- 301,307 ----
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! InitCompressorState(ctx->cs, AH->compression);
}
/*
*************** static size_t
*** 340,356 ****
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! zp->next_in = (void *) data;
! zp->avail_in = dLen;
! while (zp->avail_in != 0)
! {
! /* printf("Deflating %lu bytes\n", (unsigned long) dLen); */
! _DoDeflate(AH, ctx, 0);
! }
! return dLen;
}
/*
--- 317,325 ----
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! return WriteDataToArchive(AH, cs, _CustomWriteFunc, data, dLen);
}
/*
*************** _WriteData(ArchiveHandle *AH, const void
*** 363,372 ****
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! /* lclContext *ctx = (lclContext *) AH->formatData; */
! /* lclTocEntry *tctx = (lclTocEntry *) te->formatData; */
! _EndDataCompressor(AH, te);
}
/*
--- 332,342 ----
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! FlushCompressorState(AH, ctx->cs, _CustomWriteFunc);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
*************** _StartBlobs(ArchiveHandle *AH, TocEntry
*** 401,411 ****
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
! _StartDataCompressor(AH, te);
}
/*
--- 371,383 ----
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
+ lclContext *ctx = (lclContext *) AH->formatData;
+
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
! InitCompressorState(ctx->cs, AH->compression);
}
/*
*************** _StartBlob(ArchiveHandle *AH, TocEntry *
*** 416,422 ****
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! _EndDataCompressor(AH, te);
}
/*
--- 388,398 ----
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! lclContext *ctx = (lclContext *) AH->formatData;
!
! FlushCompressorState(AH, ctx->cs, _CustomWriteFunc);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
*************** static void
*** 533,639 ****
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- z_streamp zp = ctx->zp;
- size_t blkLen;
- char *in = ctx->zlibIn;
- size_t cnt;
-
- #ifdef HAVE_LIBZ
- int res;
- char *out = ctx->zlibOut;
- #endif
-
- #ifdef HAVE_LIBZ
-
- res = Z_OK;
-
- if (AH->compression != 0)
- {
- zp->zalloc = Z_NULL;
- zp->zfree = Z_NULL;
- zp->opaque = Z_NULL;
-
- if (inflateInit(zp) != Z_OK)
- die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
- }
- #endif
-
- blkLen = ReadInt(AH);
- while (blkLen != 0)
- {
- if (blkLen + 1 > ctx->inSize)
- {
- free(ctx->zlibIn);
- ctx->zlibIn = NULL;
- ctx->zlibIn = (char *) malloc(blkLen + 1);
- if (!ctx->zlibIn)
- die_horribly(AH, modulename, "out of memory\n");
-
- ctx->inSize = blkLen + 1;
- in = ctx->zlibIn;
- }
-
- cnt = fread(in, 1, blkLen, AH->FH);
- if (cnt != blkLen)
- {
- if (feof(AH->FH))
- die_horribly(AH, modulename,
- "could not read from input file: end of file\n");
- else
- die_horribly(AH, modulename,
- "could not read from input file: %s\n", strerror(errno));
- }
! ctx->filePos += blkLen;
!
! zp->next_in = (void *) in;
! zp->avail_in = blkLen;
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! while (zp->avail_in != 0)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! }
! else
! #endif
! {
! in[zp->avail_in] = '\0';
! ahwrite(in, 1, zp->avail_in, AH);
! zp->avail_in = 0;
! }
! blkLen = ReadInt(AH);
! }
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
! while (res != Z_STREAM_END)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! if (inflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
! }
! #endif
}
static void
--- 509,517 ----
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! InitCompressorState(ctx->cs, AH->compression);
! ReadDataFromArchive(AH, ctx->cs, _CustomReadFunction);
}
static void
*************** static void
*** 683,701 ****
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *in = ctx->zlibIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > ctx->inSize)
{
! free(ctx->zlibIn);
! ctx->zlibIn = (char *) malloc(blkLen);
! ctx->inSize = blkLen;
! in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
--- 561,580 ----
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
size_t blkLen;
! char *in = cs->comprIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = (char *) malloc(blkLen);
! cs->comprInSize = blkLen;
! in = cs->comprIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
*************** _readBlockHeader(ArchiveHandle *AH, int
*** 960,1105 ****
*id = ReadInt(AH);
}
! /*
! * If zlib is available, then startit up. This is called from
! * StartData & StartBlob. The buffers are setup in the Init routine.
! */
! static void
! _StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! #ifdef HAVE_LIBZ
!
! if (AH->compression < 0 || AH->compression > 9)
! AH->compression = Z_DEFAULT_COMPRESSION;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (deflateInit(zp, AH->compression) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #else
! AH->compression = 0;
! #endif
! /* Just be paranoid - maybe End is called after Start, with no Write */
! zp->next_out = (void *) ctx->zlibOut;
! zp->avail_out = zlibOutSize;
}
! /*
! * Send compressed data to the output stream (via ahwrite).
! * Each data chunk is preceded by it's length.
! * In the case of Z0, or no zlib, just write the raw data.
! *
! */
! static int
! _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush)
{
! z_streamp zp = ctx->zp;
! #ifdef HAVE_LIBZ
! char *out = ctx->zlibOut;
! int res = Z_OK;
! if (AH->compression != 0)
{
! res = deflate(zp, flush);
! if (res == Z_STREAM_ERROR)
! die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
! if (((flush == Z_FINISH) && (zp->avail_out < zlibOutSize))
! || (zp->avail_out == 0)
! || (zp->avail_in != 0)
! )
! {
! /*
! * Extra paranoia: avoid zero-length chunks since a zero length
! * chunk is the EOF marker. This should never happen but...
! */
! if (zp->avail_out < zlibOutSize)
! {
! /*
! * printf("Wrote %lu byte deflated chunk\n", (unsigned long)
! * (zlibOutSize - zp->avail_out));
! */
! WriteInt(AH, zlibOutSize - zp->avail_out);
! if (fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH) != (zlibOutSize - zp->avail_out))
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zlibOutSize - zp->avail_out;
! }
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! }
}
! else
! #endif
{
! if (zp->avail_in > 0)
! {
! WriteInt(AH, zp->avail_in);
! if (fwrite(zp->next_in, 1, zp->avail_in, AH->FH) != zp->avail_in)
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zp->avail_in;
! zp->avail_in = 0;
! }
else
! {
! #ifdef HAVE_LIBZ
! if (flush == Z_FINISH)
! res = Z_STREAM_END;
! #endif
! }
! }
!
! #ifdef HAVE_LIBZ
! return res;
! #else
! return 1;
! #endif
! }
!
! /*
! * Terminate zlib context and flush it's buffers. If no zlib
! * then just return.
! */
! static void
! _EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
! {
!
! #ifdef HAVE_LIBZ
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! int res;
!
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
!
! do
! {
! /* printf("Ending data output\n"); */
! res = _DoDeflate(AH, ctx, Z_FINISH);
! } while (res != Z_STREAM_END);
!
! if (deflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
}
! #endif
!
! /* Send the end marker */
! WriteInt(AH, 0);
}
-
/*
* Clone format-specific fields during parallel restoration.
*/
--- 839,899 ----
*id = ReadInt(AH);
}
! static size_t
! _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len)
{
! Assert(len != 0);
! /* never write 0-byte blocks (this should not happen) */
! if (len == 0)
! return 0;
! WriteInt(AH, len);
! return _WriteBuf(AH, buf, len);
}
! static size_t
! _CustomReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! size_t blkLen;
! size_t cnt;
! /*
! * We deliberately ignore the sizeHint parameter because we know
! * the exact size of the next compressed block (=blkLen).
! */
! blkLen = ReadInt(AH);
!
! if (blkLen == 0)
! return 0;
!
! if (blkLen + 1 > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = NULL;
! cs->comprIn = (char *) malloc(blkLen + 1);
! if (!cs->comprIn)
! die_horribly(AH, modulename, "out of memory\n");
! cs->comprInSize = blkLen + 1;
}
! cnt = _ReadBuf(AH, cs->comprIn, blkLen);
! if (cnt != blkLen)
{
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
}
! *buf = cs->comprIn;
! return cnt;
}
/*
* Clone format-specific fields during parallel restoration.
*/
*************** static void
*** 1107,1112 ****
--- 901,907 ----
_Clone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorAction action = ctx->cs->action;
AH->formatData = (lclContext *) malloc(sizeof(lclContext));
if (AH->formatData == NULL)
*************** _Clone(ArchiveHandle *AH)
*** 1114,1125 ****
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->zp = (z_streamp) malloc(sizeof(z_stream));
! ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
! ctx->zlibIn = (char *) malloc(ctx->inSize);
!
! if (ctx->zp == NULL || ctx->zlibOut == NULL || ctx->zlibIn == NULL)
! die_horribly(AH, modulename, "out of memory\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
--- 909,915 ----
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->cs = AllocateCompressorState(action);
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
*************** static void
*** 1133,1141 ****
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- free(ctx->zlibOut);
- free(ctx->zlibIn);
- free(ctx->zp);
free(ctx);
}
--- 923,932 ----
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ FreeCompressorState(cs);
free(ctx);
}
+
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 66274b4..8a71b99 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***************
*** 56,61 ****
--- 56,62 ----
#include "pg_backup_archiver.h"
#include "dumputils.h"
+ #include "compress_io.h"
extern char *optarg;
extern int optind,
*************** main(int argc, char **argv)
*** 255,261 ****
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = -1;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
--- 256,262 ----
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = COMPRESSION_UNKNOWN;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
*************** main(int argc, char **argv)
*** 535,540 ****
--- 536,547 ----
exit(1);
}
+ /* Actually we are using a zlib constant here, but formats that don't
+ * support compression won't care, and if we are not compiled with zlib
+ * we will be forced to no compression anyway. */
+ if (compressLevel == COMPRESSION_UNKNOWN)
+ compressLevel = Z_DEFAULT_COMPRESSION;
+
/* open the output file */
if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
{
*************** dumpBlobs(Archive *AH, void *arg)
*** 2174,2180 ****
exit_nicely();
}
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
--- 2181,2189 ----
exit_nicely();
}
! /* we try to avoid writing empty chunks */
! if (cnt > 0)
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
pg_dump-directory.diff text/x-patch; charset=US-ASCII; name=pg_dump-directory.diff
diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 8242b53..2278147 100644
*** a/doc/src/sgml/ref/pg_dump.sgml
--- b/doc/src/sgml/ref/pg_dump.sgml
*************** PostgreSQL documentation
*** 194,201 ****
<term><option>--file=<replaceable class="parameter">file</replaceable></option></term>
<listitem>
<para>
! Send output to the specified file. If this is omitted, the
! standard output is used.
</para>
</listitem>
</varlistentry>
--- 194,203 ----
<term><option>--file=<replaceable class="parameter">file</replaceable></option></term>
<listitem>
<para>
! Send output to the specified file. If this is omitted for the
! file-based output formats, the standard output is used. This parameter
! must be given for the directory output format, where it specifies the
! target directory instead of a file.
</para>
</listitem>
</varlistentry>
*************** PostgreSQL documentation
*** 226,234 ****
<para>
Output a custom-format archive suitable for input into
<application>pg_restore</application>.
! This is the most flexible output format in that it allows manual
! selection and reordering of archived items during restore.
! This format is also compressed by default.
</para>
</listitem>
</varlistentry>
--- 228,251 ----
<para>
Output a custom-format archive suitable for input into
<application>pg_restore</application>.
! Together with the directory output format, this is the most flexible
! output format in that it allows manual selection and reordering of
! archived items during restore. This format is also compressed by
! default.
! </para>
! </listitem>
! </varlistentry>
!
! <varlistentry>
! <term><literal>d</></term>
! <term><literal>directory</></term>
! <listitem>
! <para>
! Output a directory-format archive suitable for input into
! <application>pg_restore</application>. This will create a directory
! instead of a file and this directory will contain one file for each
! table and BLOB of the database that is being dumped. This format is
! compressed by default.
</para>
</listitem>
</varlistentry>
*************** PostgreSQL documentation
*** 267,272 ****
--- 284,300 ----
</varlistentry>
<varlistentry>
+ <term><option>-k</></term>
+ <listitem>
+ <para>
+ Check a directory archive. This walks through all files of a
+ directory-format archive and verifies that all required files exist
+ and that they belong to the same dump.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-n <replaceable class="parameter">schema</replaceable></option></term>
<term><option>--schema=<replaceable class="parameter">schema</replaceable></option></term>
<listitem>
*************** CREATE DATABASE foo WITH TEMPLATE templa
*** 937,942 ****
--- 965,978 ----
</para>
<para>
+ To dump a database into a directory-format archive:
+
+ <screen>
+ <prompt>$</prompt> <userinput>pg_dump -Fd mydb -f dumpdir</userinput>
+ </screen>
+ </para>
+
+ <para>
To reload an archive file into a (freshly created) database named
<literal>newdb</>:
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index efb031a..9a1c025 100644
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
*************** override CPPFLAGS := -I$(libpq_srcdir) $
*** 20,26 ****
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
--- 20,26 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o pg_backup_directory.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 8fa9a57..08fb910 100644
*** a/src/bin/pg_dump/pg_backup.h
--- b/src/bin/pg_dump/pg_backup.h
*************** typedef enum _archiveFormat
*** 48,56 ****
{
archUnknown = 0,
archCustom = 1,
! archFiles = 2,
! archTar = 3,
! archNull = 4
} ArchiveFormat;
typedef enum _archiveMode
--- 48,58 ----
{
archUnknown = 0,
archCustom = 1,
! archDirectory = 2,
! archFiles = 3,
! archTar = 4,
! archNull = 5,
! archNullAppend = 6
} ArchiveFormat;
typedef enum _archiveMode
*************** typedef struct _restoreOptions
*** 112,117 ****
--- 114,120 ----
int schemaOnly;
int verbose;
int aclsSkip;
+ int checkArchive;
int tocSummary;
char *tocFile;
int format;
*************** extern Archive *CreateArchive(const char
*** 195,200 ****
--- 198,206 ----
/* The --list option */
extern void PrintTOCSummary(Archive *AH, RestoreOptions *ropt);
+ /* Check an existing archive */
+ extern bool CheckArchive(Archive *AH, RestoreOptions *ropt);
+
extern RestoreOptions *NewRestoreOptions(void);
/* Rearrange and filter TOC entries */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index a28956b..75a4b6d 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 26,31 ****
--- 26,32 ----
#include <ctype.h>
#include <unistd.h>
+ #include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
*************** static int _discoverArchiveFormat(Archiv
*** 109,114 ****
--- 110,116 ----
static void dump_lo_buf(ArchiveHandle *AH);
static void _write_msg(const char *modulename, const char *fmt, va_list ap);
static void _die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt, va_list ap);
+ static void outputSummaryHeaderText(Archive *AHX);
static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
static OutputContext SetOutput(ArchiveHandle *AH, char *filename, int compression);
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 779,818 ****
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
- char *fmtName;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
! ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
! AH->archdbname, AH->tocCount, AH->compression);
!
! switch (AH->format)
! {
! case archFiles:
! fmtName = "FILES";
! break;
! case archCustom:
! fmtName = "CUSTOM";
! break;
! case archTar:
! fmtName = "TAR";
! break;
! default:
! fmtName = "UNKNOWN";
! }
!
! ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
! ahprintf(AH, "; Format: %s\n", fmtName);
! ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
! ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
! if (AH->archiveRemoteVersion)
! ahprintf(AH, "; Dumped from database version: %s\n",
! AH->archiveRemoteVersion);
! if (AH->archiveDumpVersion)
! ahprintf(AH, "; Dumped by pg_dump version: %s\n",
! AH->archiveDumpVersion);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
--- 781,791 ----
ArchiveHandle *AH = (ArchiveHandle *) AHX;
TocEntry *te;
OutputContext sav;
if (ropt->filename)
sav = SetOutput(AH, ropt->filename, 0 /* no compression */ );
! outputSummaryHeaderText(AHX);
ahprintf(AH, ";\n;\n; Selected TOC Entries:\n;\n");
*************** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 841,846 ****
--- 814,856 ----
ResetOutput(AH, sav);
}
+ bool
+ CheckArchive(Archive *AHX, RestoreOptions *ropt)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ TocEntry *te;
+ teReqs reqs;
+ bool checkOK;
+
+ outputSummaryHeaderText(AHX);
+
+ checkOK = (*AH->StartCheckArchivePtr)(AH);
+
+ /* This is only called from the command line, so we write to stdout
+ * as usual. */
+ printf(";\n; Performing Checks...\n;\n");
+
+ for (te = AH->toc->next; te != AH->toc; te = te->next)
+ {
+ if (!(reqs = _tocEntryRequired(te, ropt, true)))
+ continue;
+
+ if (!(*AH->CheckTocEntryPtr)(AH, te, reqs))
+ checkOK = false;
+
+ /* do not dump the contents but only the errors */
+ }
+
+ if (!(*AH->EndCheckArchivePtr)(AH))
+ checkOK = false;
+
+ printf("; Check result: %s\n", checkOK ? "OK" : "FAILED");
+
+ return checkOK;
+ }
+
+
+
/***********
* BLOB Archival
***********/
*************** archprintf(Archive *AH, const char *fmt,
*** 1116,1121 ****
--- 1126,1174 ----
* Stuff below here should be 'private' to the archiver routines
*******************************/
+ static void
+ outputSummaryHeaderText(Archive *AHX)
+ {
+ ArchiveHandle *AH = (ArchiveHandle *) AHX;
+ const char *fmtName;
+
+ ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
+ ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
+ AH->archdbname, AH->tocCount, AH->compression);
+
+ switch (AH->format)
+ {
+ case archCustom:
+ fmtName = "CUSTOM";
+ break;
+ case archDirectory:
+ fmtName = "DIRECTORY";
+ break;
+ case archFiles:
+ fmtName = "FILES";
+ break;
+ case archTar:
+ fmtName = "TAR";
+ break;
+ default:
+ fmtName = "UNKNOWN";
+ }
+
+ ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
+ ahprintf(AH, "; Format: %s\n", fmtName);
+ ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
+ ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
+ if (AH->archiveRemoteVersion)
+ ahprintf(AH, "; Dumped from database version: %s\n",
+ AH->archiveRemoteVersion);
+ if (AH->archiveDumpVersion)
+ ahprintf(AH, "; Dumped by pg_dump version: %s\n",
+ AH->archiveDumpVersion);
+
+ if (AH->PrintExtraTocSummaryPtr != NULL)
+ (*AH->PrintExtraTocSummaryPtr) (AH);
+ }
+
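
For orientation, the summary header this function prints (now shared by
the --list output and the new check mode) looks roughly like this for a
directory archive (illustrative values):

    ;
    ; Archive created at Thu Dec  2 14:45:34 2010
    ; dbname: mydb
    ; TOC Entries: 11
    ; Compression: -1
    ; Dump Version: 1.12-0
    ; Format: DIRECTORY
    ; Integer: 4 bytes
    ; Offset: 8 bytes
    ; ID: 2010-12-02 14:45:34 CET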
static OutputContext
SetOutput(ArchiveHandle *AH, char *filename, int compression)
{
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1721,1726 ****
--- 1774,1781 ----
char sig[6]; /* More than enough */
size_t cnt;
int wantClose = 0;
+ char buf[MAXPGPATH];
+ struct stat st;
#if 0
write_msg(modulename, "attempting to ascertain archive format\n");
*************** _discoverArchiveFormat(ArchiveHandle *AH
*** 1737,1743 ****
if (AH->fSpec)
{
wantClose = 1;
! fh = fopen(AH->fSpec, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
--- 1792,1813 ----
if (AH->fSpec)
{
wantClose = 1;
! /*
! * Check if the specified archive is actually a directory. If so, we
! * open its TOC file instead.
! */
! buf[0] = '\0';
! if (stat(AH->fSpec, &st) == 0 && S_ISDIR(st.st_mode))
! {
! if (snprintf(buf, MAXPGPATH, "%s/%s", AH->fSpec, "TOC") >= MAXPGPATH)
! die_horribly(AH, modulename, "directory name too long: \"%s\"\n",
! AH->fSpec);
! }
!
! if (strlen(buf) == 0)
! strcpy(buf, AH->fSpec);
!
! fh = fopen(buf, PG_BINARY_R);
if (!fh)
die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
AH->fSpec, strerror(errno));
*************** _allocAH(const char *FileSpec, const Arc
*** 1950,1955 ****
--- 2020,2029 ----
InitArchiveFmt_Custom(AH);
break;
+ case archDirectory:
+ InitArchiveFmt_Directory(AH);
+ break;
+
case archFiles:
InitArchiveFmt_Files(AH);
break;
*************** WriteHead(ArchiveHandle *AH)
*** 2975,2985 ****
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
#endif
WriteInt(AH, AH->compression);
--- 3049,3061 ----
(*AH->WriteBytePtr) (AH, AH->format);
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
! {
write_msg(modulename, "WARNING: requested compression not available in this "
"installation -- archive will be uncompressed\n");
! AH->compression = 0;
! }
#endif
WriteInt(AH, AH->compression);
*************** ReadHead(ArchiveHandle *AH)
*** 3063,3069 ****
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression != 0)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
--- 3139,3145 ----
AH->compression = Z_DEFAULT_COMPRESSION;
#ifndef HAVE_LIBZ
! if (AH->compression > 0 && AH->compression <= 9)
write_msg(modulename, "WARNING: archive is compressed, but this installation does not support compression -- no data will be available\n");
#endif
*************** checkSeek(FILE *fp)
*** 3128,3155 ****
return true;
}
!
! /*
! * dumpTimestamp
! */
! static void
! dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim)
{
- char buf[256];
-
/*
* We don't print the timezone on Win32, because the names are long and
* localized, which means they may contain characters in various random
* encodings; this has been seen to cause encoding errors when reading the
* dump script.
*/
! if (strftime(buf, sizeof(buf),
#ifndef WIN32
"%Y-%m-%d %H:%M:%S %Z",
#else
"%Y-%m-%d %H:%M:%S",
#endif
! localtime(&tim)) != 0)
ahprintf(AH, "-- %s %s\n\n", msg, buf);
}
--- 3204,3235 ----
return true;
}
! bool
! getTimestampString(char *buf, size_t buflen, time_t tim)
{
/*
* We don't print the timezone on Win32, because the names are long and
* localized, which means they may contain characters in various random
* encodings; this has been seen to cause encoding errors when reading the
* dump script.
*/
! return strftime(buf, buflen,
#ifndef WIN32
"%Y-%m-%d %H:%M:%S %Z",
#else
"%Y-%m-%d %H:%M:%S",
#endif
! localtime(&tim)) != 0;
! }
!
! /*
! * dumpTimestamp
! */
! static void
! dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim)
! {
! char buf[256];
! if (getTimestampString(buf, sizeof(buf), tim))
ahprintf(AH, "-- %s %s\n\n", msg, buf);
}
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 705b2e4..9795eac 100644
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
*************** typedef z_stream *z_streamp;
*** 112,117 ****
--- 112,118 ----
struct _archiveHandle;
struct _tocEntry;
struct _restoreList;
+ enum _teReqs;
typedef void (*ClosePtr) (struct _archiveHandle * AH);
typedef void (*ReopenPtr) (struct _archiveHandle * AH);
*************** typedef void (*WriteExtraTocPtr) (struct
*** 135,144 ****
--- 136,151 ----
typedef void (*ReadExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintExtraTocPtr) (struct _archiveHandle * AH, struct _tocEntry * te);
typedef void (*PrintTocDataPtr) (struct _archiveHandle * AH, struct _tocEntry * te, RestoreOptions *ropt);
+ typedef void (*PrintExtraTocSummaryPtr) (struct _archiveHandle * AH);
typedef void (*ClonePtr) (struct _archiveHandle * AH);
typedef void (*DeClonePtr) (struct _archiveHandle * AH);
+ typedef bool (*StartCheckArchivePtr)(struct _archiveHandle * AH);
+ typedef bool (*CheckTocEntryPtr)(struct _archiveHandle * AH, struct _tocEntry * te,
+ enum _teReqs reqs);
+ typedef bool (*EndCheckArchivePtr)(struct _archiveHandle * AH);
+
typedef size_t (*CustomOutPtr) (struct _archiveHandle * AH, const void *buf, size_t len);
typedef struct _outputContext
*************** typedef enum
*** 177,183 ****
STAGE_FINALIZING
} ArchiverStage;
! typedef enum
{
REQ_SCHEMA = 1,
REQ_DATA = 2,
--- 184,190 ----
STAGE_FINALIZING
} ArchiverStage;
! typedef enum _teReqs
{
REQ_SCHEMA = 1,
REQ_DATA = 2,
*************** typedef struct _archiveHandle
*** 239,244 ****
--- 246,252 ----
* archie format */
PrintExtraTocPtr PrintExtraTocPtr; /* Extra TOC info for format */
PrintTocDataPtr PrintTocDataPtr;
+ PrintExtraTocSummaryPtr PrintExtraTocSummaryPtr;
StartBlobsPtr StartBlobsPtr;
EndBlobsPtr EndBlobsPtr;
*************** typedef struct _archiveHandle
*** 248,253 ****
--- 256,265 ----
ClonePtr ClonePtr; /* Clone format-specific fields */
DeClonePtr DeClonePtr; /* Clean up cloned fields */
+ StartCheckArchivePtr StartCheckArchivePtr;
+ CheckTocEntryPtr CheckTocEntryPtr;
+ EndCheckArchivePtr EndCheckArchivePtr;
+
CustomOutPtr CustomOutPtr; /* Alternative script output routine */
/* Stuff for direct DB connection */
*************** extern void EndRestoreBlob(ArchiveHandle
*** 383,388 ****
--- 395,401 ----
extern void EndRestoreBlobs(ArchiveHandle *AH);
extern void InitArchiveFmt_Custom(ArchiveHandle *AH);
+ extern void InitArchiveFmt_Directory(ArchiveHandle *AH);
extern void InitArchiveFmt_Files(ArchiveHandle *AH);
extern void InitArchiveFmt_Null(ArchiveHandle *AH);
extern void InitArchiveFmt_Tar(ArchiveHandle *AH);
*************** int ahprintf(ArchiveHandle *AH, const
*** 397,402 ****
--- 410,417 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ extern bool getTimestampString(char *buf, size_t buflen, time_t tim);
+
#ifdef USE_ASSERT_CHECKING
#define Assert(condition) \
if (!(condition)) \
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index f3f41b5..e40a57c 100644
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 118,123 ****
--- 118,124 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Custom(ArchiveHandle *AH)
*** 126,131 ****
--- 127,136 ----
AH->ClonePtr = _Clone;
AH->DeClonePtr = _DeClone;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index ...d8c550a .
*** a/src/bin/pg_dump/pg_backup_directory.c
--- b/src/bin/pg_dump/pg_backup_directory.c
***************
*** 0 ****
--- 1,1469 ----
+ /*-------------------------------------------------------------------------
+ *
+ * pg_backup_directory.c
+ *
+ * This file is copied from the 'files' format file and dumps data into
+ * separate files in a directory.
+ *
+ * See the headers to pg_backup_files & pg_restore for more details.
+ *
+ * Copyright (c) 2000, Philip Warner
+ * Rights are granted to use this software in any way so long
+ * as this notice is not removed.
+ *
+ * The author is not responsible for loss or damages that may
+ * result from its use.
+ *
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include <dirent.h>
+ #include <sys/stat.h>
+
+ #include "compress_io.h"
+ #include "pg_backup_archiver.h"
+ #include "utils/pg_crc.h"
+
+ #define TOC_FH_ACTIVE \
+ (ctx->dataFH == NULL && ctx->blobsTocFH == NULL && AH->FH != NULL)
+ #define BLOBS_TOC_FH_ACTIVE \
+ (ctx->dataFH == NULL && ctx->blobsTocFH != NULL)
+ #define DATA_FH_ACTIVE \
+ (ctx->dataFH != NULL)
+
+ struct _lclFileHeader;
+ struct _lclContext;
+
+ static void _ArchiveEntry(ArchiveHandle *AH, TocEntry *te);
+ static void _StartData(ArchiveHandle *AH, TocEntry *te);
+ static void _EndData(ArchiveHandle *AH, TocEntry *te);
+ static size_t _WriteData(ArchiveHandle *AH, const void *data, size_t dLen);
+ static int _WriteByte(ArchiveHandle *AH, const int i);
+ static int _ReadByte(ArchiveHandle *);
+ static size_t _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len);
+ static size_t _ReadBuf(ArchiveHandle *AH, void *buf, size_t len);
+ static void _CloseArchive(ArchiveHandle *AH);
+ static void _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt);
+
+ static void _WriteExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _ReadExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraToc(ArchiveHandle *AH, TocEntry *te);
+ static void _PrintExtraTocSummary(ArchiveHandle *AH);
+
+ static void _WriteExtraHead(ArchiveHandle *AH);
+ static void _ReadExtraHead(ArchiveHandle *AH);
+
+ static void WriteFileHeader(ArchiveHandle *AH, int type);
+ static int ReadFileHeader(ArchiveHandle *AH, struct _lclFileHeader *fileHeader);
+
+ static void _StartBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid);
+ static void _EndBlobs(ArchiveHandle *AH, TocEntry *te);
+ static void _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt);
+
+ static size_t _DirectoryReadFunction(ArchiveHandle *AH, void **buf,
+ size_t sizeHint);
+
+ static void _checkOutput(ArchiveHandle *AH, bool isChecking, const char *fmt, ...);
+ static bool _StartCheckArchive(ArchiveHandle *AH);
+ static bool _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _CheckFileContents(ArchiveHandle *AH, const char *fname,
+ bool dieOnError);
+ static bool _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize,
+ bool isChecking);
+ static bool _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size);
+ static bool _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs);
+ static bool _EndCheckArchive(ArchiveHandle *AH);
+
+ static char *prependDirectory(ArchiveHandle *AH, const char *relativeFilename);
+ static char *prependBlobsDirectory(ArchiveHandle *AH, Oid oid);
+ static void createDirectory(const char *dir, const char *subdir);
+
+ static bool isDirectory(const char *fname);
+ static bool isRegularFile(const char *fname);
+
+ #define K_STD_BUF_SIZE 1024
+ #define FILE_SUFFIX ".dat"
+
+ typedef struct _lclContext
+ {
+ /*
+ * Our archive location. This is basically what the user specified as
+ * the backup file, except that here it is a directory.
+ */
+ char *directory;
+
+ /*
+ * To prevent accidentally mixing files from different backup sets, we
+ * keep an id in each of the files. If pg_restore finds that it is asked
+ * to restore files from different backup sets, it issues a warning.
+ */
+ char idStr[256];
+
+ /*
+ * In the directory archive format we have three file handles:
+ *
+ * AH->FH points to the TOC
+ * ctx->blobsTocFH points to the TOC for the BLOBs
+ * ctx->dataFH points to data files (both BLOBs and regular)
+ *
+ * Instead of specifying where each I/O operation should go (which would
+ * require separate prototypes anyway and wouldn't be that straightforward
+ * either), we rely on a hierarchy among the file descriptors.
+ *
+ * As a matter of fact we never access any of the TOCs when we are writing
+ * to a data file, only before or after that. Similarly we never access the
+ * general TOC when we have opened the TOC for BLOBs. Given these facts we
+ * can just write our I/O routines such that they access:
+ *
+ * if defined(ctx->dataFH) => access ctx->dataFH
+ * else if defined(ctx->blobsTocFH) => access ctx->blobsTocFH
+ * else => access AH->FH
+ *
+ * To make it more transparent what is going on, we use assertions like
+ *
+ * Assert(DATA_FH_ACTIVE); ...
+ *
+ */
+ FILE *dataFH; /* file position for a data file */
+ pgoff_t dataFilePos;
+ FILE *blobsTocFH; /* file position for BLOBS.TOC */
+ pgoff_t blobsTocFilePos;
+ pgoff_t tocFilePos; /* file position for AH->FH */
+
+ /* these are used for checking a directory archive */
+ DumpId *chkList;
+ int chkListSize;
+
+ CompressorState *cs;
+ } lclContext;
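
The hierarchy described in the comment above comes down to a selection
that each low-level I/O routine repeats inline (see _WriteByte and
_ReadByte below); as a sketch (hypothetical helper, the patch does not
actually factor this out):

    static FILE *
    currentStream(lclContext *ctx, ArchiveHandle *AH)
    {
        if (ctx->dataFH)            /* a data file is open */
            return ctx->dataFH;
        if (ctx->blobsTocFH)        /* the BLOB TOC is open */
            return ctx->blobsTocFH;
        return AH->FH;              /* otherwise the main TOC */
    }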
+
+ typedef struct
+ {
+ char *filename; /* filename excluding the directory (basename) */
+ pgoff_t fileSize;
+ } lclTocEntry;
+
+ typedef struct _lclFileHeader
+ {
+ int version;
+ int type; /* BLK_DATA or BLK_BLOB */
+ char *idStr;
+ int compression;
+ } lclFileHeader;
+
+ static const char *modulename = gettext_noop("directory archiver");
+
+ /*
+ * Init routine required by ALL formats. This is a global routine
+ * and should be declared in pg_backup_archiver.h
+ *
+ * Its task is to create any extra archive context (using AH->formatData),
+ * and to initialize the supported function pointers.
+ *
+ * It should also prepare whatever its input source is for reading/writing,
+ * and in the case of a read mode connection, it should load the Header & TOC.
+ */
+ void
+ InitArchiveFmt_Directory(ArchiveHandle *AH)
+ {
+ lclContext *ctx;
+
+ /* Assuming static functions, this can be copied for each format. */
+ AH->ArchiveEntryPtr = _ArchiveEntry;
+ AH->StartDataPtr = _StartData;
+ AH->WriteDataPtr = _WriteData;
+ AH->EndDataPtr = _EndData;
+ AH->WriteBytePtr = _WriteByte;
+ AH->ReadBytePtr = _ReadByte;
+ AH->WriteBufPtr = _WriteBuf;
+ AH->ReadBufPtr = _ReadBuf;
+ AH->ClosePtr = _CloseArchive;
+ AH->ReopenPtr = NULL;
+ AH->PrintTocDataPtr = _PrintTocData;
+ AH->ReadExtraTocPtr = _ReadExtraToc;
+ AH->WriteExtraTocPtr = _WriteExtraToc;
+ AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = _PrintExtraTocSummary;
+
+ AH->StartBlobsPtr = _StartBlobs;
+ AH->StartBlobPtr = _StartBlob;
+ AH->EndBlobPtr = _EndBlob;
+ AH->EndBlobsPtr = _EndBlobs;
+
+ AH->ClonePtr = NULL;
+ AH->DeClonePtr = NULL;
+
+ AH->StartCheckArchivePtr = _StartCheckArchive;
+ AH->CheckTocEntryPtr = _CheckTocEntry;
+ AH->EndCheckArchivePtr = _EndCheckArchive;
+
+ /*
+ * Set up some special context used in compressing data.
+ */
+ ctx = (lclContext *) calloc(1, sizeof(lclContext));
+ if (ctx == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+ AH->formatData = (void *) ctx;
+
+ ctx->dataFH = NULL;
+ ctx->blobsTocFH = NULL;
+ ctx->cs = NULL;
+
+ /* Initialize LO buffering */
+ AH->lo_buf_size = LOBBUFSIZE;
+ AH->lo_buf = (void *) malloc(LOBBUFSIZE);
+ if (AH->lo_buf == NULL)
+ die_horribly(AH, modulename, "out of memory\n");
+
+ /*
+ * Now open the TOC file
+ */
+
+ if (!AH->fSpec || strcmp(AH->fSpec, "") == 0)
+ die_horribly(AH, modulename, "no directory specified\n");
+
+ ctx->directory = AH->fSpec;
+
+ if (AH->mode == archModeWrite)
+ {
+ char *fname = prependDirectory(AH, "TOC");
+
+ /*
+ * Create the ID string, basically the current timestamp, which keeps
+ * us from accidentally mixing files from different backups.
+ */
+ if (!getTimestampString(ctx->idStr, sizeof(ctx->idStr), time(NULL)))
+ die_horribly(AH, modulename, "could not get timestamp\n");
+
+ /* Create the directory, errors are caught there */
+ createDirectory(ctx->directory, NULL);
+
+ ctx->cs = AllocateCompressorState(COMPRESSOR_DEFLATE);
+
+ AH->FH = fopen(fname, PG_BINARY_W);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+ }
+ else
+ { /* Read Mode */
+ char *fname;
+
+ fname = prependDirectory(AH, "TOC");
+
+ AH->FH = fopen(fname, PG_BINARY_R);
+ if (AH->FH == NULL)
+ die_horribly(AH, modulename,
+ "could not open input file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(TOC_FH_ACTIVE);
+
+ ReadHead(AH);
+ _ReadExtraHead(AH);
+ ReadToc(AH);
+
+ /*
+ * We get the compression information from the TOC, hence no need to
+ * initialize the compressor earlier. Also, remember that the TOC file
+ * is always uncompressed. Compression is only used for the data files.
+ */
+ ctx->cs = AllocateCompressorState(COMPRESSOR_INFLATE);
+
+ /* Nothing else in the file, so close it again... */
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n",
+ strerror(errno));
+ }
+ }
+
+ /*
+ * Called by the Archiver when the dumper creates a new TOC entry.
+ *
+ * Optional.
+ *
+ * Set up extra format-related TOC data.
+ */
+ static void
+ _ArchiveEntry(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx;
+ char fn[MAXPGPATH];
+
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ if (te->dataDumper)
+ {
+ sprintf(fn, "%d"FILE_SUFFIX, te->dumpId);
+ tctx->filename = strdup(fn);
+ }
+ else if (strcmp(te->desc, "BLOBS") == 0)
+ {
+ tctx->filename = strdup("BLOBS.TOC");
+ }
+ else
+ tctx->filename = NULL;
+
+ tctx->fileSize = 0;
+ te->formatData = (void *) tctx;
+ }
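
Given the naming in _ArchiveEntry, a finished dump directory looks
roughly like this (illustration; blob data file names come from
prependBlobsDirectory, which is outside this excerpt):

    dumpdir/
        TOC           main TOC, always uncompressed
        BLOBS.TOC     index of large objects, if any were dumped
        7.dat         one <dumpId>.dat file per table with data
        8.dat
        ...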
+
+ /*
+ * Called by the Archiver to save any extra format-related TOC entry
+ * data.
+ *
+ * Optional.
+ *
+ * Use the Archiver routines to write data - they are non-endian, and
+ * maintain other important file information.
+ */
+ static void
+ _WriteExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ /*
+ * A dumpable object has set tctx->filename, any other object hasn't
+ * (see _ArchiveEntry).
+ */
+ if (tctx->filename)
+ {
+ WriteStr(AH, tctx->filename);
+ WriteOffset(AH, tctx->fileSize, K_OFFSET_POS_SET);
+ }
+ else
+ WriteStr(AH, "");
+ }
+
+ /*
+ * Called by the Archiver to read any extra format-related TOC data.
+ *
+ * Optional.
+ *
+ * Needs to match the order defined in _WriteExtraToc, and should also
+ * use the Archiver input routines.
+ */
+ static void
+ _ReadExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (tctx == NULL)
+ {
+ tctx = (lclTocEntry *) calloc(1, sizeof(lclTocEntry));
+ te->formatData = (void *) tctx;
+ }
+
+ tctx->filename = ReadStr(AH);
+ if (strlen(tctx->filename) == 0)
+ {
+ free(tctx->filename);
+ tctx->filename = NULL;
+ }
+ else
+ ReadOffset(AH, &(tctx->fileSize));
+ }
+
+ /*
+ * Called by the Archiver when restoring an archive to output a comment
+ * that includes useful information about the TOC entry.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraToc(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (AH->public.verbose && tctx->filename)
+ ahprintf(AH, "-- File: %s\n", tctx->filename);
+ }
+
+ /*
+ * Called by the Archiver when listing the contents of an archive to output a
+ * comment that includes useful information about the archive.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _PrintExtraTocSummary(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ ahprintf(AH, "; ID: %s\n", ctx->idStr);
+ }
+
+
+ /*
+ * Called by the archiver when saving TABLE DATA (not schema). This routine
+ * should save whatever format-specific information is needed to read
+ * the archive back.
+ *
+ * It is called just prior to the dumper's 'DataDumper' routine being called.
+ *
+ * Optional, but strongly recommended.
+ *
+ */
+ static void
+ _StartData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_DATA);
+
+ InitCompressorState(ctx->cs, AH->compression);
+ }
+
+ static void
+ WriteFileHeader(ArchiveHandle *AH, int type)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int compression = AH->compression;
+
+ /*
+ * We always write the header uncompressed. If any compression is active,
+ * switch it off for a moment and restore it after writing the header.
+ */
+ AH->compression = 0;
+ (*AH->WriteBufPtr) (AH, "PGDMP", 5); /* Magic code */
+ (*AH->WriteBytePtr) (AH, AH->vmaj);
+ (*AH->WriteBytePtr) (AH, AH->vmin);
+ (*AH->WriteBytePtr) (AH, AH->vrev);
+
+ _WriteByte(AH, type);
+ WriteInt(AH, compression);
+ WriteStr(AH, ctx->idStr);
+
+ AH->compression = compression;
+ }
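
Spelled out, the per-file header written here is (field widths as
produced by the corresponding Write* routines):

    "PGDMP"        5 bytes, magic
    vmaj           1 byte   \
    vmin           1 byte    } archiver version
    vrev           1 byte   /
    type           1 byte, BLK_DATA or BLK_BLOBS
    compression    WriteInt()
    idStr          WriteStr(), the backup-set id shared by all files
                   of one dump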
+
+ static int
+ ReadFileHeader(ArchiveHandle *AH, lclFileHeader *fileHeader)
+ {
+ char tmpMag[7];
+ int vmaj, vmin, vrev;
+ int compression = AH->compression;
+ bool err = false;
+
+ #ifdef USE_ASSERT_CHECKING
+ lclContext *ctx = (lclContext *) AH->formatData;
+ #endif
+ Assert(ftell(ctx->dataFH ? ctx->dataFH :
+ ctx->blobsTocFH ? ctx->blobsTocFH :
+ AH->FH) == 0);
+
+ /* Read with compression switched off. See WriteFileHeader() */
+ AH->compression = 0;
+ if ((*AH->ReadBufPtr) (AH, tmpMag, 5) != 5)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+
+ vmaj = (*AH->ReadBytePtr) (AH);
+ vmin = (*AH->ReadBytePtr) (AH);
+ vrev = (*AH->ReadBytePtr) (AH);
+
+ /* Make a convenient integer <maj><min><rev>00 */
+ fileHeader->version = ((vmaj * 256 + vmin) * 256 + vrev) * 256 + 0;
+ fileHeader->type = _ReadByte(AH);
+ if (fileHeader->type != BLK_BLOBS && fileHeader->type != BLK_DATA)
+ err = true;
+ if (!err)
+ fileHeader->compression = ReadInt(AH);
+ if (!err)
+ {
+ fileHeader->idStr = ReadStr(AH);
+ if (fileHeader->idStr == NULL)
+ err = true;
+ }
+
+ /* We do not check fileHeader->idStr against ctx->idStr; this is left
+ * to the caller. */
+
+ AH->compression = compression;
+
+ return err ? -1 : 0;
+ }
+
+ /*
+ * Called by archiver when dumper calls WriteData. This routine is
+ * called for both BLOB and TABLE data; it is the responsibility of
+ * the format to manage each kind of data using StartBlob/StartData.
+ *
+ * It should only be called from within a DataDumper routine.
+ *
+ * Mandatory.
+ */
+ static size_t
+ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+
+ return WriteDataToArchive(AH, ctx->cs, _WriteBuf, data, dLen);
+ }
+
+ /*
+ * Called by the archiver when a dumper's 'DataDumper' routine has
+ * finished.
+ *
+ * Optional.
+ *
+ */
+ static void
+ _EndData(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+
+ FlushCompressorState(AH, ctx->cs, _WriteBuf);
+
+ Assert(DATA_FH_ACTIVE);
+
+ /* Close the file */
+ fclose(ctx->dataFH);
+
+ /* The file won't grow anymore; record its size. */
+ tctx->fileSize = ctx->dataFilePos;
+
+ ctx->dataFH = NULL;
+ }
+
+ /*
+ * Print data for a given file (can be a BLOB as well)
+ */
+ static void
+ _PrintFileData(ArchiveHandle *AH, char *filename, pgoff_t expectedSize,
+ RestoreOptions *ropt)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclFileHeader fileHeader;
+
+ if (!filename)
+ return;
+
+ _CheckFileContents(AH, filename, false);
+ _CheckFileSize(AH, filename, expectedSize, false);
+
+ ctx->dataFH = fopen(filename, PG_BINARY_R);
+ if (!ctx->dataFH)
+ die_horribly(AH, modulename, "could not open input file \"%s\": %s\n",
+ filename, strerror(errno));
+
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ die_horribly(AH, modulename,
+ "could not read valid file header from file \"%s\"\n",
+ filename);
+
+ Assert(DATA_FH_ACTIVE);
+
+ InitCompressorState(ctx->cs, fileHeader.compression);
+ ReadDataFromArchive(AH, ctx->cs, _DirectoryReadFunction);
+
+ ctx->dataFH = NULL;
+ }
+
+
+ /*
+ * Print data for a given TOC entry
+ */
+ static void
+ _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ if (!tctx->filename)
+ return;
+
+ if (strcmp(te->desc, "BLOBS") == 0)
+ _LoadBlobs(AH, ropt);
+ else
+ {
+ char *fname = prependDirectory(AH, tctx->filename);
+ _PrintFileData(AH, fname, tctx->fileSize, ropt);
+ }
+ }
+
+ static void
+ _LoadBlobs(ArchiveHandle *AH, RestoreOptions *ropt)
+ {
+ Oid oid;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclFileHeader fileHeader;
+ char *fname;
+
+ StartRestoreBlobs(AH);
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ ctx->blobsTocFH = fopen(fname, "rb");
+
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open large object TOC file \"%s\" for input: %s\n",
+ fname, strerror(errno));
+
+ ReadFileHeader(AH, &fileHeader);
+
+ /* We cannot test for feof() since EOF only shows up in the low-level
+ * read functions, but those would die_horribly() on a truncated file
+ * anyway. */
+ while (1)
+ {
+ char *blobFname;
+ pgoff_t blobSize;
+
+ oid = ReadInt(AH);
+ /* oid == 0 is our end marker */
+ if (oid == 0)
+ break;
+ ReadOffset(AH, &blobSize);
+
+ StartRestoreBlob(AH, oid, ropt->dropSchema);
+ blobFname = prependBlobsDirectory(AH, oid);
+ _PrintFileData(AH, blobFname, blobSize, ropt);
+ EndRestoreBlob(AH, oid);
+ }
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ die_horribly(AH, modulename, "could not close large object TOC file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ ctx->blobsTocFH = NULL;
+
+ EndRestoreBlobs(AH);
+ }
+
+
+ /*
+ * Write a byte of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to do integer & byte output to the archive.
+ * These routines are only used to read & write headers & TOC.
+ *
+ */
+ static int
+ _WriteByte(ArchiveHandle *AH, const int i)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ if (fputc(i, stream) == EOF)
+ die_horribly(AH, modulename, "could not write byte\n");
+
+ *filePos += 1;
+
+ return 1;
+ }
+
+ /*
+ * Read a byte of data from the archive.
+ *
+ * Mandatory
+ *
+ * Called by the archiver to read bytes & integers from the archive.
+ * These routines are only used to read & write headers & TOC.
+ * EOF should be treated as a fatal error.
+ */
+ static int
+ _ReadByte(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ int res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = getc(stream);
+ if (res == EOF)
+ die_horribly(AH, modulename, "unexpected end of file\n");
+
+ *filePos += 1;
+
+ return res;
+ }
+
+ /*
+ * Write a buffer of data to the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to write a block of bytes to the TOC and by the
+ * compressor to write compressed data to the data files.
+ *
+ */
+ static size_t
+ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fwrite(buf, 1, len, stream);
+ if (res != len)
+ die_horribly(AH, modulename, "could not write to output file: %s\n",
+ strerror(errno));
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Read a block of bytes from the archive.
+ *
+ * Mandatory.
+ *
+ * Called by the archiver to read a block of bytes from the archive
+ *
+ */
+ static size_t
+ _ReadBuf(ArchiveHandle *AH, void *buf, size_t len)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t *filePos = &ctx->tocFilePos;
+ size_t res;
+ FILE *stream = AH->FH;
+
+ if (ctx->dataFH)
+ {
+ stream = ctx->dataFH;
+ filePos = &ctx->dataFilePos;
+ }
+ else if (ctx->blobsTocFH)
+ {
+ stream = ctx->blobsTocFH;
+ filePos = &ctx->blobsTocFilePos;
+ }
+
+ res = fread(buf, 1, len, stream);
+
+ *filePos += res;
+
+ return res;
+ }
+
+ /*
+ * Close the archive.
+ *
+ * Mandatory.
+ *
+ * When writing the archive, this is the routine that actually starts
+ * the process of saving it to files. No data should be written prior
+ * to this point, since the user could sort the TOC after creating it.
+ *
+ * If an archive is to be written, this routine must call:
+ * WriteHead to save the archive header
+ * WriteToc to save the TOC entries
+ * WriteDataChunks to save all DATA & BLOBs.
+ *
+ */
+ static void
+ _CloseArchive(ArchiveHandle *AH)
+ {
+ if (AH->mode == archModeWrite)
+ {
+ #ifdef USE_ASSERT_CHECKING
+ lclContext *ctx = (lclContext *) AH->formatData;
+ #endif
+
+ WriteDataChunks(AH);
+
+ Assert(TOC_FH_ACTIVE);
+
+ WriteHead(AH);
+ _WriteExtraHead(AH);
+ WriteToc(AH);
+
+ if (fclose(AH->FH) != 0)
+ die_horribly(AH, modulename, "could not close TOC file: %s\n",
+ strerror(errno));
+ }
+ AH->FH = NULL;
+ }
+
+
+
+ /*
+ * BLOB support
+ */
+
+ /*
+ * Called by the archiver when starting to save all BLOB DATA (not schema).
+ * This routine should save whatever format-specific information is needed
+ * to read the BLOBs back into memory.
+ *
+ * It is called just prior to the dumper's DataDumper routine.
+ *
+ * Optional, but strongly recommended.
+ */
+ static void
+ _StartBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependDirectory(AH, "BLOBS.TOC");
+ createDirectory(ctx->directory, "blobs");
+
+ ctx->blobsTocFH = fopen(fname, "ab");
+ if (ctx->blobsTocFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ctx->blobsTocFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls StartBlob.
+ *
+ * Mandatory.
+ *
+ * Must save the passed OID for retrieval at restore-time.
+ */
+ static void
+ _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+
+ fname = prependBlobsDirectory(AH, oid);
+ ctx->dataFH = (FILE *) fopen(fname, PG_BINARY_W);
+
+ if (ctx->dataFH == NULL)
+ die_horribly(AH, modulename, "could not open output file \"%s\": %s\n",
+ fname, strerror(errno));
+
+ Assert(DATA_FH_ACTIVE);
+
+ ctx->dataFilePos = 0;
+
+ WriteFileHeader(AH, BLK_BLOBS);
+
+ InitCompressorState(ctx->cs, AH->compression);
+ }
+
+ /*
+ * Called by the archiver when the dumper calls EndBlob.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ pgoff_t save_filePos;
+
+ FlushCompressorState(AH, ctx->cs, _WriteBuf);
+
+ Assert(DATA_FH_ACTIVE);
+
+ save_filePos = ctx->dataFilePos;
+
+ /* Close the BLOB data file itself */
+ fclose(ctx->dataFH);
+ ctx->dataFH = NULL;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ /* register the BLOB data file to BLOBS.TOC */
+ WriteInt(AH, oid);
+ WriteOffset(AH, save_filePos, K_OFFSET_POS_NOT_SET);
+ }
+
+ /*
+ * Called by the archiver when finishing saving all BLOB DATA.
+ *
+ * Optional.
+ */
+ static void
+ _EndBlobs(ArchiveHandle *AH, TocEntry *te)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ WriteInt(AH, 0);
+
+ fclose(ctx->blobsTocFH);
+ ctx->blobsTocFH = NULL;
+
+ tctx->fileSize = ctx->blobsTocFilePos;
+ }
+
+ static void
+ _checkOutput(ArchiveHandle *AH, bool isChecking, const char *fmt, ...)
+ {
+ va_list ap;
+ /*
+ * isChecking tells us where our output should go. If it is set, we are
+ * running from an explicit check routine and write to stdout; otherwise
+ * we report any issues to stderr.
+ */
+
+ va_start(ap, fmt);
+ if (isChecking)
+ vprintf(fmt, ap);
+ else
+ {
+ char buf[512];
+ vsnprintf(buf, sizeof(buf), fmt, ap);
+ warn_or_die_horribly(AH, modulename, "%s", buf);
+ }
+ va_end(ap);
+ }
+
+ /*
+ * The idea for the directory check is as follows: First we list every file
+ * that we find in the directory, rejecting outright any filename that does
+ * not fit our pattern. At this stage we accept only the various TOC files
+ * and our data files.
+ *
+ * If a filename looks good (like nnnnn.dat), we save its dumpId to
+ * ctx->chkList.
+ *
+ * Other checks then walk through the TOC and for every file they make sure
+ * that the file is what it is pretending to be. Once it passes the checks we
+ * take out its entry in chkList, i.e. replace its dumpId by InvalidDumpId.
+ *
+ * At the end what is left in chkList must be files that are not referenced
+ * from the TOC.
+ */
+ static bool
+ _StartCheckArchive(ArchiveHandle *AH)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ DIR *dir;
+ char *dname = ctx->directory;
+ struct dirent *entry;
+ int idx = 0;
+ char *suffix;
+ bool tocSeen = false;
+
+ dir = opendir(dname);
+ if (!dir)
+ {
+ _checkOutput(AH, true, "Could not open directory \"%s\": %s\n",
+ dname, strerror(errno));
+ return false;
+ }
+
+ /*
+ * We avoid a linked list here: a first pass over the directory counts
+ * the entries, giving an upper bound for the size of the check list.
+ */
+ while ((entry = readdir(dir)))
+ idx++;
+
+ ctx->chkListSize = idx;
+ ctx->chkList = (DumpId *) malloc(ctx->chkListSize * sizeof(DumpId));
+
+ /* seems that Windows doesn't have a rewinddir() equivalent */
+ closedir(dir);
+ dir = opendir(dname);
+ if (!dir)
+ {
+ _checkOutput(AH, true, "Could not open directory \"%s\": %s\n",
+ dname, strerror(errno));
+ return false;
+ }
+
+ idx = 0;
+
+ for (;;)
+ {
+ errno = 0;
+ entry = readdir(dir);
+ if (!entry && errno == 0)
+ /* end of directory entries reached */
+ break;
+ if (!entry && errno)
+ {
+ /* entry is NULL here, so report the directory name instead */
+ _checkOutput(AH, true, "Error reading directory \"%s\": %s\n",
+ dname, strerror(errno));
+ checkOK = false;
+ break;
+ }
+
+ if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
+ continue;
+ if (strcmp(entry->d_name, "blobs") == 0 &&
+ isDirectory(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "BLOBS.TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ continue;
+ if (strcmp(entry->d_name, "TOC") == 0 &&
+ isRegularFile(prependDirectory(AH, entry->d_name)))
+ {
+ tocSeen = true;
+ continue;
+ }
+ /* besides the above we only expect nnnn.dat, with nnnn being our
+ * numerical dumpID */
+ if ((suffix = strstr(entry->d_name, FILE_SUFFIX)) == NULL)
+ {
+ _checkOutput(AH, true, "Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+ else
+ {
+ /* suffix now points into entry->d_name */
+ int dumpId;
+ int scBytes, scItems;
+
+ /* check if FILE_SUFFIX is really a suffix instead of just a
+ * substring. */
+ if (strlen(suffix) != strlen(FILE_SUFFIX))
+ {
+ _checkOutput(AH, true, "Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* cut off the suffix, now entry->d_name contains the null
+ * terminated dumpId, and we parse it back. */
+ *suffix = '\0';
+ scItems = sscanf(entry->d_name, "%d%n", &dumpId, &scBytes);
+ if (scItems != 1 || scBytes != strlen(entry->d_name))
+ {
+ _checkOutput(AH, true, "Unexpected file \"%s\" in directory \"%s\"\n",
+ entry->d_name, dname);
+ checkOK = false;
+ continue;
+ }
+
+ /* Still here so this entry is good. Add the dumpId to our list. */
+ ctx->chkList[idx++] = (DumpId) dumpId;
+ }
+ }
+ closedir(dir);
+
+ /* We probably counted a few entries too many; mark the unused slots. */
+ while (idx < ctx->chkListSize)
+ ctx->chkList[idx++] = InvalidDumpId;
+
+ /* also return false if we haven't seen the TOC file */
+ return checkOK && tocSeen;
+ }
+
+ static bool
+ _CheckFileSize(ArchiveHandle *AH, const char *fname, pgoff_t pgSize,
+ bool isChecking)
+ {
+ bool checkOK = true;
+ unsigned long size = (unsigned long) pgSize;
+ struct stat st;
+
+ if (!fname || fname[0] == '\0')
+ {
+ _checkOutput(AH, isChecking, "Invalid (empty) filename\n");
+ checkOK = false;
+ }
+ else if (stat(fname, &st) != 0)
+ {
+ _checkOutput(AH, isChecking, "File not found: \"%s\"\n", fname);
+ checkOK = false;
+ }
+ else if (st.st_size != (off_t) pgSize)
+ {
+ _checkOutput(AH, isChecking, "Size mismatch for file \"%s\" "
+ "(expected: %lu bytes, actual %lu bytes)\n",
+ fname, size, (unsigned long) st.st_size);
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckFileContents(ArchiveHandle *AH, const char *fname, bool isChecking)
+ {
+ bool checkOK = true;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ FILE *file;
+ lclFileHeader fileHeader;
+
+ Assert(ctx->dataFH == NULL);
+
+ /* see comment at _CheckFileSize() */
+
+ if (!fname || fname[0] == '\0')
+ {
+ _checkOutput(AH, isChecking, "Invalid (empty) filename\n");
+ return false;
+ }
+
+ if (!(file = fopen(fname, PG_BINARY_R)))
+ {
+ _checkOutput(AH, isChecking, "Could not open file \"%s\": %s\n",
+ fname, strerror(errno));
+ return false;
+ }
+
+ ctx->dataFH = file;
+ if (ReadFileHeader(AH, &fileHeader) != 0)
+ {
+ _checkOutput(AH, isChecking, "Could not read valid file header from file \"%s\"\n",
+ fname);
+ checkOK = false;
+ }
+
+ if (file)
+ fclose(file);
+ ctx->dataFH = NULL;
+
+ if (!checkOK && !isChecking)
+ {
+ /* this error is serious enough that we cannot go on */
+ if (AH->connection)
+ PQfinish(AH->connection);
+ exit(1);
+ }
+
+ /* only compare ids if we could read a valid header at all */
+ if (checkOK && strcmp(fileHeader.idStr, ctx->idStr) != 0)
+ {
+ _checkOutput(AH, isChecking, "File \"%s\" belongs to a different backup "
+ "(expected id: %s, actual id: %s)\n",
+ fname, ctx->idStr, fileHeader.idStr);
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlob(ArchiveHandle *AH, Oid oid, pgoff_t size)
+ {
+ char *fname = prependBlobsDirectory(AH, oid);
+ bool checkOK = true;
+
+ if (!_CheckFileSize(AH, fname, size, true))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, true))
+ checkOK = false;
+
+ return checkOK;
+ }
+
+ static bool
+ _CheckBlobs(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *fname;
+ bool checkOK = true;
+ lclFileHeader fileHeader;
+ pgoff_t size;
+ Oid oid;
+
+ /* check the BLOBS.TOC first */
+ fname = prependDirectory(AH, "BLOBS.TOC");
+
+ if (!fname)
+ {
+ _checkOutput(AH, true, "Could not find BLOBS.TOC. Check the archive!\n");
+ return false;
+ }
+
+ if (!_CheckFileSize(AH, fname, tctx->fileSize, true))
+ checkOK = false;
+ else if (!_CheckFileContents(AH, fname, true))
+ checkOK = false;
+
+ /* now check every single BLOB object */
+ ctx->blobsTocFH = fopen(fname, "rb");
+ if (ctx->blobsTocFH == NULL)
+ {
+ _checkOutput(AH, true, "could not open large object TOC for input: %s\n",
+ strerror(errno));
+ return false;
+ }
+ ReadFileHeader(AH, &fileHeader);
+
+ /* we cannot test for feof() since EOF only shows up in the low
+ * level read functions. But they would die_horribly() anyway. */
+ while ((oid = ReadInt(AH)))
+ {
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ ReadOffset(AH, &size);
+
+ if (!_CheckBlob(AH, oid, size))
+ checkOK = false;
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+ }
+
+ Assert(BLOBS_TOC_FH_ACTIVE);
+
+ if (fclose(ctx->blobsTocFH) != 0)
+ {
+ _checkOutput(AH, true, "could not close large object TOC file: %s\n",
+ strerror(errno));
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+
+ static bool
+ _CheckTocEntry(ArchiveHandle *AH, TocEntry *te, teReqs reqs)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ int idx;
+ bool checkOK = true;
+
+ /* take out files from chkList as we see them */
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] == te->dumpId && te->section == SECTION_DATA)
+ {
+ ctx->chkList[idx] = InvalidDumpId;
+ break;
+ }
+ }
+
+ /* see comment in _tocEntryRequired() for the special case of SEQUENCE SET */
+ if (reqs & REQ_DATA && strcmp(te->desc, "BLOBS") == 0)
+ {
+ if (!_CheckBlobs(AH, te, reqs))
+ checkOK = false;
+ }
+ else if (reqs & REQ_DATA && strcmp(te->desc, "SEQUENCE SET") != 0
+ && strcmp(te->desc, "BLOB") != 0
+ && strcmp(te->desc, "COMMENT") != 0)
+ {
+ char *fname;
+
+ fname = prependDirectory(AH, tctx->filename);
+ if (!fname)
+ {
+ printf("Could not find file %s\n", tctx->filename);
+ checkOK = false;
+ }
+ else if (!_CheckFileContents(AH, fname, true))
+ checkOK = false;
+ else if (!_CheckFileSize(AH, fname, tctx->fileSize, true))
+ checkOK = false;
+ }
+
+ return checkOK;
+ }
+
+ static bool
+ _EndCheckArchive(ArchiveHandle *AH)
+ {
+ /* check left over files */
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int idx;
+ bool checkOK = true;
+
+ for (idx = 0; idx < ctx->chkListSize; idx++)
+ {
+ if (ctx->chkList[idx] != InvalidDumpId)
+ {
+ printf("Unexpected file: %d"FILE_SUFFIX"\n", ctx->chkList[idx]);
+ checkOK = false;
+ }
+ }
+
+ return checkOK;
+ }
+
+
+ static void
+ createDirectory(const char *dir, const char *subdir)
+ {
+ struct stat st;
+ char dirname[MAXPGPATH];
+
+ /* first make sure the combined path fits into our buffer */
+ if (subdir && strlen(dir) + 1 + strlen(subdir) + 1 > MAXPGPATH)
+ die_horribly(NULL, modulename, "directory name %s too long\n", dir);
+
+ strcpy(dirname, dir);
+
+ if (subdir)
+ {
+ strcat(dirname, "/");
+ strcat(dirname, subdir);
+ }
+
+ /* the directory must not exist yet */
+ if (stat(dirname, &st) == 0)
+ {
+ if (S_ISDIR(st.st_mode))
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, it exists already\n",
+ dirname);
+ else
+ die_horribly(NULL, modulename,
+ "Cannot create directory %s, a file with this name "
+ "exists already\n", dirname);
+ }
+
+ /*
+ * Now create the directory. Note that due to a race condition, the
+ * directory could still have been created between the stat() call
+ * above and the mkdir() call here.
+ */
+ if (mkdir(dirname, 0700) < 0)
+ die_horribly(NULL, modulename, "Could not create directory %s: %s\n",
+ dirname, strerror(errno));
+ }
+
+
+ static char *
+ prependDirectory(ArchiveHandle *AH, const char *relativeFilename)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ static char buf[MAXPGPATH];
+ char *dname;
+
+ dname = ctx->directory;
+
+ if (strlen(dname) + 1 + strlen(relativeFilename) + 1 > MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ strcpy(buf, dname);
+ strcat(buf, "/");
+ strcat(buf, relativeFilename);
+
+ return buf;
+ }
+
+ static char *
+ prependBlobsDirectory(ArchiveHandle *AH, Oid oid)
+ {
+ static char buf[MAXPGPATH];
+ char *dname;
+ lclContext *ctx = (lclContext *) AH->formatData;
+ int r;
+
+ dname = ctx->directory;
+
+ r = snprintf(buf, MAXPGPATH, "%s/blobs/%d%s",
+ dname, oid, FILE_SUFFIX);
+
+ if (r < 0 || r >= MAXPGPATH)
+ die_horribly(AH, modulename, "path name too long: %s", dname);
+
+ return buf;
+ }
+
+ static size_t
+ _DirectoryReadFunction(ArchiveHandle *AH, void **buf, size_t sizeHint)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ Assert(cs->comprInSize >= comprInInitSize);
+
+ if (sizeHint == 0)
+ sizeHint = comprInInitSize;
+
+ *buf = cs->comprIn;
+ return _ReadBuf(AH, cs->comprIn, sizeHint);
+ }
+
+ static void
+ _WriteExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ WriteStr(AH, ctx->idStr);
+ }
+
+ static void
+ _ReadExtraHead(ArchiveHandle *AH)
+ {
+ lclContext *ctx = (lclContext *) AH->formatData;
+ char *str = ReadStr(AH);
+
+ strcpy(ctx->idStr, str);
+ free(str); /* ReadStr() returns a malloc'd copy */
+ }
+
+ static bool
+ isDirectory(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISDIR(st.st_mode);
+ }
+
+ static bool
+ isRegularFile(const char *fname)
+ {
+ struct stat st;
+
+ if (stat(fname, &st))
+ return false;
+
+ return S_ISREG(st.st_mode);
+ }
+
diff --git a/src/bin/pg_dump/pg_backup_files.c b/src/bin/pg_dump/pg_backup_files.c
index abc93b1..825c473 100644
*** a/src/bin/pg_dump/pg_backup_files.c
--- b/src/bin/pg_dump/pg_backup_files.c
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 92,97 ****
--- 92,98 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Files(ArchiveHandle *AH)
*** 100,105 ****
--- 101,110 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index 006f7da..dcc13ee 100644
*** a/src/bin/pg_dump/pg_backup_tar.c
--- b/src/bin/pg_dump/pg_backup_tar.c
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 144,149 ****
--- 144,150 ----
AH->ReadExtraTocPtr = _ReadExtraToc;
AH->WriteExtraTocPtr = _WriteExtraToc;
AH->PrintExtraTocPtr = _PrintExtraToc;
+ AH->PrintExtraTocSummaryPtr = NULL;
AH->StartBlobsPtr = _StartBlobs;
AH->StartBlobPtr = _StartBlob;
*************** InitArchiveFmt_Tar(ArchiveHandle *AH)
*** 152,157 ****
--- 153,162 ----
AH->ClonePtr = NULL;
AH->DeClonePtr = NULL;
+ AH->StartCheckArchivePtr = NULL;
+ AH->CheckTocEntryPtr = NULL;
+ AH->EndCheckArchivePtr = NULL;
+
/*
* Set up some special context used in compressing data.
*/
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 8a71b99..dcd77da 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*************** static int no_security_label = 0;
*** 138,143 ****
--- 138,144 ----
static void help(const char *progname);
+ static ArchiveFormat parseArchiveFormat(const char *format);
static void expand_schema_name_patterns(SimpleStringList *patterns,
SimpleOidList *oids);
static void expand_table_name_patterns(SimpleStringList *patterns,
*************** main(int argc, char **argv)
*** 267,272 ****
--- 268,274 ----
int my_version;
int optindex;
RestoreOptions *ropt;
+ ArchiveFormat archiveFormat = archUnknown;
static int disable_triggers = 0;
static int outputNoTablespaces = 0;
*************** main(int argc, char **argv)
*** 542,575 ****
if (compressLevel == COMPRESSION_UNKNOWN)
compressLevel = Z_DEFAULT_COMPRESSION;
! /* open the output file */
! if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
! {
! /* This is used by pg_dumpall, and is not documented */
plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! }
! else if (pg_strcasecmp(format, "c") == 0 || pg_strcasecmp(format, "custom") == 0)
! g_fout = CreateArchive(filename, archCustom, compressLevel, archModeWrite);
! else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
! {
! /*
! * Dump files into the current directory; for demonstration only, not
! * documented.
! */
! g_fout = CreateArchive(filename, archFiles, compressLevel, archModeWrite);
! }
! else if (pg_strcasecmp(format, "p") == 0 || pg_strcasecmp(format, "plain") == 0)
{
! plainText = 1;
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
}
! else if (pg_strcasecmp(format, "t") == 0 || pg_strcasecmp(format, "tar") == 0)
! g_fout = CreateArchive(filename, archTar, compressLevel, archModeWrite);
! else
{
! write_msg(NULL, "invalid output format \"%s\" specified\n", format);
! exit(1);
}
if (g_fout == NULL)
--- 544,607 ----
if (compressLevel == COMPRESSION_UNKNOWN)
compressLevel = Z_DEFAULT_COMPRESSION;
! archiveFormat = parseArchiveFormat(format);
!
! /* archiveFormat specific setup */
! if (archiveFormat == archNull || archiveFormat == archNullAppend)
plainText = 1;
!
! /*
! * If compressLevel is still COMPRESSION_UNKNOWN, it has not been set
! * explicitly. Fall back to the default: zlib with Z_DEFAULT_COMPRESSION
! * for those formats that support it, and no compression at all if zlib
! * is not available or the format does not support compression.
! */
!
! if (compressLevel == COMPRESSION_UNKNOWN)
{
! #ifdef HAVE_LIBZ
! if (archiveFormat == archCustom || archiveFormat == archDirectory)
! compressLevel = Z_DEFAULT_COMPRESSION;
! else
! compressLevel = 0;
! #else
! compressLevel = 0;
! #endif
}
!
! /* open the output file */
! switch(archiveFormat)
{
! case archCustom:
! g_fout = CreateArchive(filename, archCustom, compressLevel,
! archModeWrite);
! break;
! case archDirectory:
! g_fout = CreateArchive(filename, archDirectory, compressLevel,
! archModeWrite);
! break;
! case archFiles:
! g_fout = CreateArchive(filename, archFiles, compressLevel,
! archModeWrite);
! break;
! case archNull:
! g_fout = CreateArchive(filename, archNull, 0, archModeWrite);
! break;
! case archNullAppend:
! g_fout = CreateArchive(filename, archNull, 0, archModeAppend);
! break;
! case archTar:
! g_fout = CreateArchive(filename, archTar, compressLevel,
! archModeWrite);
! break;
!
! default:
! /* we never reach here, because we check in parseArchiveFormat()
! * already. */
! break;
}
if (g_fout == NULL)
*************** help(const char *progname)
*** 839,847 ****
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t|p output file format (custom, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level for compressed formats\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 871,879 ----
printf(_("\nGeneral options:\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar, plain text)\n"));
printf(_(" -v, --verbose verbose mode\n"));
! printf(_(" -Z, --compress=0-9 compression level of libz for compressed formats\n"));
printf(_(" --lock-wait-timeout=TIMEOUT fail after waiting TIMEOUT for a table lock\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
*************** exit_nicely(void)
*** 896,901 ****
--- 928,971 ----
exit(1);
}
+ static ArchiveFormat
+ parseArchiveFormat(const char *format)
+ {
+ ArchiveFormat archiveFormat;
+
+ if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
+ /* This is used by pg_dumpall, and is not documented */
+ archiveFormat = archNullAppend;
+ else if (pg_strcasecmp(format, "c") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "custom") == 0)
+ archiveFormat = archCustom;
+ else if (pg_strcasecmp(format, "d") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "directory") == 0)
+ archiveFormat = archDirectory;
+ else if (pg_strcasecmp(format, "f") == 0 || pg_strcasecmp(format, "file") == 0)
+ /*
+ * Dump files into the current directory; for demonstration only, not
+ * documented.
+ */
+ archiveFormat = archFiles;
+ else if (pg_strcasecmp(format, "p") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "plain") == 0)
+ archiveFormat = archNull;
+ else if (pg_strcasecmp(format, "t") == 0)
+ archiveFormat = archTar;
+ else if (pg_strcasecmp(format, "tar") == 0)
+ archiveFormat = archTar;
+ else
+ {
+ write_msg(NULL, "invalid output format \"%s\" specified\n", format);
+ exit(1);
+ }
+ return archiveFormat;
+ }
+
/*
* Find the OIDs of all schemas matching the given list of patterns,
* and append them to the given OID list.
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 7885535..0f643b9 100644
*** a/src/bin/pg_dump/pg_dump.h
--- b/src/bin/pg_dump/pg_dump.h
*************** typedef struct
*** 39,44 ****
--- 39,45 ----
} CatalogId;
typedef int DumpId;
+ #define InvalidDumpId (-1)
/*
* Data structures for simple lists of OIDs and strings. The support for
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 1ddba72..3fbe264 100644
*** a/src/bin/pg_dump/pg_restore.c
--- b/src/bin/pg_dump/pg_restore.c
*************** main(int argc, char **argv)
*** 79,84 ****
--- 79,85 ----
static int skip_seclabel = 0;
struct option cmdopts[] = {
+ {"check", 0, NULL, 'k'},
{"clean", 0, NULL, 'c'},
{"create", 0, NULL, 'C'},
{"data-only", 0, NULL, 'a'},
*************** main(int argc, char **argv)
*** 144,150 ****
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:lL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
--- 145,151 ----
}
}
! while ((c = getopt_long(argc, argv, "acCd:ef:F:h:iI:j:klL:n:Op:P:RsS:t:T:U:vwWxX:1",
cmdopts, NULL)) != -1)
{
switch (c)
*************** main(int argc, char **argv)
*** 182,188 ****
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
!
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
--- 183,191 ----
case 'j': /* number of restore jobs */
opts->number_of_jobs = atoi(optarg);
break;
! case 'k': /* check the archive */
! opts->checkArchive = 1;
! break;
case 'l': /* Dump the TOC summary */
opts->tocSummary = 1;
break;
*************** main(int argc, char **argv)
*** 352,357 ****
--- 355,365 ----
opts->format = archCustom;
break;
+ case 'd':
+ case 'D':
+ opts->format = archDirectory;
+ break;
+
case 'f':
case 'F':
opts->format = archFiles;
*************** main(int argc, char **argv)
*** 363,369 ****
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\" or \"t\"\n",
opts->formatName);
exit(1);
}
--- 371,377 ----
break;
default:
! write_msg(NULL, "unrecognized archive format \"%s\"; please specify \"c\", \"d\" or \"t\"\n",
opts->formatName);
exit(1);
}
*************** main(int argc, char **argv)
*** 392,397 ****
--- 400,413 ----
if (opts->tocSummary)
PrintTOCSummary(AH, opts);
+ else if (opts->checkArchive)
+ {
+ bool checkOK;
+ checkOK = CheckArchive(AH, opts);
+ CloseArchive(AH);
+ if (!checkOK)
+ exit(1);
+ }
else
RestoreArchive(AH, opts);
*************** usage(const char *progname)
*** 418,425 ****
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
--- 434,442 ----
printf(_("\nGeneral options:\n"));
printf(_(" -d, --dbname=NAME connect to database name\n"));
printf(_(" -f, --file=FILENAME output file name\n"));
! printf(_(" -F, --format=c|d|t backup file format (should be automatic)\n"));
printf(_(" -l, --list print summarized TOC of the archive\n"));
+ printf(_(" -k check the directory archive\n"));
printf(_(" -v, --verbose verbose mode\n"));
printf(_(" --help show this help, then exit\n"));
printf(_(" --version output version information, then exit\n"));
On 29.11.2010 07:11, Joachim Wieland wrote:
> On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
> <heikki.linnakangas@enterprisedb.com> wrote:
>> * wrap long lines
>> * use extern in function prototypes in header files
>> * "inline" some functions like _StartDataCompressor, _EndDataCompressor,
>>   _DoInflate/_DoDeflate that aren't doing anything but call some other
>>   function.
>
> So here is a new round of patches. It turned out that the feature to
> allow restoring files from a different dump and with a different
> compression required some changes in the compressor API. And in the
> end I didn't like all the #ifdefs either and made a less #ifdef-rich
> version using function pointers. The downside now is that I have
> created quite a few one-line functions that Heikki doesn't like all
> that much, but I assume that they are okay in this case on the grounds
> that the public compressor interface is calling the private
> implementation of a certain compressor.
Thanks, I'll take a look.
BTW, I know you wanted to have support for other compression algorithms;
I think the best way to achieve that is to make it possible to specify
an external command to be used for compression. pg_dump would fork() and
exec() that, and pipe the data to be compressed/decompressed to
stdin/stdout of the external command. We're not going to add support for
every new compression algorithm that's in vogue, but generic external
command support should make happy those who want it. I'd be particularly
excited about using something like pbzip2, to speed up the compression
on multi-core systems.
That should be a separate patch, but it's something to keep in mind with
these refactorings.
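
To make the fork()/exec() idea concrete, a rough sketch of how such an
external filter could be driven follows. The function name and error
handling are invented for illustration and are not part of any attached
patch; the child's stdout is pointed straight at the destination file
descriptor, so no read-back pipe is needed.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Feed 'len' bytes through an external filter such as "gzip -6" or
 * "pbzip2", letting the filter write its output directly to out_fd.
 * Hypothetical helper, invented for illustration.
 */
static int
pipe_through_filter(const char *cmd, int out_fd, const char *data, size_t len)
{
	int		pipefd[2];
	pid_t	pid;
	int		status;

	if (pipe(pipefd) < 0)
		return -1;

	pid = fork();
	if (pid < 0)
		return -1;

	if (pid == 0)
	{
		/* child: uncompressed data arrives on stdin, output goes to out_fd */
		dup2(pipefd[0], STDIN_FILENO);
		dup2(out_fd, STDOUT_FILENO);
		close(pipefd[0]);
		close(pipefd[1]);
		execl("/bin/sh", "sh", "-c", cmd, (char *) NULL);
		_exit(127);				/* only reached if exec fails */
	}

	/* parent: write all the data into the filter's stdin */
	close(pipefd[0]);
	while (len > 0)
	{
		ssize_t	n = write(pipefd[1], data, len);

		if (n < 0)
			return -1;
		data += n;
		len -= (size_t) n;
	}
	close(pipefd[1]);			/* EOF tells the filter to finish up */

	if (waitpid(pid, &status, 0) < 0 ||
		!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		return -1;

	return 0;
}

Restoring would presumably run the inverse command (e.g. "gzip -dc") with
the pipe directions swapped.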
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Mon, Nov 29, 2010 at 10:49 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
> [...]
> BTW, I know you wanted to have support for other compression algorithms; I
> think the best way to achieve that is to make it possible to specify an
> external command to be used for compression. pg_dump would fork() and exec()
> that, and pipe the data to be compressed/decompressed to stdin/stdout of the
> external command. We're not going to add support for every new compression
> algorithm that's in vogue, but generic external command support should make
> happy those who want it. I'd be particularly excited about using something
> like pbzip2, to speed up the compression on multi-core systems.
>
> That should be a separate patch, but it's something to keep in mind with
> these refactorings.
That would also ease licensing concerns, since we wouldn't have to
redistribute or bundle anything.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 29.11.2010 07:11, Joachim Wieland wrote:
> On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
> <heikki.linnakangas@enterprisedb.com> wrote:
>> * wrap long lines
>> * use extern in function prototypes in header files
>> * "inline" some functions like _StartDataCompressor, _EndDataCompressor,
>>   _DoInflate/_DoDeflate that aren't doing anything but call some other
>>   function.
>
> So here is a new round of patches. It turned out that the feature to
> allow restoring files from a different dump and with a different
> compression required some changes in the compressor API. And in the
> end I didn't like all the #ifdefs either and made a less #ifdef-rich
> version using function pointers.
Ok. The separate InitCompressorState() and AllocateCompressorState()
functions seem unnecessary. As the code stands, there's little
performance gain from re-using the same CompressorState, just
re-initializing it, and I can't see any other justification for them either.
I combined those, and the Free/Flush steps, and did a bunch of other
editorializations and cleanups. Here's an updated patch, also available
in my git repository at
git://git.postgresql.org/git/users/heikki/postgres.git, branch
"pg_dump-dir". I'm going to continue reviewing this later, tomorrow
hopefully.
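
For readers following along, the shape of that change is roughly the
following; the function names come from the two patch versions, everything
around them is elided:

/* before: allocate once, then re-initialize and flush per table/blob */
cs = AllocateCompressorState(COMPRESSOR_DEFLATE, compression);
InitCompressorState(cs, AH->compression);     /* per data item */
/* ... WriteDataToArchive() calls ... */
FlushCompressorState(AH, cs, _WriteBuf);      /* per data item */

/* after: one combined constructor per data item; EndCompressorState()
 * flushes and frees in a single step */
cs = AllocateDeflator(AH->compression, _CustomWriteFunc);
/* ... WriteDataToArchive() calls ... */
EndCompressorState(AH, cs);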
> The downside now is that I have
> created quite a few one-line functions that Heikki doesn't like all
> that much, but I assume that they are okay in this case on the grounds
> that the public compressor interface is calling the private
> implementation of a certain compressor.
You could avoid the wrapper functions by calling the function pointers
directly, but I agree it seems neater the way you did it.
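
Concretely, the wrapper keeps the call sites free of dispatch details; the
alternative would be to spell out the function pointer everywhere. A toy
reduction of the CompressorFuncs dispatch, not the patch itself:

/* the wrapper, as the patch does it: */
size_t
WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
				   const void *data, size_t dLen)
{
	return cs->funcs.writeDataToArchive(AH, cs, data, dLen);
}

/* a call site with the wrapper: */
WriteDataToArchive(AH, cs, data, dLen);

/* the same call site without it: */
cs->funcs.writeDataToArchive(AH, cs, data, dLen);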
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Attachments:
pg_dump-compression-refactor-2.diff (text/x-diff)
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
***************
*** 20,26 **** override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
--- 20,26 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
*** /dev/null
--- b/src/bin/pg_dump/compress_io.c
***************
*** 0 ****
--- 1,415 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.c
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/compress_io.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "compress_io.h"
+
+ static const char *modulename = gettext_noop("compress_io");
+
+ /* Routines that are private to a specific compressor (static functions) */
+ #ifdef HAVE_LIBZ
+ /* Routines that support zlib compressed data I/O */
+ static void InitCompressorZlib(CompressorState *cs, int compression);
+ static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ bool flush);
+ static void ReadDataFromArchiveZlib(ArchiveHandle *AH, CompressorState *cs);
+ static size_t WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen);
+ static void EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs);
+
+ static CompressorFuncs cfs_zlib = {
+ InitCompressorZlib,
+ ReadDataFromArchiveZlib,
+ WriteDataToArchiveZlib,
+ EndCompressorZlib
+ };
+ #endif
+
+ /* needed regardless of whether we are built with libz */
+ static CompressorState *AllocateCompressorState(CompressorAction action,
+ int compression);
+
+ /* Routines that support uncompressed data I/O */
+ static void InitCompressorNone(CompressorState *cs, int compression);
+ static void ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs);
+ static size_t WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen);
+ static void EndCompressorNone(ArchiveHandle *AH, CompressorState *cs);
+
+ static CompressorFuncs cfs_none = {
+ InitCompressorNone,
+ ReadDataFromArchiveNone,
+ WriteDataToArchiveNone,
+ EndCompressorNone
+ };
+
+ /* Allocate a new decompressor */
+ CompressorState *
+ AllocateInflator(int compression, ReadFunc readF)
+ {
+ CompressorState *cs;
+
+ cs = AllocateCompressorState(COMPRESSOR_INFLATE, compression);
+ cs->readF = readF;
+
+ return cs;
+ }
+
+ /* Allocate a new compressor */
+ CompressorState *
+ AllocateDeflator(int compression, WriteFunc writeF)
+ {
+ CompressorState *cs;
+
+ cs = AllocateCompressorState(COMPRESSOR_DEFLATE, compression);
+ cs->writeF = writeF;
+
+ return cs;
+ }
+
+ static CompressorState *
+ AllocateCompressorState(CompressorAction action, int compression)
+ {
+ CompressorState *cs;
+ CompressorAlgorithm alg;
+
+ cs = (CompressorState *) malloc(sizeof(CompressorState));
+ if (cs == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ memset(cs, 0, sizeof(CompressorState));
+
+ cs->action = action;
+
+ /*
+ * The compression is set either on the commandline when creating
+ * an archive or by ReadHead() when restoring an archive. It can also be
+ * set on a per-data item basis in the directory archive format.
+ */
+ if (compression == Z_DEFAULT_COMPRESSION ||
+ (compression > 0 && compression <= 9))
+ alg = COMPR_ALG_LIBZ;
+ else if (compression == COMPRESSION_NONE)
+ alg = COMPR_ALG_NONE;
+ else
+ {
+ die_horribly(NULL, modulename, "Invalid compression code: %d\n",
+ compression);
+ alg = COMPR_ALG_NONE; /* keep compiler quiet */
+ }
+
+ #ifndef HAVE_LIBZ
+ /*
+ * We were not built with libz support. If compression was requested
+ * for a dump, issue a warning and fall back to no compression.
+ */
+ if ((compression > 0 && compression <= 9)
+ || compression == Z_DEFAULT_COMPRESSION)
+ if (cs->action == COMPRESSOR_DEFLATE)
+ {
+ write_msg(modulename, "WARNING: requested compression not available in "
+ "this installation -- archive will be uncompressed\n");
+ compression = 0;
+ alg = COMPR_ALG_NONE;
+ }
+
+ if (alg == COMPR_ALG_LIBZ)
+ die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+
+ /*
+ * Perform compression algorithm specific initialization.
+ */
+ cs->comprAlg = alg;
+ switch(cs->comprAlg)
+ {
+ #ifdef HAVE_LIBZ
+ case COMPR_ALG_LIBZ:
+ cs->funcs = cfs_zlib;
+ break;
+ #endif
+ case COMPR_ALG_NONE:
+ cs->funcs = cfs_none;
+ break;
+ default:
+ break;
+ }
+ cs->funcs.initCompressor(cs, compression);
+
+ Assert(compression == 0 ?
+ (cs->comprAlg == COMPR_ALG_NONE) :
+ (cs->comprAlg != COMPR_ALG_NONE));
+
+ return cs;
+ }
+
+ /*
+ * Read compressed data from the input stream (via readF).
+ */
+ void
+ ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs)
+ {
+ cs->funcs.readDataFromArchive(AH, cs);
+ }
+
+ /*
+ * Send compressed data to the output stream (via writeF).
+ */
+ size_t
+ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen)
+ {
+ return cs->funcs.writeDataToArchive(AH, cs, data, dLen);
+ }
+
+ /*
+ * Terminate compression library context and flush its buffers. If no
+ * compression library is in use then just return.
+ */
+ void
+ EndCompressorState(ArchiveHandle *AH, CompressorState *cs)
+ {
+ cs->funcs.endCompressor(AH, cs);
+ free(cs);
+ }
+
+ #ifdef HAVE_LIBZ
+ /*
+ * Functions for zlib compressed output.
+ */
+
+ static void
+ InitCompressorZlib(CompressorState *cs, int compression)
+ {
+ z_streamp zp;
+
+ zp = cs->zp = (z_streamp) malloc(sizeof(z_stream));
+ if (cs->zp == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+
+ /*
+ * comprOutInitSize is the buffer size we tell zlib it can output
+ * to. We actually allocate one extra byte because some routines
+ * want to append a trailing zero byte to the zlib output. The
+ * input buffer is expansible and is always of size
+ * cs->comprInSize; comprInInitSize is just the initial default
+ * size for it.
+ */
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ if (cs->action == COMPRESSOR_DEFLATE)
+ if (deflateInit(zp, compression) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+ if (cs->action == COMPRESSOR_INFLATE)
+ if (inflateInit(zp) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+
+ /* Just be paranoid - maybe End is called after Start, with no Write */
+ zp->next_out = (void *) cs->comprOut;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ static void
+ EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+
+ /* Flush any remaining data from zlib buffer */
+ DeflateCompressorZlib(AH, cs, true);
+
+ if (deflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression stream: %s\n", zp->msg);
+
+ free(cs->comprOut);
+ free(cs->comprIn);
+ free(cs->zp);
+ }
+
+ static void
+ DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ bool flush)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+
+ while (cs->zp->avail_in != 0 || flush)
+ {
+ res = deflate(zp, flush ? Z_FINISH : Z_NO_FLUSH);
+ if (res == Z_STREAM_ERROR)
+ die_horribly(AH, modulename,
+ "could not compress data: %s\n", zp->msg);
+ if ((flush && (zp->avail_out < comprOutInitSize))
+ || (zp->avail_out == 0)
+ || (zp->avail_in != 0)
+ )
+ {
+ /*
+ * Extra paranoia: avoid zero-length chunks, since a zero length
+ * chunk is the EOF marker in the custom format. This should never
+ * happen but...
+ */
+ if (zp->avail_out < comprOutInitSize)
+ {
+ /*
+ * Any write function should do its own error checking, but
+ * we check here as well to be safe...
+ */
+ size_t len = comprOutInitSize - zp->avail_out;
+ if (cs->writeF(AH, out, len) != len)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ }
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ }
+
+ if (res == Z_STREAM_END)
+ break;
+ }
+ }
+
+ static void
+ ReadDataFromArchiveZlib(ArchiveHandle *AH, CompressorState *cs)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->comprOut;
+ int res = Z_OK;
+ size_t cnt;
+ void *in;
+
+ /* no minimum chunk size for zlib */
+ while ((cnt = cs->readF(AH, &in, 0)))
+ {
+ zp->next_in = (void *) in;
+ zp->avail_in = cnt;
+
+ while (zp->avail_in > 0)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+ }
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+ while (res != Z_STREAM_END)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = comprOutInitSize;
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[comprOutInitSize - zp->avail_out] = '\0';
+ ahwrite(out, 1, comprOutInitSize - zp->avail_out, AH);
+ }
+
+ if (inflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression library: %s\n", zp->msg);
+ }
+
+ static size_t
+ WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen)
+ {
+ cs->zp->next_in = (void *) data;
+ cs->zp->avail_in = dLen;
+ DeflateCompressorZlib(AH, cs, false);
+ /* we have either succeeded in writing dLen bytes or we have called
+ * die_horribly() */
+ return dLen;
+ }
+
+ #endif /* HAVE_LIBZ */
+
+
+ /*
+ * Functions for uncompressed output.
+ */
+ static void
+ InitCompressorNone(CompressorState *cs, int compression)
+ {
+ cs->comprOut = (char *) malloc(comprOutInitSize + 1);
+ cs->comprIn = (char *) malloc(comprInInitSize);
+ cs->comprInSize = comprInInitSize;
+ cs->comprOutSize = comprOutInitSize;
+
+ if (cs->comprOut == NULL || cs->comprIn == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ }
+
+ static void
+ ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs)
+ {
+ size_t cnt;
+ void *in;
+
+ /* no minimum chunk size for uncompressed data */
+ while ((cnt = cs->readF(AH, &in, 0)))
+ {
+ ahwrite(in, 1, cnt, AH);
+ }
+ }
+
+ static size_t
+ WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen)
+ {
+ /*
+ * Any write function should do its own error checking, but we
+ * check here as well to be safe...
+ */
+ if (cs->writeF(AH, data, dLen) != dLen)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ return dLen;
+ }
+
+ static void
+ EndCompressorNone(ArchiveHandle *AH, CompressorState *cs)
+ {
+ free(cs->comprOut);
+ free(cs->comprIn);
+ }
+
+
*** /dev/null
--- b/src/bin/pg_dump/compress_io.h
***************
*** 0 ****
--- 1,94 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.h
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * XXX
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "pg_backup_archiver.h"
+
+ #define comprOutInitSize 65536
+ #define comprInInitSize 65536
+
+ struct _CompressorState;
+
+ typedef enum
+ {
+ COMPRESSOR_INFLATE,
+ COMPRESSOR_DEFLATE
+ } CompressorAction;
+
+ typedef enum
+ {
+ COMPR_ALG_NONE,
+ COMPR_ALG_LIBZ
+ } CompressorAlgorithm;
+
+ typedef size_t (*WriteFunc)(ArchiveHandle *AH, const void *buf, size_t len);
+ /*
+ * The sizeHint parameter tells the format how much input the algorithm
+ * wants next. If the format has no better information, it should return
+ * about that many bytes. A format that is written in blocks, however,
+ * already knows the block size and can deliver exactly the next block.
+ *
+ * The custom archive is written in such blocks, whereas the directory
+ * archive is just a continuous stream of data. Compression formats other
+ * than libz handle blocks at the algorithm level, so the algorithm can
+ * tell the format how much data it is ready to consume next.
+ */
+ typedef size_t (*ReadFunc)(ArchiveHandle *AH, void **buf, size_t sizeHint);
+
+ typedef void (*InitCompressorPtr)(struct _CompressorState *cs, int compression);
+ typedef void (*ReadDataFromArchivePtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs);
+ typedef size_t (*WriteDataToArchivePtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs,
+ const void *data, size_t dLen);
+ typedef void (*EndCompressorPtr)(ArchiveHandle *AH,
+ struct _CompressorState *cs);
+
+ typedef struct
+ {
+ InitCompressorPtr initCompressor;
+ ReadDataFromArchivePtr readDataFromArchive;
+ WriteDataToArchivePtr writeDataToArchive;
+ EndCompressorPtr endCompressor;
+ } CompressorFuncs;
+
+ typedef struct _CompressorState
+ {
+ CompressorAlgorithm comprAlg;
+ ReadFunc readF;
+ WriteFunc writeF;
+ #ifdef HAVE_LIBZ
+ z_streamp zp;
+ #endif
+ char *comprOut;
+ char *comprIn;
+ size_t comprInSize;
+ size_t comprOutSize;
+ CompressorAction action;
+ CompressorFuncs funcs;
+ } CompressorState;
+
+ extern CompressorState *AllocateInflator(int compression, ReadFunc readF);
+ extern CompressorState *AllocateDeflator(int compression, WriteFunc writeF);
+ extern void ReadDataFromArchive(ArchiveHandle *AH, CompressorState *cs);
+ extern size_t WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen);
+ extern void EndCompressorState(ArchiveHandle *AH, CompressorState *cs);
+ extern void FreeCompressorState(CompressorState *cs);
+
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 22,27 ****
--- 22,28 ----
#include "pg_backup_db.h"
#include "dumputils.h"
+ #include "compress_io.h"
#include <ctype.h>
#include <unistd.h>
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
***************
*** 49,54 ****
--- 49,55 ----
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
+ /* this is just the redefinition of a libz constant */
#define Z_DEFAULT_COMPRESSION (-1)
typedef struct _z_stream
***************
*** 61,66 **** typedef struct _z_stream
--- 62,76 ----
typedef z_stream *z_streamp;
#endif
+ /* XXX eventually this should be an enum. However if we want something
+ * pluggable in the long run it can get hard to add values to a central
+ * enum from the plugins... */
+ #define COMPRESSION_UNKNOWN (-2)
+ #define COMPRESSION_NONE 0
+
+ /* XXX should we change the archive version for pg_dump with directory support?
+ * XXX We are not actually modifying the existing formats, but on the other hand
+ * XXX a file could now be compressed with liblzf. */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
***************
*** 267,272 **** typedef struct _archiveHandle
--- 277,288 ----
struct _tocEntry *currToc; /* Used when dumping data */
int compression; /* Compression requested on open */
+ /* Possible values for compression:
+ -2 COMPRESSION_UNKNOWN
+ -1 Z_DEFAULT_COMPRESSION
+ 0 COMPRESSION_NONE
+ 1-9 levels for gzip compression
+ */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
***************
*** 381,384 **** int ahprintf(ArchiveHandle *AH, const char *fmt,...) __attribute__((format(pri
--- 397,411 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ #ifdef USE_ASSERT_CHECKING
+ /* wrapped in do-while so the macro is safe under an unbraced if/else */
+ #define Assert(condition) \
+ do { \
+ if (!(condition)) \
+ { \
+ write_msg(NULL, "Failed assertion in %s, line %d\n", \
+ __FILE__, __LINE__); \
+ abort(); \
+ } \
+ } while (0)
+ #else
+ #define Assert(condition) ((void) 0)
+ #endif
#endif
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
***************
*** 25,30 ****
--- 25,31 ----
*/
#include "pg_backup_archiver.h"
+ #include "compress_io.h"
/*--------
* Routines in the format interface
***************
*** 58,77 **** static void _LoadBlobs(ArchiveHandle *AH, bool drop);
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! /*------------
! * Buffers used in zlib compression and extra data stored in archive and
! * in TOC entries.
! *------------
! */
! #define zlibOutSize 4096
! #define zlibInSize 4096
typedef struct
{
! z_streamp zp;
! char *zlibOut;
! char *zlibIn;
! size_t inSize;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
--- 59,70 ----
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! static size_t _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len);
! static size_t _CustomReadFunc(ArchiveHandle *AH, void **buf, size_t sizeHint);
typedef struct
{
! CompressorState *cs;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
***************
*** 89,98 **** typedef struct
*------
*/
static void _readBlockHeader(ArchiveHandle *AH, int *type, int *id);
- static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
- static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
static pgoff_t _getFilePos(ArchiveHandle *AH, lclContext *ctx);
- static int _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush);
static const char *modulename = gettext_noop("custom archiver");
--- 82,88 ----
***************
*** 144,174 **** InitArchiveFmt_Custom(ArchiveHandle *AH)
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
- ctx->zp = (z_streamp) malloc(sizeof(z_stream));
- if (ctx->zp == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
- /*
- * zlibOutSize is the buffer size we tell zlib it can output to. We
- * actually allocate one extra byte because some routines want to append a
- * trailing zero byte to the zlib output. The input buffer is expansible
- * and is always of size ctx->inSize; zlibInSize is just the initial
- * default size for it.
- */
- ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
- ctx->zlibIn = (char *) malloc(zlibInSize);
- ctx->inSize = zlibInSize;
ctx->filePos = 0;
- if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/*
* Now open the file
*/
--- 134,147 ----
***************
*** 211,216 **** InitArchiveFmt_Custom(ArchiveHandle *AH)
--- 184,191 ----
ctx->hasSeek = checkSeek(AH->FH);
ReadHead(AH);
+ ctx->cs = AllocateInflator(AH->compression, _CustomReadFunc);
+
ReadToc(AH);
ctx->dataStart = _getFilePos(AH, ctx);
}
***************
*** 324,330 **** _StartData(ArchiveHandle *AH, TocEntry *te)
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! _StartDataCompressor(AH, te);
}
/*
--- 299,305 ----
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! ctx->cs = AllocateDeflator(AH->compression, _CustomWriteFunc);
}
/*
***************
*** 340,356 **** static size_t
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! zp->next_in = (void *) data;
! zp->avail_in = dLen;
! while (zp->avail_in != 0)
! {
! /* printf("Deflating %lu bytes\n", (unsigned long) dLen); */
! _DoDeflate(AH, ctx, 0);
! }
! return dLen;
}
/*
--- 315,323 ----
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! return WriteDataToArchive(AH, cs, data, dLen);
}
/*
***************
*** 363,372 **** _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! /* lclContext *ctx = (lclContext *) AH->formatData; */
! /* lclTocEntry *tctx = (lclTocEntry *) te->formatData; */
! _EndDataCompressor(AH, te);
}
/*
--- 330,340 ----
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! EndCompressorState(AH, ctx->cs);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
***************
*** 401,411 **** _StartBlobs(ArchiveHandle *AH, TocEntry *te)
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
! _StartDataCompressor(AH, te);
}
/*
--- 369,382 ----
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
+ lclContext *ctx = (lclContext *) AH->formatData;
+
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
!
! ctx->cs = AllocateDeflator(AH->compression, _CustomWriteFunc);
}
/*
***************
*** 416,422 **** _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! _EndDataCompressor(AH, te);
}
/*
--- 387,397 ----
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! lclContext *ctx = (lclContext *) AH->formatData;
!
! EndCompressorState(AH, ctx->cs);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
***************
*** 533,639 **** static void
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- z_streamp zp = ctx->zp;
- size_t blkLen;
- char *in = ctx->zlibIn;
- size_t cnt;
-
- #ifdef HAVE_LIBZ
- int res;
- char *out = ctx->zlibOut;
- #endif
! #ifdef HAVE_LIBZ
!
! res = Z_OK;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (inflateInit(zp) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #endif
!
! blkLen = ReadInt(AH);
! while (blkLen != 0)
! {
! if (blkLen + 1 > ctx->inSize)
! {
! free(ctx->zlibIn);
! ctx->zlibIn = NULL;
! ctx->zlibIn = (char *) malloc(blkLen + 1);
! if (!ctx->zlibIn)
! die_horribly(AH, modulename, "out of memory\n");
!
! ctx->inSize = blkLen + 1;
! in = ctx->zlibIn;
! }
!
! cnt = fread(in, 1, blkLen, AH->FH);
! if (cnt != blkLen)
! {
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
! }
!
! ctx->filePos += blkLen;
!
! zp->next_in = (void *) in;
! zp->avail_in = blkLen;
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! while (zp->avail_in != 0)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! }
! else
! #endif
! {
! in[zp->avail_in] = '\0';
! ahwrite(in, 1, zp->avail_in, AH);
! zp->avail_in = 0;
! }
! blkLen = ReadInt(AH);
! }
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
! while (res != Z_STREAM_END)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! if (inflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
! }
! #endif
}
static void
--- 508,516 ----
_PrintData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
! ctx->cs = AllocateInflator(AH->compression, _CustomReadFunc);
! ReadDataFromArchive(AH, ctx->cs);
}
static void
***************
*** 683,701 **** static void
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *in = ctx->zlibIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > ctx->inSize)
{
! free(ctx->zlibIn);
! ctx->zlibIn = (char *) malloc(blkLen);
! ctx->inSize = blkLen;
! in = ctx->zlibIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
--- 560,579 ----
_skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
size_t blkLen;
! char *in = cs->comprIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = (char *) malloc(blkLen);
! cs->comprInSize = blkLen;
! in = cs->comprIn;
}
cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
***************
*** 960,1105 **** _readBlockHeader(ArchiveHandle *AH, int *type, int *id)
*id = ReadInt(AH);
}
! /*
! * If zlib is available, then startit up. This is called from
! * StartData & StartBlob. The buffers are setup in the Init routine.
! */
! static void
! _StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! #ifdef HAVE_LIBZ
! if (AH->compression < 0 || AH->compression > 9)
! AH->compression = Z_DEFAULT_COMPRESSION;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (deflateInit(zp, AH->compression) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #else
!
! AH->compression = 0;
! #endif
!
! /* Just be paranoid - maybe End is called after Start, with no Write */
! zp->next_out = (void *) ctx->zlibOut;
! zp->avail_out = zlibOutSize;
}
! /*
! * Send compressed data to the output stream (via ahwrite).
! * Each data chunk is preceded by it's length.
! * In the case of Z0, or no zlib, just write the raw data.
! *
! */
! static int
! _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush)
{
! z_streamp zp = ctx->zp;
!
! #ifdef HAVE_LIBZ
! char *out = ctx->zlibOut;
! int res = Z_OK;
!
! if (AH->compression != 0)
! {
! res = deflate(zp, flush);
! if (res == Z_STREAM_ERROR)
! die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
!
! if (((flush == Z_FINISH) && (zp->avail_out < zlibOutSize))
! || (zp->avail_out == 0)
! || (zp->avail_in != 0)
! )
! {
! /*
! * Extra paranoia: avoid zero-length chunks since a zero length
! * chunk is the EOF marker. This should never happen but...
! */
! if (zp->avail_out < zlibOutSize)
! {
! /*
! * printf("Wrote %lu byte deflated chunk\n", (unsigned long)
! * (zlibOutSize - zp->avail_out));
! */
! WriteInt(AH, zlibOutSize - zp->avail_out);
! if (fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH) != (zlibOutSize - zp->avail_out))
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zlibOutSize - zp->avail_out;
! }
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! }
! }
! else
! #endif
! {
! if (zp->avail_in > 0)
! {
! WriteInt(AH, zp->avail_in);
! if (fwrite(zp->next_in, 1, zp->avail_in, AH->FH) != zp->avail_in)
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zp->avail_in;
! zp->avail_in = 0;
! }
! else
! {
! #ifdef HAVE_LIBZ
! if (flush == Z_FINISH)
! res = Z_STREAM_END;
! #endif
! }
! }
! #ifdef HAVE_LIBZ
! return res;
! #else
! return 1;
! #endif
! }
! /*
! * Terminate zlib context and flush it's buffers. If no zlib
! * then just return.
! */
! static void
! _EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
! {
! #ifdef HAVE_LIBZ
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! int res;
! if (AH->compression != 0)
{
! zp->next_in = NULL;
! zp->avail_in = 0;
!
! do
! {
! /* printf("Ending data output\n"); */
! res = _DoDeflate(AH, ctx, Z_FINISH);
! } while (res != Z_STREAM_END);
! if (deflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
}
! #endif
!
! /* Send the end marker */
! WriteInt(AH, 0);
}
-
/*
* Clone format-specific fields during parallel restoration.
*/
--- 838,898 ----
*id = ReadInt(AH);
}
! static size_t
! _CustomWriteFunc(ArchiveHandle *AH, const void *buf, size_t len)
{
! Assert(len != 0);
! /* never write 0-byte blocks (this should not happen) */
! if (len == 0)
! return 0;
! WriteInt(AH, len);
! return _WriteBuf(AH, buf, len);
}
! static size_t
! _CustomReadFunc(ArchiveHandle *AH, void **buf, size_t sizeHint)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! size_t blkLen;
! size_t cnt;
! /*
! * We deliberately ignore the sizeHint parameter because we know
! * the exact size of the next compressed block (=blkLen).
! */
! blkLen = ReadInt(AH);
! if (blkLen == 0)
! return 0;
! if (blkLen + 1 > cs->comprInSize)
{
! free(cs->comprIn);
! cs->comprIn = NULL;
! cs->comprIn = (char *) malloc(blkLen + 1);
! if (!cs->comprIn)
! die_horribly(AH, modulename, "out of memory\n");
! cs->comprInSize = blkLen + 1;
}
! cnt = _ReadBuf(AH, cs->comprIn, blkLen);
! if (cnt != blkLen)
! {
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
! }
! *buf = cs->comprIn;
! return cnt;
}
/*
* Clone format-specific fields during parallel restoration.
*/
***************
*** 1107,1112 **** static void
--- 900,906 ----
_Clone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorAction action = ctx->cs->action;
AH->formatData = (lclContext *) malloc(sizeof(lclContext));
if (AH->formatData == NULL)
***************
*** 1114,1125 **** _Clone(ArchiveHandle *AH)
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->zp = (z_streamp) malloc(sizeof(z_stream));
! ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
! ctx->zlibIn = (char *) malloc(ctx->inSize);
!
! if (ctx->zp == NULL || ctx->zlibOut == NULL || ctx->zlibIn == NULL)
! die_horribly(AH, modulename, "out of memory\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
--- 908,917 ----
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! if (action == COMPRESSOR_INFLATE)
! ctx->cs = AllocateInflator(AH->compression, _CustomReadFunc);
! else
! ctx->cs = AllocateDeflator(AH->compression, _CustomWriteFunc);
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
***************
*** 1133,1141 **** static void
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- free(ctx->zlibOut);
- free(ctx->zlibIn);
- free(ctx->zp);
free(ctx);
}
--- 925,934 ----
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ EndCompressorState(AH, cs);
free(ctx);
}
+
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***************
*** 56,61 ****
--- 56,62 ----
#include "pg_backup_archiver.h"
#include "dumputils.h"
+ #include "compress_io.h"
extern char *optarg;
extern int optind,
***************
*** 255,261 **** main(int argc, char **argv)
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = -1;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
--- 256,262 ----
int numObjs;
int i;
enum trivalue prompt_password = TRI_DEFAULT;
! int compressLevel = COMPRESSION_UNKNOWN;
int plainText = 0;
int outputClean = 0;
int outputCreateDB = 0;
***************
*** 535,540 **** main(int argc, char **argv)
--- 536,547 ----
exit(1);
}
+ /* Actually we are using a zlib constant here, but formats that don't
+ * support compression won't care, and if we are not compiled with zlib
+ * compression we will be forced to no compression anyway. */
+ if (compressLevel == COMPRESSION_UNKNOWN)
+ compressLevel = Z_DEFAULT_COMPRESSION;
+
/* open the output file */
if (pg_strcasecmp(format, "a") == 0 || pg_strcasecmp(format, "append") == 0)
{
***************
*** 2174,2180 **** dumpBlobs(Archive *AH, void *arg)
exit_nicely();
}
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
--- 2181,2189 ----
exit_nicely();
}
! /* we try to avoid writing empty chunks */
! if (cnt > 0)
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
On 29.11.2010 22:21, Heikki Linnakangas wrote:
On 29.11.2010 07:11, Joachim Wieland wrote:
On Mon, Nov 22, 2010 at 3:44 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:

* wrap long lines
* use extern in function prototypes in header files
* "inline" some functions like _StartDataCompressor, _EndDataCompressor,
_DoInflate/_DoDeflate that aren't doing anything but call some other
function.

So here is a new round of patches. It turned out that the feature to
allow restoring files from a different dump and with a different
compression required some changes in the compressor API. And in the
end I didn't like all the #ifdefs either and made a less #ifdef-rich
version using function pointers.

Ok. The separate InitCompressorState() and AllocateCompressorState()
functions seem unnecessary. As the code stands, there's little
performance gain from re-using the same CompressorState, just
re-initializing it, and I can't see any other justification for them
either.

I combined those, and the Free/Flush steps, and did a bunch of other
editorializations and cleanups. Here's an updated patch, also available
in my git repository at
git://git.postgresql.org/git/users/heikki/postgres.git, branch
"pg_dump-dir". I'm going to continue reviewing this later, tomorrow
hopefully.
Here's another update. I changed things quite heavily. I didn't see the
point of having the Alloc+Free functions for uncompressing, because the
ReadDataFromArchive processed the whole input stream in one go anyway.
So the new API consists of four functions, AllocateCompressor,
WriteDataToArchive and EndCompressor for writing, and
ReadDataFromArchive for reading.
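
(To make the calling pattern concrete, here is a condensed sketch of how a
format driver drives this API, pieced together from the patch attached
below; note that the end function is actually spelled EndCompressorState in
this version, and _MyWriteFunc/_MyReadFunc stand in for the format's own
callbacks, such as _CustomWriteFunc/_CustomReadFunc in pg_backup_custom.c:)

	/* WriteFunc callback: the compressor hands us one compressed chunk;
	 * frame it the way the format wants (the custom format writes a
	 * length integer first, so a zero length can serve as end marker) */
	static size_t
	_MyWriteFunc(ArchiveHandle *AH, const char *buf, size_t len)
	{
		WriteInt(AH, len);
		return _WriteBuf(AH, buf, len);
	}

	/* write path, e.g. from _StartData / _WriteData / _EndData */
	cs = AllocateCompressor(AH->compression, _MyWriteFunc);
	WriteDataToArchive(AH, cs, data, dLen);		/* repeat as needed */
	EndCompressorState(AH, cs);					/* flush and free */

	/* read path: one call does the whole stream, pulling compressed
	 * chunks through the ReadFunc and pushing plain text to ahwrite() */
	ReadDataFromArchive(AH, AH->compression, _MyReadFunc);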
Also, I reverted the zlib buffer size from 64k to 4k. If you want to
raise that, let's discuss that separately.
Please let me know what you think of this version, or if you spot any
bugs. I'll keep working on this, I'm hoping to get this into committable
shape by the end of the week.
The pg_backup_directory patch naturally won't apply over this anymore.
Once we have the compress_io part in shape, that will need to be fixed.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 01.12.2010 16:03, Heikki Linnakangas wrote:
On 29.11.2010 22:21, Heikki Linnakangas wrote:
I combined those, and the Free/Flush steps, and did a bunch of other
editorializations and cleanups. Here's an updated patch, also available
in my git repository at
git://git.postgresql.org/git/users/heikki/postgres.git, branch
"pg_dump-dir". I'm going to continue reviewing this later, tomorrow
hopefully.

Here's another update.
Forgot attachment. This is also available in the above git repo.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Attachments:
pg_dump-compression-refactor-3.patchtext/x-diff; name=pg_dump-compression-refactor-3.patchDownload
*** a/src/bin/pg_dump/Makefile
--- b/src/bin/pg_dump/Makefile
***************
*** 20,26 **** override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
--- 20,26 ----
OBJS= pg_backup_archiver.o pg_backup_db.o pg_backup_custom.o \
pg_backup_files.o pg_backup_null.o pg_backup_tar.o \
! dumputils.o compress_io.o $(WIN32RES)
KEYWRDOBJS = keywords.o kwlookup.o
*** /dev/null
--- b/src/bin/pg_dump/compress_io.c
***************
*** 0 ****
--- 1,404 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.c
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * The interface for writing to an archive consists of three functions:
+ * AllocateCompressor, WriteDataToArchive and EndCompressor. First you call
+ * AllocateCompressor, then write all the data by calling WriteDataToArchive
+ * as many times as needed, and finally EndCompressor. WriteDataToArchive
+ * and EndCompressor will call the WriteFunc that was provided to
+ * AllocateCompressor for each chunk of compressed data.
+ *
+ * The interface for reading an archive consists of just one function:
+ * ReadDataFromArchive. ReadDataFromArchive reads the whole compressed input
+ * stream, by repeatedly calling the given ReadFunc. ReadFunc returns the
+ * compressed data chunk at a time, and ReadDataFromArchive decompresses it
+ * and passes the decompressed data to ahwrite(), until ReadFunc returns 0
+ * to signal EOF.
+ *
+ * The interface is the same for compressed and uncompressed streams.
+ *
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/compress_io.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #include "compress_io.h"
+
+ static const char *modulename = gettext_noop("compress_io");
+
+ static void ParseCompressionOption(int compression, CompressorAlgorithm *alg,
+ int *level);
+
+ /* Routines that are private to a specific compressor (static functions) */
+ #ifdef HAVE_LIBZ
+ /* Routines that support zlib compressed data I/O */
+ static void InitCompressorZlib(CompressorState *cs, int level);
+ static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
+ bool flush);
+ static void ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF);
+ static size_t WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ const char *data, size_t dLen);
+ static void EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs);
+
+ #endif
+
+ /* Routines that support uncompressed data I/O */
+ static void ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF);
+ static size_t WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ const char *data, size_t dLen);
+
+ static void
+ ParseCompressionOption(int compression, CompressorAlgorithm *alg, int *level)
+ {
+ /*
+ * The compression is set either on the commandline when creating
+ * an archive or by ReadHead() when restoring an archive. It can also be
+ * set on a per-data item basis in the directory archive format.
+ */
+ if (compression == Z_DEFAULT_COMPRESSION ||
+ (compression > 0 && compression <= 9))
+ *alg = COMPR_ALG_LIBZ;
+ else if (compression == 0)
+ *alg = COMPR_ALG_NONE;
+ else
+ die_horribly(NULL, modulename, "Invalid compression code: %d\n",
+ compression);
+
+ if (level)
+ *level = compression;
+ }
+
+ /* Public interface routines */
+
+ /* Allocate a new compressor */
+ CompressorState *
+ AllocateCompressor(int compression, WriteFunc writeF)
+ {
+ CompressorState *cs;
+ CompressorAlgorithm alg;
+ int level;
+
+ ParseCompressionOption(compression, &alg, &level);
+
+ cs = (CompressorState *) calloc(1, sizeof(CompressorState));
+ if (cs == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ cs->writeF = writeF;
+ cs->comprAlg = alg;
+
+ #ifndef HAVE_LIBZ
+ if (alg == COMPR_ALG_LIBZ)
+ die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+
+ /*
+ * Perform compression algorithm specific initialization.
+ */
+ #ifdef HAVE_LIBZ
+ if (alg == COMPR_ALG_LIBZ)
+ InitCompressorZlib(cs, level);
+ #endif
+
+ return cs;
+ }
+
+ /*
+ * Read all compressed data from the input stream (via readF) and print it
+ * out with ahwrite().
+ */
+ void
+ ReadDataFromArchive(ArchiveHandle *AH, int compression, ReadFunc readF)
+ {
+ CompressorAlgorithm alg;
+
+ ParseCompressionOption(compression, &alg, NULL);
+
+ if (alg == COMPR_ALG_NONE)
+ ReadDataFromArchiveNone(AH, readF);
+ if (alg == COMPR_ALG_LIBZ)
+ {
+ #ifdef HAVE_LIBZ
+ ReadDataFromArchiveZlib(AH, readF);
+ #else
+ die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+ }
+ }
+
+ /*
+ * Compress and write data to the output stream (via writeF).
+ */
+ size_t
+ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen)
+ {
+ switch(cs->comprAlg)
+ {
+ case COMPR_ALG_LIBZ:
+ #ifdef HAVE_LIBZ
+ return WriteDataToArchiveZlib(AH, cs, data, dLen);
+ #else
+ die_horribly(NULL, modulename, "not built with zlib support\n");
+ #endif
+ case COMPR_ALG_NONE:
+ return WriteDataToArchiveNone(AH, cs, data, dLen);
+ }
+ return 0; /* keep compiler quiet */
+ }
+
+ /*
+ * Terminate compression library context and flush its buffers.
+ */
+ void
+ EndCompressorState(ArchiveHandle *AH, CompressorState *cs)
+ {
+ #ifdef HAVE_LIBZ
+ if (cs->comprAlg == COMPR_ALG_LIBZ)
+ EndCompressorZlib(AH, cs);
+ #endif
+ free(cs);
+ }
+
+ /* Private routines, specific to each compression method. */
+
+ #ifdef HAVE_LIBZ
+ /*
+ * Functions for zlib compressed output.
+ */
+
+ static void
+ InitCompressorZlib(CompressorState *cs, int level)
+ {
+ z_streamp zp;
+
+ zp = cs->zp = (z_streamp) malloc(sizeof(z_stream));
+ if (cs->zp == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ /*
+ * zlibOutSize is the buffer size we tell zlib it can output
+ * to. We actually allocate one extra byte because some routines
+ * want to append a trailing zero byte to the zlib output.
+ */
+ cs->zlibOut = (char *) malloc(ZLIB_OUT_SIZE + 1);
+ cs->zlibOutSize = ZLIB_OUT_SIZE;
+
+ if (cs->zlibOut == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+
+ if (deflateInit(zp, level) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+
+ /* Just be paranoid - maybe End is called after Start, with no Write */
+ zp->next_out = (void *) cs->zlibOut;
+ zp->avail_out = cs->zlibOutSize;
+ }
+
+ static void
+ EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs)
+ {
+ z_streamp zp = cs->zp;
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+
+ /* Flush any remaining data from zlib buffer */
+ DeflateCompressorZlib(AH, cs, true);
+
+ if (deflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression stream: %s\n", zp->msg);
+
+ free(cs->zlibOut);
+ free(cs->zp);
+ }
+
+ static void
+ DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs, bool flush)
+ {
+ z_streamp zp = cs->zp;
+ char *out = cs->zlibOut;
+ int res = Z_OK;
+
+ while (cs->zp->avail_in != 0 || flush)
+ {
+ res = deflate(zp, flush ? Z_FINISH : Z_NO_FLUSH);
+ if (res == Z_STREAM_ERROR)
+ die_horribly(AH, modulename,
+ "could not compress data: %s\n", zp->msg);
+ if ((flush && (zp->avail_out < cs->zlibOutSize))
+ || (zp->avail_out == 0)
+ || (zp->avail_in != 0)
+ )
+ {
+ /*
+ * Extra paranoia: avoid zero-length chunks, since a zero length
+ * chunk is the EOF marker in the custom format. This should never
+ * happen but...
+ */
+ if (zp->avail_out < cs->zlibOutSize)
+ {
+ /*
+ * Any write function should do its own error checking but
+ * to make sure we do a check here as well...
+ */
+ size_t len = cs->zlibOutSize - zp->avail_out;
+ if (cs->writeF(AH, out, len) != len)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ }
+ zp->next_out = (void *) out;
+ zp->avail_out = cs->zlibOutSize;
+ }
+
+ if (res == Z_STREAM_END)
+ break;
+ }
+ }
+
+ static size_t
+ WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
+ const char *data, size_t dLen)
+ {
+ cs->zp->next_in = (void *) data;
+ cs->zp->avail_in = dLen;
+ DeflateCompressorZlib(AH, cs, false);
+ /* we have either succeeded in writing dLen bytes or we have called
+ * die_horribly() */
+ return dLen;
+ }
+
+ static void
+ ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF)
+ {
+ z_streamp zp;
+ char *out;
+ int res = Z_OK;
+ size_t cnt;
+ char *buf;
+ size_t buflen;
+
+ zp = (z_streamp) malloc(sizeof(z_stream));
+ if (zp == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ zp->zalloc = Z_NULL;
+ zp->zfree = Z_NULL;
+ zp->opaque = Z_NULL;
+
+ buf = malloc(ZLIB_IN_SIZE);
+ if (buf == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ buflen = ZLIB_IN_SIZE;
+
+ out = malloc(ZLIB_OUT_SIZE + 1);
+ if (out == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+
+ if (inflateInit(zp) != Z_OK)
+ die_horribly(NULL, modulename,
+ "could not initialize compression library: %s\n",
+ zp->msg);
+
+ /* no minimal chunk size for zlib */
+ while ((cnt = readF(AH, &buf, &buflen)))
+ {
+ zp->next_in = (void *) buf;
+ zp->avail_in = cnt;
+
+ while (zp->avail_in > 0)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = ZLIB_OUT_SIZE;
+
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
+ ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
+ }
+ }
+
+ zp->next_in = NULL;
+ zp->avail_in = 0;
+ while (res != Z_STREAM_END)
+ {
+ zp->next_out = (void *) out;
+ zp->avail_out = ZLIB_OUT_SIZE;
+ res = inflate(zp, 0);
+ if (res != Z_OK && res != Z_STREAM_END)
+ die_horribly(AH, modulename,
+ "could not uncompress data: %s\n", zp->msg);
+
+ out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
+ ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
+ }
+
+ if (inflateEnd(zp) != Z_OK)
+ die_horribly(AH, modulename,
+ "could not close compression library: %s\n", zp->msg);
+
+ free(buf);
+ free(out);
+ free(zp);
+ }
+
+ #endif /* HAVE_LIBZ */
+
+
+ /*
+ * Functions for uncompressed output.
+ */
+
+ static void
+ ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF)
+ {
+ size_t cnt;
+ char *buf;
+ size_t buflen;
+
+ buf = malloc(ZLIB_OUT_SIZE);
+ if (buf == NULL)
+ die_horribly(NULL, modulename, "out of memory\n");
+ buflen = ZLIB_OUT_SIZE;
+
+ /* no minimal chunk size for uncompressed data */
+ while ((cnt = readF(AH, &buf, &buflen)))
+ {
+ ahwrite(buf, 1, cnt, AH);
+ }
+
+ free(buf);
+ }
+
+ static size_t
+ WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+ const char *data, size_t dLen)
+ {
+ /*
+ * Any write function should do its own error checking but to make
+ * sure we do a check here as well...
+ */
+ if (cs->writeF(AH, data, dLen) != dLen)
+ die_horribly(AH, modulename,
+ "could not write to output file: %s\n",
+ strerror(errno));
+ return dLen;
+ }
+
+
*** /dev/null
--- b/src/bin/pg_dump/compress_io.h
***************
*** 0 ****
--- 1,73 ----
+ /*-------------------------------------------------------------------------
+ *
+ * compress_io.h
+ * Routines for archivers to write an uncompressed or compressed data
+ * stream.
+ *
+ * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ * src/bin/pg_dump/compress_io.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+ #ifndef __COMPRESS_IO__
+ #define __COMPRESS_IO__
+
+ #include "postgres_fe.h"
+ #include "pg_backup_archiver.h"
+
+ /*------------
+ * Buffers used in zlib compression.
+ *------------
+ */
+ #define ZLIB_OUT_SIZE 4096
+ #define ZLIB_IN_SIZE 4096
+
+ struct _CompressorState;
+
+ typedef enum
+ {
+ COMPR_ALG_NONE,
+ COMPR_ALG_LIBZ
+ } CompressorAlgorithm;
+
+ /* Prototype for callback function to WriteDataToArchive() */
+ typedef size_t (*WriteFunc)(ArchiveHandle *AH, const char *buf, size_t len);
+
+ /*
+ * Prototype for callback function to ReadDataFromArchive()
+ *
+ * ReadDataFromArchive will call the read function repeatedly, until it
+ * returns 0 to signal EOF. ReadDataFromArchive passes a buffer to read the
+ * data into in *buf, of length *buflen. If that's not big enough for the
+ * callback function, it can free() it and malloc() a new one, returning the
+ * new buffer and its size in *buf and *buflen.
+ *
+ * Returns the number of bytes read into *buf, or 0 on EOF.
+ */
+ typedef size_t (*ReadFunc)(ArchiveHandle *AH, char **buf, size_t *buflen);
+
+ typedef struct _CompressorState
+ {
+ CompressorAlgorithm comprAlg;
+ WriteFunc writeF;
+
+ #ifdef HAVE_LIBZ
+ z_streamp zp;
+ char *zlibOut;
+ size_t zlibOutSize;
+ #endif
+ } CompressorState;
+
+ extern CompressorState *AllocateCompressor(int compression, WriteFunc writeF);
+ extern void ReadDataFromArchive(ArchiveHandle *AH, int compression,
+ ReadFunc readF);
+ extern size_t WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
+ const void *data, size_t dLen);
+ extern void EndCompressorState(ArchiveHandle *AH, CompressorState *cs);
+ extern void FreeCompressorState(CompressorState *cs);
+
+ #endif
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
***************
*** 22,27 ****
--- 22,28 ----
#include "pg_backup_db.h"
#include "dumputils.h"
+ #include "compress_io.h"
#include <ctype.h>
#include <unistd.h>
*** a/src/bin/pg_dump/pg_backup_archiver.h
--- b/src/bin/pg_dump/pg_backup_archiver.h
***************
*** 49,54 ****
--- 49,55 ----
#define GZCLOSE(fh) fclose(fh)
#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
+ /* this is just the redefinition of a libz constant */
#define Z_DEFAULT_COMPRESSION (-1)
typedef struct _z_stream
***************
*** 61,66 **** typedef struct _z_stream
--- 62,70 ----
typedef z_stream *z_streamp;
#endif
+ /* XXX should we change the archive version for pg_dump with directory support?
+ * XXX We are not actually modifying the existing formats, but on the other hand
+ * XXX a file could now be compressed with liblzf. */
/* Current archive version number (the format we can output) */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
***************
*** 266,272 **** typedef struct _archiveHandle
DumpId maxDumpId; /* largest DumpId among all TOC entries */
struct _tocEntry *currToc; /* Used when dumping data */
! int compression; /* Compression requested on open */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
--- 270,280 ----
DumpId maxDumpId; /* largest DumpId among all TOC entries */
struct _tocEntry *currToc; /* Used when dumping data */
! int compression; /* Compression requested on open
! * Possible values for compression:
! * -1 Z_DEFAULT_COMPRESSION
! * 0 COMPRESSION_NONE
! * 1-9 levels for gzip compression */
ArchiveMode mode; /* File mode - r or w */
void *formatData; /* Header data specific to file format */
***************
*** 381,384 **** int ahprintf(ArchiveHandle *AH, const char *fmt,...) __attribute__((format(pri
--- 389,403 ----
void ahlog(ArchiveHandle *AH, int level, const char *fmt,...) __attribute__((format(printf, 3, 4)));
+ #ifdef USE_ASSERT_CHECKING
+ #define Assert(condition) \
+ if (!(condition)) \
+ { \
+ write_msg(NULL, "Failed assertion in %s, line %d\n", \
+ __FILE__, __LINE__); \
+ abort();\
+ }
+ #else
+ #define Assert(condition)
+ #endif
#endif
*** a/src/bin/pg_dump/pg_backup_custom.c
--- b/src/bin/pg_dump/pg_backup_custom.c
***************
*** 25,30 ****
--- 25,31 ----
*/
#include "pg_backup_archiver.h"
+ #include "compress_io.h"
/*--------
* Routines in the format interface
***************
*** 58,77 **** static void _LoadBlobs(ArchiveHandle *AH, bool drop);
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! /*------------
! * Buffers used in zlib compression and extra data stored in archive and
! * in TOC entries.
! *------------
! */
! #define zlibOutSize 4096
! #define zlibInSize 4096
typedef struct
{
! z_streamp zp;
! char *zlibOut;
! char *zlibIn;
! size_t inSize;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
--- 59,70 ----
static void _Clone(ArchiveHandle *AH);
static void _DeClone(ArchiveHandle *AH);
! static size_t _CustomWriteFunc(ArchiveHandle *AH, const char *buf, size_t len);
! static size_t _CustomReadFunc(ArchiveHandle *AH, char **buf, size_t *buflen);
typedef struct
{
! CompressorState *cs;
int hasSeek;
pgoff_t filePos;
pgoff_t dataStart;
***************
*** 89,98 **** typedef struct
*------
*/
static void _readBlockHeader(ArchiveHandle *AH, int *type, int *id);
- static void _StartDataCompressor(ArchiveHandle *AH, TocEntry *te);
- static void _EndDataCompressor(ArchiveHandle *AH, TocEntry *te);
static pgoff_t _getFilePos(ArchiveHandle *AH, lclContext *ctx);
- static int _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush);
static const char *modulename = gettext_noop("custom archiver");
--- 82,88 ----
***************
*** 144,174 **** InitArchiveFmt_Custom(ArchiveHandle *AH)
die_horribly(AH, modulename, "out of memory\n");
AH->formatData = (void *) ctx;
- ctx->zp = (z_streamp) malloc(sizeof(z_stream));
- if (ctx->zp == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/* Initialize LO buffering */
AH->lo_buf_size = LOBBUFSIZE;
AH->lo_buf = (void *) malloc(LOBBUFSIZE);
if (AH->lo_buf == NULL)
die_horribly(AH, modulename, "out of memory\n");
- /*
- * zlibOutSize is the buffer size we tell zlib it can output to. We
- * actually allocate one extra byte because some routines want to append a
- * trailing zero byte to the zlib output. The input buffer is expansible
- * and is always of size ctx->inSize; zlibInSize is just the initial
- * default size for it.
- */
- ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
- ctx->zlibIn = (char *) malloc(zlibInSize);
- ctx->inSize = zlibInSize;
ctx->filePos = 0;
- if (ctx->zlibOut == NULL || ctx->zlibIn == NULL)
- die_horribly(AH, modulename, "out of memory\n");
-
/*
* Now open the file
*/
--- 134,147 ----
***************
*** 324,330 **** _StartData(ArchiveHandle *AH, TocEntry *te)
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! _StartDataCompressor(AH, te);
}
/*
--- 297,303 ----
_WriteByte(AH, BLK_DATA); /* Block type */
WriteInt(AH, te->dumpId); /* For sanity check */
! ctx->cs = AllocateCompressor(AH->compression, _CustomWriteFunc);
}
/*
***************
*** 340,356 **** static size_t
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! zp->next_in = (void *) data;
! zp->avail_in = dLen;
!
! while (zp->avail_in != 0)
! {
! /* printf("Deflating %lu bytes\n", (unsigned long) dLen); */
! _DoDeflate(AH, ctx, 0);
! }
! return dLen;
}
/*
--- 313,321 ----
_WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
{
lclContext *ctx = (lclContext *) AH->formatData;
! CompressorState *cs = ctx->cs;
! return WriteDataToArchive(AH, cs, data, dLen);
}
/*
***************
*** 363,372 **** _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! /* lclContext *ctx = (lclContext *) AH->formatData; */
! /* lclTocEntry *tctx = (lclTocEntry *) te->formatData; */
! _EndDataCompressor(AH, te);
}
/*
--- 328,338 ----
static void
_EndData(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! EndCompressorState(AH, ctx->cs);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
***************
*** 401,411 **** _StartBlobs(ArchiveHandle *AH, TocEntry *te)
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
! _StartDataCompressor(AH, te);
}
/*
--- 367,380 ----
static void
_StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
+ lclContext *ctx = (lclContext *) AH->formatData;
+
if (oid == 0)
die_horribly(AH, modulename, "invalid OID for large object\n");
WriteInt(AH, oid);
!
! ctx->cs = AllocateCompressor(AH->compression, _CustomWriteFunc);
}
/*
***************
*** 416,422 **** _StartBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! _EndDataCompressor(AH, te);
}
/*
--- 385,395 ----
static void
_EndBlob(ArchiveHandle *AH, TocEntry *te, Oid oid)
{
! lclContext *ctx = (lclContext *) AH->formatData;
!
! EndCompressorState(AH, ctx->cs);
! /* Send the end marker */
! WriteInt(AH, 0);
}
/*
***************
*** 532,639 **** _PrintTocData(ArchiveHandle *AH, TocEntry *te, RestoreOptions *ropt)
static void
_PrintData(ArchiveHandle *AH)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! size_t blkLen;
! char *in = ctx->zlibIn;
! size_t cnt;
!
! #ifdef HAVE_LIBZ
! int res;
! char *out = ctx->zlibOut;
! #endif
!
! #ifdef HAVE_LIBZ
!
! res = Z_OK;
!
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (inflateInit(zp) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #endif
!
! blkLen = ReadInt(AH);
! while (blkLen != 0)
! {
! if (blkLen + 1 > ctx->inSize)
! {
! free(ctx->zlibIn);
! ctx->zlibIn = NULL;
! ctx->zlibIn = (char *) malloc(blkLen + 1);
! if (!ctx->zlibIn)
! die_horribly(AH, modulename, "out of memory\n");
!
! ctx->inSize = blkLen + 1;
! in = ctx->zlibIn;
! }
!
! cnt = fread(in, 1, blkLen, AH->FH);
! if (cnt != blkLen)
! {
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
! }
!
! ctx->filePos += blkLen;
!
! zp->next_in = (void *) in;
! zp->avail_in = blkLen;
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! while (zp->avail_in != 0)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! }
! else
! #endif
! {
! in[zp->avail_in] = '\0';
! ahwrite(in, 1, zp->avail_in, AH);
! zp->avail_in = 0;
! }
! blkLen = ReadInt(AH);
! }
!
! #ifdef HAVE_LIBZ
! if (AH->compression != 0)
! {
! zp->next_in = NULL;
! zp->avail_in = 0;
! while (res != Z_STREAM_END)
! {
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! res = inflate(zp, 0);
! if (res != Z_OK && res != Z_STREAM_END)
! die_horribly(AH, modulename, "could not uncompress data: %s\n", zp->msg);
!
! out[zlibOutSize - zp->avail_out] = '\0';
! ahwrite(out, 1, zlibOutSize - zp->avail_out, AH);
! }
! if (inflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression library: %s\n", zp->msg);
! }
! #endif
}
static void
--- 505,511 ----
static void
_PrintData(ArchiveHandle *AH)
{
! ReadDataFromArchive(AH, AH->compression, _CustomReadFunc);
}
static void
***************
*** 684,703 **** _skipData(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *in = ctx->zlibIn;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > ctx->inSize)
{
! free(ctx->zlibIn);
! ctx->zlibIn = (char *) malloc(blkLen);
! ctx->inSize = blkLen;
! in = ctx->zlibIn;
}
! cnt = fread(in, 1, blkLen, AH->FH);
if (cnt != blkLen)
{
if (feof(AH->FH))
--- 556,576 ----
{
lclContext *ctx = (lclContext *) AH->formatData;
size_t blkLen;
! char *buf = NULL;
! int buflen = 0;
size_t cnt;
blkLen = ReadInt(AH);
while (blkLen != 0)
{
! if (blkLen > buflen)
{
! if (buf)
! free(buf);
! buf = (char *) malloc(blkLen);
! buflen = blkLen;
}
! cnt = fread(buf, 1, blkLen, AH->FH);
if (cnt != blkLen)
{
if (feof(AH->FH))
***************
*** 712,717 **** _skipData(ArchiveHandle *AH)
--- 585,593 ----
blkLen = ReadInt(AH);
}
+
+ if (buf)
+ free(buf);
}
/*
***************
*** 960,1105 **** _readBlockHeader(ArchiveHandle *AH, int *type, int *id)
*id = ReadInt(AH);
}
! /*
! * If zlib is available, then startit up. This is called from
! * StartData & StartBlob. The buffers are setup in the Init routine.
! */
! static void
! _StartDataCompressor(ArchiveHandle *AH, TocEntry *te)
{
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
!
! #ifdef HAVE_LIBZ
! if (AH->compression < 0 || AH->compression > 9)
! AH->compression = Z_DEFAULT_COMPRESSION;
! if (AH->compression != 0)
! {
! zp->zalloc = Z_NULL;
! zp->zfree = Z_NULL;
! zp->opaque = Z_NULL;
!
! if (deflateInit(zp, AH->compression) != Z_OK)
! die_horribly(AH, modulename, "could not initialize compression library: %s\n", zp->msg);
! }
! #else
!
! AH->compression = 0;
! #endif
!
! /* Just be paranoid - maybe End is called after Start, with no Write */
! zp->next_out = (void *) ctx->zlibOut;
! zp->avail_out = zlibOutSize;
}
! /*
! * Send compressed data to the output stream (via ahwrite).
! * Each data chunk is preceded by it's length.
! * In the case of Z0, or no zlib, just write the raw data.
! *
! */
! static int
! _DoDeflate(ArchiveHandle *AH, lclContext *ctx, int flush)
{
! z_streamp zp = ctx->zp;
! #ifdef HAVE_LIBZ
! char *out = ctx->zlibOut;
! int res = Z_OK;
! if (AH->compression != 0)
! {
! res = deflate(zp, flush);
! if (res == Z_STREAM_ERROR)
! die_horribly(AH, modulename, "could not compress data: %s\n", zp->msg);
!
! if (((flush == Z_FINISH) && (zp->avail_out < zlibOutSize))
! || (zp->avail_out == 0)
! || (zp->avail_in != 0)
! )
! {
! /*
! * Extra paranoia: avoid zero-length chunks since a zero length
! * chunk is the EOF marker. This should never happen but...
! */
! if (zp->avail_out < zlibOutSize)
! {
! /*
! * printf("Wrote %lu byte deflated chunk\n", (unsigned long)
! * (zlibOutSize - zp->avail_out));
! */
! WriteInt(AH, zlibOutSize - zp->avail_out);
! if (fwrite(out, 1, zlibOutSize - zp->avail_out, AH->FH) != (zlibOutSize - zp->avail_out))
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zlibOutSize - zp->avail_out;
! }
! zp->next_out = (void *) out;
! zp->avail_out = zlibOutSize;
! }
! }
! else
! #endif
{
! if (zp->avail_in > 0)
! {
! WriteInt(AH, zp->avail_in);
! if (fwrite(zp->next_in, 1, zp->avail_in, AH->FH) != zp->avail_in)
! die_horribly(AH, modulename, "could not write to output file: %s\n", strerror(errno));
! ctx->filePos += zp->avail_in;
! zp->avail_in = 0;
! }
! else
! {
! #ifdef HAVE_LIBZ
! if (flush == Z_FINISH)
! res = Z_STREAM_END;
! #endif
! }
}
! #ifdef HAVE_LIBZ
! return res;
! #else
! return 1;
! #endif
! }
!
! /*
! * Terminate zlib context and flush it's buffers. If no zlib
! * then just return.
! */
! static void
! _EndDataCompressor(ArchiveHandle *AH, TocEntry *te)
! {
!
! #ifdef HAVE_LIBZ
! lclContext *ctx = (lclContext *) AH->formatData;
! z_streamp zp = ctx->zp;
! int res;
!
! if (AH->compression != 0)
{
! zp->next_in = NULL;
! zp->avail_in = 0;
!
! do
! {
! /* printf("Ending data output\n"); */
! res = _DoDeflate(AH, ctx, Z_FINISH);
! } while (res != Z_STREAM_END);
!
! if (deflateEnd(zp) != Z_OK)
! die_horribly(AH, modulename, "could not close compression stream: %s\n", zp->msg);
}
! #endif
!
! /* Send the end marker */
! WriteInt(AH, 0);
}
-
/*
* Clone format-specific fields during parallel restoration.
*/
--- 836,891 ----
*id = ReadInt(AH);
}
! static size_t
! _CustomWriteFunc(ArchiveHandle *AH, const char *buf, size_t len)
{
! Assert(len != 0);
! /* never write 0-byte blocks (this should not happen) */
! if (len == 0)
! return 0;
! WriteInt(AH, len);
! return _WriteBuf(AH, buf, len);
}
! static size_t
! _CustomReadFunc(ArchiveHandle *AH, char **buf, size_t *buflen)
{
! size_t blkLen;
! size_t cnt;
! /*
! * To keep things simple, we always read one compressed block at a time.
! */
! blkLen = ReadInt(AH);
! if (blkLen == 0)
! return 0;
!
! /* If the caller's buffer is not large enough, allocate a bigger one */
! if (blkLen > *buflen)
{
! free(*buf);
! *buf = (char *) malloc(blkLen);
! if (!(*buf))
! die_horribly(AH, modulename, "out of memory\n");
! *buflen = blkLen;
}
! cnt = _ReadBuf(AH, *buf, blkLen);
! if (cnt != blkLen)
{
! if (feof(AH->FH))
! die_horribly(AH, modulename,
! "could not read from input file: end of file\n");
! else
! die_horribly(AH, modulename,
! "could not read from input file: %s\n", strerror(errno));
}
! return cnt;
}
/*
* Clone format-specific fields during parallel restoration.
*/
***************
*** 1114,1125 **** _Clone(ArchiveHandle *AH)
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! ctx->zp = (z_streamp) malloc(sizeof(z_stream));
! ctx->zlibOut = (char *) malloc(zlibOutSize + 1);
! ctx->zlibIn = (char *) malloc(ctx->inSize);
!
! if (ctx->zp == NULL || ctx->zlibOut == NULL || ctx->zlibIn == NULL)
! die_horribly(AH, modulename, "out of memory\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
--- 900,907 ----
memcpy(AH->formatData, ctx, sizeof(lclContext));
ctx = (lclContext *) AH->formatData;
! if (ctx->cs != NULL)
! die_horribly(AH, modulename, "compressor active\n");
/*
* Note: we do not make a local lo_buf because we expect at most one BLOBS
***************
*** 1133,1141 **** static void
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
- free(ctx->zlibOut);
- free(ctx->zlibIn);
- free(ctx->zp);
free(ctx);
}
--- 915,924 ----
_DeClone(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ CompressorState *cs = ctx->cs;
+
+ EndCompressorState(AH, cs);
free(ctx);
}
+
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
***************
*** 56,61 ****
--- 56,62 ----
#include "pg_backup_archiver.h"
#include "dumputils.h"
+ #include "compress_io.h"
extern char *optarg;
extern int optind,
***************
*** 2174,2180 **** dumpBlobs(Archive *AH, void *arg)
exit_nicely();
}
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
--- 2175,2183 ----
exit_nicely();
}
! /* we try to avoid writing empty chunks */
! if (cnt > 0)
! WriteData(AH, buf, cnt);
} while (cnt > 0);
lo_close(g_conn, loFd);
On Wed, Dec 1, 2010 at 9:05 AM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
Forgot attachment. This is also available in the above git repo.
I have quickly checked your modifications; on the one hand I like the
reduction of functions, I would have said that we have AH around all
the time and so we could just allocate once and stuff it all into
ctx->cs and reuse the buffers for every object, but re-allocating them
for every (dumpable) object should be fine as well.
Regarding the function pointers that you removed, you are now putting
back in what Dimitri wanted me to take out, namely switch/case
instructions for the algorithms and then #ifdefs for every algorithm.
It's not too many now since we have taken out LZF. Well, I can live
with both ways.
There is one thing however that I am not in favor of, which is the
removal of the "sizeHint" parameter for the read functions. The reason
for this parameter is not very clear now without LZF but I have tried
to put in a few comments to explain the situation (which you have
taken out as well :-) ).
The point is that zlib is a stream based compression algorithm, you
just stuff data in and from time to time you get data out and in the
end you explicitly flush the compressor. The read function can just
return as many bytes as it wants and we can just hand it all over to
zlib. Other compression algorithms however are block based and first
write a block header that contains the information on the next data
block, including uncompressed and compressed sizes. Now with the
sizeHint parameter I used, the compressor could tell the read function
that it just wants to read the fixed size header (6 bytes IIRC). In
the header it would look up the compressed size for the next block and
would then ask the read function to get exactly this amount of data,
decompress it and go on with the next block, and so forth...
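
(For illustration only, here is a hypothetical sketch of what a block-based
reader would look like with the sizeHint variant of ReadFunc from the
directory patch, size_t readF(ArchiveHandle *AH, void **buf, size_t sizeHint).
The 6-byte header size and the helpers decode_block_header() and
decompress_block_and_print() are invented names, not LZF's real framing:)

	#define BLOCK_HEADER_SIZE 6		/* illustrative only */

	static void
	ReadDataFromArchiveBlocked(ArchiveHandle *AH, ReadFunc readF)
	{
		void	   *buf;

		/* ask the read function for exactly one fixed-size block header */
		while (readF(AH, &buf, BLOCK_HEADER_SIZE) == BLOCK_HEADER_SIZE)
		{
			/* hypothetical: extract the compressed size of the next block */
			size_t		compLen = decode_block_header(buf);

			/* now request exactly that many bytes of compressed data */
			if (readF(AH, &buf, compLen) != compLen)
				die_horribly(AH, modulename, "unexpected end of data\n");

			/* hypothetical: decompress and hand the result to ahwrite() */
			decompress_block_and_print(AH, buf, compLen);
		}
	}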
Of course you can possibly do that memory management inside the
compressor with an extra buffer holding what you got in excess but
it's a pain. If you removed that part on purpose on the grounds that
there is no block based compression algorithm in core and probably
never will be, then that's okay :-)
Joachim
On 02.12.2010 04:35, Joachim Wieland wrote:
There is one thing however that I am not in favor of, which is the
removal of the "sizeHint" parameter for the read functions. The reason
for this parameter is not very clear now without LZF but I have tried
to put in a few comments to explain the situation (which you have
taken out as well :-) ).

The point is that zlib is a stream based compression algorithm, you
just stuff data in and from time to time you get data out and in the
end you explicitly flush the compressor. The read function can just
return as many bytes as it wants and we can just hand it all over to
zlib. Other compression algorithms however are block based and first
write a block header that contains the information on the next data
block, including uncompressed and compressed sizes. Now with the
sizeHint parameter I used, the compressor could tell the read function
that it just wants to read the fixed size header (6 bytes IIRC). In
the header it would look up the compressed size for the next block and
would then ask the read function to get exactly this amount of data,
decompress it and go on with the next block, and so forth...

Of course you can possibly do that memory management inside the
compressor with an extra buffer holding what you got in excess but
it's a pain. If you removed that part on purpose on the grounds that
there is no block based compression algorithm in core and probably
never will be, then that's okay :-)
Yeah, we're not going to have lzf built-in anytime soon. The external
command approach seems like the best way to support additional
compression algorithms, and I don't think it could do anything with
sizeHint. And the custom format didn't obey sizeHint anyway, because it
reads one custom-format block at a time.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Ok, committed, with some small cleanup since the last patch I posted.
Could you update the directory-format patch on top of the committed
version, please?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Excerpts from Heikki Linnakangas's message of jue dic 02 16:52:27 -0300 2010:
Ok, committed, with some small cleanup since the last patch I posted.
I think the comments on _ReadBuf and friends need to be updated, since
they are not just for headers and TOC stuff anymore. I'm not sure if
they were already outdated before your patch ...
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
On 02.12.2010 23:12, Alvaro Herrera wrote:
Excerpts from Heikki Linnakangas's message of jue dic 02 16:52:27 -0300 2010:
Ok, committed, with some small cleanup since the last patch I posted.
I think the comments on _ReadBuf and friends need to be updated, since
they are not just for headers and TOC stuff anymore. I'm not sure if
they were already outdated before your patch ...
"These routines are only used to read & write headers & TOC"
Hmm, ReadInt calls _ReadByte, and PrintData used to call ReadInt, so it
was indirectly called for things other than headers and TOC
already. Unless you consider the "headers" to include the length integer
in each data block. I'm inclined to just remove that sentence.
I also note that the _Clone and _DeClone functions are a bit misplaced.
There's a big "END OF FORMAT CALLBACKS" earlier in the file, but _Clone
and _DeClone are such callbacks. I'll move them to the right place.
PS. Thanks for the cleanup you did yesterday.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Thu, Dec 2, 2010 at 2:52 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
Ok, committed, with some small cleanup since the last patch I posted.
Could you update the directory-format patch on top of the committed version,
please?
Thanks for committing the first part. Here is the updated and rebased
directory-format patch.
Joachim
Attachments:
Moving onto the directory archive part of this patch, the feature seems
to work as advertised; here's a quick test case:
createdb pgbench
pgbench -i -s 1 pgbench
pg_dump -F d -f test
pg_restore -k test
pg_restore -l test
createdb copy
pg_restore -d copy test
The copy made that way looked good. There's a good chunk of code in the
patch that revolves around BLOB support. We need to get someone who is
more familiar with those than me to suggest some tests for that part
before this gets committed. If you could suggest how to test that code,
that would be helpful.
There's a number of small things that I'd like to see improved in a new
rev of this code:
pg_dump: the help message for "--file" needs to mention that it is
overloaded to also specify the output directory.
pg_dump: the documentation for --file should say the directory is
created, and must not exist when you start. The code catches this well,
but that expectation is not clear until you try it.
pg_restore: the help message "check the directory archive" would be
clearer as "check an archive in directory format".
There are some tab vs. space whitespace inconsistencies in the
documentation added.
The comments at the beginning of functions could be more consistent.
Early parts of the code have a header for each function that's
extensive. Maybe even a bit more than needed. I'm not sure why it's
important to document here which of these functions is
optional/mandatory for example, and getting rid of just those would trim
a decent number of lines out of the patch. But then at the end, all of
the new functions added aren't documented at all. Some of those are
near trivial, but it would be better to have at least a small
descriptive header for them.
The comment header at the beginning of pg_backup_directory is a bit
weird. I guess Philip Warner should still be credited as the author of
the code this was based on, but it's weird seeing a new file
attributed solely to him. Also, there's an XXX in the identification
field there that should be filled in with the file name.
There's your feedback for this round. I hope we'll see an updated patch
from you as part of the next CommitFest.
--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services and Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
On 16.12.2010 12:12, Greg Smith wrote:
Moving onto the directory archive part of this patch, the feature seems
to work as advertised; here's a quick test case:

createdb pgbench
pgbench -i -s 1 pgbench
pg_dump -F d -f test
pg_restore -k test
pg_restore -l test
createdb copy
pg_restore -d copy test

The copy made that way looked good. There's a good chunk of code in the
patch that revolves around BLOB support. We need to get someone who is
more familiar with those than me to suggest some tests for that part
before this gets committed. If you could suggest how to test that code,
that would be helpful.

There's a number of small things that I'd like to see improved in a new
rev of this code:
...
In addition to those:
The "check" functionality seems orthogonal, it should be splitted off to
a separate patch. It would possibly be useful to be perform sanity
checks on an archive in custom format too, and the directory format
works just as well without it.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 16.12.2010 17:23, Heikki Linnakangas wrote:
On 16.12.2010 12:12, Greg Smith wrote:
There's a number of small things that I'd like to see improved in new
rev of this code
...
In addition to those:
...
One more thing: the motivation behind this patch is to allow parallel
pg_dump in the future, so we should make sure this patch caters well
for that.
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we
should prepare for that in the directory archive format, by allowing the
data of a single table to be split into multiple files. That way
parallel pg_dump is simple, you just split the table in chunks of
roughly the same size, say 10GB each, and launch a process for each
chunk, writing to a separate file.
It should be quite a simple add-on to the current patch, but it will make
life so much easier for parallel pg_dump. It would also help to
work around file size limitations on some filesystems.
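To make the idea concrete, a hypothetical layout for such a dump might
look like this (all file names here are invented for illustration; the
actual patch may use a different naming scheme):

    dumpdir/
        toc.dat       <- table of contents, listing each data file and its size
        3810.dat.1    <- first ~10GB chunk of a large table's data
        3810.dat.2    <- second chunk, written by a different process
        3811.dat      <- a small table, dumped to a single file as today

pg_restore would then read the chunks of each table back in order.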
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
One more thing: the motivation behind this patch is to allow parallel
pg_dump in the future, so we should make sure this patch caters well for
that.
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.
It should be quite a simple add-on to the current patch, but it will make life
so much easier for parallel pg_dump. It would also help to work around
file size limitations on some filesystems.
Sounds reasonable. Are you planning to do this and commit?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 16.12.2010 19:58, Robert Haas wrote:
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
One more thing: the motivation behind this patch is to allow parallel
pg_dump in the future, so we should make sure this patch caters well for
that.
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.
It should be quite a simple add-on to the current patch, but it will make life
so much easier for parallel pg_dump. It would also help to work around
file size limitations on some filesystems.
Sounds reasonable. Are you planning to do this and commit?
I'll defer to Joachim, assuming he has the time & energy.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.
How exactly would you "just split the table in chunks of roughly the
same size" ? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.
Ideally pg_dump should be able to query for all data in only one
relation segment so that each segment is scanned by only one backend
process. However, this requires backend support, and we would be sending
queries that we'd not want clients other than pg_dump to send...
If you were thinking about WHERE queries to get equally sized
partitions, how would we deal with unindexed and/or non-numerical data
in a large table?
Joachim
On 16.12.2010 20:33, Joachim Wieland wrote:
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we should
prepare for that in the directory archive format, by allowing the data of a
single table to be split into multiple files. That way parallel pg_dump is
simple, you just split the table in chunks of roughly the same size, say
10GB each, and launch a process for each chunk, writing to a separate file.
How exactly would you "just split the table in chunks of roughly the
same size"?
Check pg_class.relpages, and divide that evenly across the processes.
That should be good enough.
Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.
Hmm, I was thinking of "SELECT * FROM table WHERE ctid BETWEEN ? AND ?",
but we don't support TidScans for ranges. Perhaps we could add that.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
On 16.12.2010 20:33, Joachim Wieland wrote:
How exactly would you "just split the table in chunks of roughly the
same size" ?
Check pg_class.relpages, and divide that evenly across the processes.
That should be good enough.
Not even close ... relpages could be badly out of date. If you believe
it, you could fail to dump data that's in further-out pages. We'd need
to move pg_relpages() or some equivalent into core to make this
workable.
Which queries should pg_dump send to the backend?
Hmm, I was thinking of "SELECT * FROM table WHERE ctid BETWEEN ? AND ?",
but we don't support TidScans for ranges. Perhaps we could add that.
Yeah, that seems probably workable, given an up-to-date idea of the
possible block range.
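To make the proposed scheme concrete, here is a rough libpq sketch of how
a dump worker could carve a table into per-worker ctid ranges. It is only
an illustration: pg_relpages() lives in the pgstattuple contrib module
rather than core today, the ranged TID quals are exactly what the planner
does not support yet, and the table and connection names are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

#define NWORKERS 4

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=pgbench");   /* placeholder */
    PGresult *res;
    long      relpages, chunk;
    int       i;

    /* ask the server how many blocks the table has right now */
    res = PQexec(conn, "SELECT pg_relpages('bigtab')");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        return 1;
    }
    relpages = atol(PQgetvalue(res, 0, 0));
    PQclear(res);

    /* divide the block range evenly; round up so nothing is missed */
    chunk = relpages / NWORKERS + 1;
    for (i = 0; i < NWORKERS; i++)
    {
        char query[256];

        /* each worker would run one of these on its own connection */
        snprintf(query, sizeof(query),
                 "SELECT * FROM bigtab "
                 "WHERE ctid >= '(%ld,0)' AND ctid < '(%ld,0)'",
                 (long) i * chunk, (long) (i + 1) * chunk);
        printf("worker %d: %s\n", i, query);
    }

    PQfinish(conn);
    return 0;
}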
regards, tom lane
On Thu, Dec 16, 2010 at 2:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
On 16.12.2010 20:33, Joachim Wieland wrote:
How exactly would you "just split the table in chunks of roughly the
same size" ?Check pg_class.relpages, and divide that evenly across the processes.
That should be good enough.Not even close ... relpages could be badly out of date. If you believe
it, you could fail to dump data that's in further-out pages. We'd need
to move pg_relpages() or some equivalent into core to make this
workable.Which queries should pg_dump send to the backend?
Hmm, I was thinking of "SELECT * FROM table WHERE ctid BETWEEN ? AND ?",
but we don't support TidScans for ranges. Perhaps we could add that.Yeah, that seems probably workable, given an up-to-date idea of the
possible block range.
So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later? Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 16.12.2010 22:13, Robert Haas wrote:
So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later? Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.
Would probably be fine, as long as we don't paint ourselves into a corner.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 12/16/2010 03:13 PM, Robert Haas wrote:
So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later? Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.
I don't think we have to have that in the first go at all. Parallel dump
could be extremely useful without it. I haven't looked closely, but I
assume there will still be an archive version recorded somewhere. When
we change the archive format, bump the version number.
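For reference, the archiver does record a version; a sketch of the scheme
as it appears in src/bin/pg_dump/pg_backup_archiver.h (the numbers below
are illustrative rather than quoted from the tree):

/* Bumping K_VERS_MINOR is what "change the archive format" amounts to. */
#define K_VERS_MAJOR 1
#define K_VERS_MINOR 12
#define K_VERS_REV   0

/* WriteHead() starts every archive with the magic string "PGDMP"
 * followed by these three version bytes; ReadHead() refuses any
 * archive whose version it does not recognize, so older tools would
 * fail cleanly on a newer, multi-file-per-table format. */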
cheers
andrew
Andrew Dunstan <andrew@dunslane.net> writes:
On 12/16/2010 03:13 PM, Robert Haas wrote:
So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later? Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.
I don't think we have to have that in the first go at all. Parallel dump
could be extremely useful without it. I haven't looked closely, but I
assume there will still be an archive version recorded somewhere. When
we change the archive format, bump the version number.
Sure, but it's worth thinking about the feature now. If there are
format tweaks to be made, it might be less painful to make them now
instead of later, even if actual support for the feature isn't there.
(I agree I don't want to try to implement it just yet.)
regards, tom lane
On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we
should prepare for that in the directory archive format, by allowing the
data of a single table to be split into multiple files. That way
parallel pg_dump is simple, you just split the table in chunks of
roughly the same size, say 10GB each, and launch a process for each
chunk, writing to a separate file.
How exactly would you "just split the table in chunks of roughly the
same size"? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.
I would suggest implementing < and > support for TID scans and doing it in
segment-sized chunks...
Andres
On 17.12.2010 00:29, Andres Freund wrote:
On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we
should prepare for that in the directory archive format, by allowing the
data of a single table to be split into multiple files. That way
parallel pg_dump is simple, you just split the table in chunks of
roughly the same size, say 10GB each, and launch a process for each
chunk, writing to a separate file.
How exactly would you "just split the table in chunks of roughly the
same size"? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.
I would suggest implementing < and > support for TID scans and doing it in
segment-sized chunks...
I don't think there's any particular gain from matching the server's
data file segment size, although 1GB does sound like a good chunk size
for this too.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Thursday 16 December 2010 23:34:02 Heikki Linnakangas wrote:
On 17.12.2010 00:29, Andres Freund wrote:
On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakangas@enterprisedb.com> wrote:
As soon as we have parallel pg_dump, the next big thing is going to be
parallel dump of the same table using multiple processes. Perhaps we
should prepare for that in the directory archive format, by allowing
the data of a single table to be split into multiple files. That way
parallel pg_dump is simple, you just split the table in chunks of
roughly the same size, say 10GB each, and launch a process for each
chunk, writing to a separate file.
How exactly would you "just split the table in chunks of roughly the
same size"? Which queries should pg_dump send to the backend? If it
just sends a bunch of WHERE queries, the server would still scan the
same data several times since each pg_dump client would result in a
seqscan over the full table.
I would suggest implementing < and > support for TID scans and doing it in
segment-sized chunks...
I don't think there's any particular gain from matching the server's
data file segment size, although 1GB does sound like a good chunk size
for this too.
It's noticeably more efficient to read from different files in different
processes than to have them all hammering the same file.
Andres
On 12/16/2010 03:52 PM, Tom Lane wrote:
Andrew Dunstan <andrew@dunslane.net> writes:
On 12/16/2010 03:13 PM, Robert Haas wrote:
So how bad would it be if we committed this new format without support
for splitting large relations into multiple files, or with some stub
support that never actually gets used, and fixed this later? Because
this is starting to sound like a bigger project than I think we ought
to be requiring for this patch.
I don't think we have to have that in the first go at all. Parallel dump
could be extremely useful without it. I haven't looked closely, but I
assume there will still be an archive version recorded somewhere. When
we change the archive format, bump the version number.
Sure, but it's worth thinking about the feature now. If there are
format tweaks to be made, it might be less painful to make them now
instead of later, even if actual support for the feature isn't there.
(I agree I don't want to try to implement it just yet.)
Yeah, OK. Well, time is getting short but (hand waving wildly) I think
we could probably get by with just adding a member to the TOC for the
section number of the entry (set it to 0 for non-TABLE-DATA TOC
entries). The section number could be built into the file name in
directory format. For now that number would always be 1 for TABLE DATA
members.
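A sketch of what that could look like (the field name and the file-name
convention below are invented for illustration, not taken from any patch):

#include <stdio.h>

/* TocEntry (src/bin/pg_dump/pg_backup_archiver.h) would grow one member: */
typedef struct _tocEntrySketch
{
    /* ... existing members: dumpId, desc, tag, and so on ... */
    int         section;    /* 0 for non-data entries; 1..n for the
                             * chunks of a TABLE DATA entry -- always
                             * 1 for now */
} TocEntrySketch;

/* The directory format could then derive data file names like so: */
static void
data_filename(char *buf, size_t len, int dumpId, int section)
{
    if (section > 0)
        snprintf(buf, len, "%d.dat.%d", dumpId, section);
    else
        snprintf(buf, len, "%d.dat", dumpId);
}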
This has intriguing possibilities for parallel restore of custom format
dumps too. It could be very useful to be able to restore a single table
in parallel, if we had more than one TABLE DATA member per table.
I'm deliberately just addressing infrastructure issues rather than how
we actually generate multiple sections of data for a single table
(especially if we want to do that in parallel).
cheers
andrew