pg_basebackup for streaming base backups
This patch creates pg_basebackup in bin/, a client program for
the streaming base backup feature.
I think it's more or less done now. I've again split it out of
pg_streamrecv, because it had very little shared code with that
(basically just the PQconnectdb() wrapper).
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
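For what it's worth, the -c string is not handed to PQconnectdb() verbatim; GetConnection() in the patch appends the replication options to it first. A minimal standalone sketch of that step (the helper name is illustrative, the real code does this inline):

```c
#include <stdio.h>
#include <string.h>

/*
 * Mirrors what GetConnection() does with the -c string: the user-supplied
 * conninfo is extended with the replication options before being passed
 * to PQconnectdb().
 */
static void
make_replication_conninfo(const char *conninfo, char *buf, size_t buflen)
{
	snprintf(buf, buflen, "%s dbname=replication replication=true", conninfo);
}
```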
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Attachments:
pg_basebackup.patch (text/x-patch; charset=US-ASCII)
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index db7c834..c14ae43 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -813,6 +813,16 @@ SELECT pg_stop_backup();
</para>
<para>
+ You can also use the <xref linkend="app-pgbasebackup"> tool to take
+ the backup, instead of manually copying the files. This tool takes
+ care of the <function>pg_start_backup()</>, copy and
+ <function>pg_stop_backup()</> steps automatically, and transfers the
+ backup over a regular <productname>PostgreSQL</productname> connection
+ using the replication protocol, instead of requiring filesystem-level
+ access.
+ </para>
+
+ <para>
Some file system backup tools emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
When taking a base backup of an active database, this situation is normal
diff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml
index f40fa9d..c44d11e 100644
--- a/doc/src/sgml/ref/allfiles.sgml
+++ b/doc/src/sgml/ref/allfiles.sgml
@@ -160,6 +160,7 @@ Complete list of usable sgml source files in this directory.
<!entity dropuser system "dropuser.sgml">
<!entity ecpgRef system "ecpg-ref.sgml">
<!entity initdb system "initdb.sgml">
+<!entity pgBasebackup system "pg_basebackup.sgml">
<!entity pgConfig system "pg_config-ref.sgml">
<!entity pgControldata system "pg_controldata.sgml">
<!entity pgCtl system "pg_ctl-ref.sgml">
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
new file mode 100644
index 0000000..42a714b
--- /dev/null
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -0,0 +1,249 @@
+<!--
+doc/src/sgml/ref/pg_basebackup.sgml
+PostgreSQL documentation
+-->
+
+<refentry id="app-pgbasebackup">
+ <refmeta>
+ <refentrytitle>pg_basebackup</refentrytitle>
+ <manvolnum>1</manvolnum>
+ <refmiscinfo>Application</refmiscinfo>
+ </refmeta>
+
+ <refnamediv>
+ <refname>pg_basebackup</refname>
+ <refpurpose>take a base backup of a <productname>PostgreSQL</productname> cluster</refpurpose>
+ </refnamediv>
+
+ <indexterm zone="app-pgbasebackup">
+ <primary>pg_basebackup</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+ <cmdsynopsis>
+ <command>pg_basebackup</command>
+ <arg rep="repeat"><replaceable>option</></arg>
+ </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+ <title>
+ Description
+ </title>
+ <para>
+ <application>pg_basebackup</application> is used to take base backups of
+ a running <productname>PostgreSQL</productname> database cluster. These
+ are taken without affecting other clients of the database, and can be used
+ both for point-in-time recovery (see <xref linkend="continuous-archiving">)
+ and as the starting point for a log shipping or streaming replication standby
+ server (see <xref linkend="warm-standby">).
+ </para>
+
+ <para>
+ <application>pg_basebackup</application> makes a binary copy of the database
+ cluster files, while making sure the system is put in and out of
+ backup mode automatically. Backups are always taken of the entire
+ database cluster; it is not possible to back up individual databases or
+ database objects. For individual database backups, a tool such as
+ <xref linkend="APP-PGDUMP"> must be used.
+ </para>
+
+ <para>
+ The backup is made over a regular <productname>PostgreSQL</productname>
+ connection, and uses the replication protocol. The connection must be
+ made with a user having <literal>REPLICATION</literal> permissions (see
+ <xref linkend="role-attributes">).
+ </para>
+
+ <para>
+ Only one backup can be concurrently active in
+ <productname>PostgreSQL</productname>, meaning that only one instance of
+ <application>pg_basebackup</application> can run at the same time
+ against a single database cluster.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>Options</title>
+
+ <para>
+ <variablelist>
+ <varlistentry>
+ <term><option>-c <replaceable class="parameter">conninfo</replaceable></option></term>
+ <term><option>--conninfo=<replaceable class="parameter">conninfo</replaceable></option></term>
+ <listitem>
+ <para>
+ Specify the conninfo string used to connect to the server. For example:
+<programlisting>
+$ <userinput>pg_basebackup -c "host=192.168.0.2 user=backup"</userinput>
+</programlisting>
+ See <xref linkend="libpq-connect"> for more information on all the
+ available connection options.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-d <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--basedir=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to write the base data directory to. When the cluster has
+ no additional tablespaces, the whole database will be placed in this
+ directory. If the cluster contains additional tablespaces, the main
+ data directory will be placed in this directory, but all other
+ tablespaces will be placed in the same absolute path as they have
+ on the server.
+ </para>
+ <para>
+ Only one of <literal>-d</> and <literal>-t</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-l <replaceable class="parameter">label</replaceable></option></term>
+ <term><option>--label=<replaceable class="parameter">label</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets the label for the backup. If none is specified, a default value of
+ <literal>pg_basebackup base backup</literal> will be used.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-p</option></term>
+ <term><option>--progress</option></term>
+ <listitem>
+ <para>
+ Enables progress reporting. Turning this on will deliver an approximate
+ progress report during the backup. Since the database may change during
+ the backup, this is only an approximation and may not end at exactly
+ <literal>100%</literal>.
+ </para>
+ <para>
+ When this is enabled, the backup will start by enumerating the size of
+ the entire database, and then go back and send the actual contents.
+ This may make the backup take slightly longer, and in particular it
+ will take longer before the first data is sent.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-t <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--tardir=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to place tar format files in. When this is specified, the
+ backup will consist of a number of tar files, one for each tablespace
+ in the database, stored in this directory. The tar file for the main
+ data directory will be named <filename>base.tar</>, and all other
+ tablespaces will be named after the tablespace oid.
+ </para>
+ <para>
+ If the value <literal>-</> (dash) is specified as tar directory,
+ the tar contents will be written to standard output, suitable for
+ piping to, for example, <productname>gzip</>. This is only possible if
+ the cluster has no additional tablespaces.
+ </para>
+ <para>
+ Only one of <literal>-d</> and <literal>-t</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-v</option></term>
+ <term><option>--verbose</option></term>
+ <listitem>
+ <para>
+ Enables verbose mode, which outputs some extra steps during startup and
+ shutdown, and shows the exact file name that is currently being
+ processed if progress reporting is also enabled.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-Z <replaceable class="parameter">level</replaceable></option></term>
+ <term><option>--compress=<replaceable class="parameter">level</replaceable></option></term>
+ <listitem>
+ <para>
+ Enables gzip compression of tar file output. Compression is only
+ available when generating tar files, and is not available when sending
+ output to standard output.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Other, less commonly used, parameters are also available:
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-V</></term>
+ <term><option>--version</></term>
+ <listitem>
+ <para>
+ Print the <application>pg_basebackup</application> version and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-?</></term>
+ <term><option>--help</></term>
+ <listitem>
+ <para>
+ Show help about <application>pg_basebackup</application> command line
+ arguments, and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Environment</title>
+
+ <para>
+ This utility, like most other <productname>PostgreSQL</> utilities,
+ uses the environment variables supported by <application>libpq</>
+ (see <xref linkend="libpq-envars">).
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Notes</title>
+
+ <para>
+ The backup will include all files in the data directory and tablespaces,
+ including the configuration files and any additional files placed in the
+ directory by third parties. Only regular files and directories are allowed
+ in the data directory; symbolic links and special device files are not.
+ </para>
+
+ <para>
+ Because of the way <productname>PostgreSQL</productname> manages tablespaces, the path
+ for all additional tablespaces must be identical whenever a backup is
+ restored. The main data directory, however, is relocatable to any location.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>See Also</title>
+
+ <simplelist type="inline">
+ <member><xref linkend="APP-PGDUMP"></member>
+ </simplelist>
+ </refsect1>
+
+</refentry>
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml
index 84babf6..6ee8e5b 100644
--- a/doc/src/sgml/reference.sgml
+++ b/doc/src/sgml/reference.sgml
@@ -202,6 +202,7 @@
&droplang;
&dropuser;
&ecpgRef;
+ &pgBasebackup;
&pgConfig;
&pgDump;
&pgDumpall;
diff --git a/src/bin/Makefile b/src/bin/Makefile
index c18c05c..3809412 100644
--- a/src/bin/Makefile
+++ b/src/bin/Makefile
@@ -14,7 +14,7 @@ top_builddir = ../..
include $(top_builddir)/src/Makefile.global
SUBDIRS = initdb pg_ctl pg_dump \
- psql scripts pg_config pg_controldata pg_resetxlog
+ psql scripts pg_config pg_controldata pg_resetxlog pg_basebackup
ifeq ($(PORTNAME), win32)
SUBDIRS+=pgevent
endif
diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile
new file mode 100644
index 0000000..ccb1502
--- /dev/null
+++ b/src/bin/pg_basebackup/Makefile
@@ -0,0 +1,38 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/bin/pg_basebackup
+#
+# Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/bin/pg_basebackup/Makefile
+#
+#-------------------------------------------------------------------------
+
+PGFILEDESC = "pg_basebackup - takes a streaming base backup of a PostgreSQL instance"
+PGAPPICON=win32
+
+subdir = src/bin/pg_basebackup
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS= pg_basebackup.o $(WIN32RES)
+
+all: pg_basebackup
+
+pg_basebackup: $(OBJS) | submake-libpq submake-libpgport
+ $(CC) $(CFLAGS) $(OBJS) $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+
+install: all installdirs
+ $(INSTALL_PROGRAM) pg_basebackup$(X) '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+installdirs:
+ $(MKDIR_P) '$(DESTDIR)$(bindir)'
+
+uninstall:
+ rm -f '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+clean distclean maintainer-clean:
+ rm -f pg_basebackup$(X) $(OBJS)
diff --git a/src/bin/pg_basebackup/nls.mk b/src/bin/pg_basebackup/nls.mk
new file mode 100644
index 0000000..760ee1d
--- /dev/null
+++ b/src/bin/pg_basebackup/nls.mk
@@ -0,0 +1,5 @@
+# src/bin/pg_basebackup/nls.mk
+CATALOG_NAME := pg_basebackup
+AVAIL_LANGUAGES :=
+GETTEXT_FILES := pg_basebackup.c
+GETTEXT_TRIGGERS:= _
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index 0000000..7fcb20a
--- /dev/null
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -0,0 +1,893 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_basebackup.c - receive a base backup using streaming replication protocol
+ *
+ * Author: Magnus Hagander <magnus@hagander.net>
+ *
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * src/bin/pg_basebackup.c
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+#include "libpq-fe.h"
+
+#include <unistd.h>
+#include <dirent.h>
+#include <sys/stat.h>
+
+#ifdef HAVE_LIBZ
+#include <zlib.h>
+#endif
+
+#include "getopt_long.h"
+
+
+/* Global options */
+static const char *progname;
+char *basedir = NULL;
+char *tardir = NULL;
+char *label = "pg_basebackup base backup";
+bool showprogress = false;
+int verbose = 0;
+int compresslevel = 0;
+char *conninfo = NULL;
+
+/* Progress counters */
+static uint64 totalsize;
+static uint64 totaldone;
+static int tablespacecount;
+
+/* Function headers */
+static char *xstrdup(const char *s);
+static void usage(void);
+static void verify_dir_is_empty_or_create(char *dirname);
+static void progress_report(int tablespacenum, char *fn);
+static PGconn *GetConnection(void);
+
+static void ReceiveTarFile(PGconn *conn, PGresult *res, int rownum);
+static void ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum);
+static void BaseBackup();
+
+#ifdef HAVE_LIBZ
+static const char *
+get_gz_error(gzFile *gzf)
+{
+ int errnum;
+ const char *errmsg;
+
+ errmsg = gzerror(gzf, &errnum);
+ if (errnum == Z_ERRNO)
+ return strerror(errno);
+ else
+ return errmsg;
+}
+#endif
+
+/*
+ * strdup() replacement that prints an error and exits if something goes
+ * wrong. Can never return NULL.
+ */
+static char *
+xstrdup(const char *s)
+{
+ char *result;
+
+ result = strdup(s);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ return result;
+}
+
+
+static void
+usage(void)
+{
+ printf(_("%s takes base backups of running PostgreSQL servers\n\n"),
+ progname);
+ printf(_("Usage:\n"));
+ printf(_(" %s [OPTION]...\n"), progname);
+ printf(_("\nOptions:\n"));
+ printf(_(" -c, --conninfo=conninfo connection info string to server\n"));
+ printf(_(" -d, --basedir=directory receive base backup into directory\n"));
+ printf(_(" -t, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -p, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
+ printf(_("\nOther options:\n"));
+ printf(_(" -?, --help show this help, then exit\n"));
+ printf(_(" -V, --version output version information, then exit\n"));
+}
+
+
+/*
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be given and the process ended.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)
+{
+ switch (pg_check_dir(dirname))
+ {
+ case 0:
+
+ /*
+ * Does not exist, so create
+ */
+ if (pg_mkdir_p(dirname, S_IRWXU) == -1)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ exit(1);
+ }
+ return;
+ case 1:
+
+ /*
+ * Exists, empty
+ */
+ return;
+ case 2:
+
+ /*
+ * Exists, not empty
+ */
+ fprintf(stderr,
+ _("%s: directory \"%s\" exists but is not empty\n"),
+ progname, dirname);
+ exit(1);
+ case -1:
+
+ /*
+ * Access problem
+ */
+ fprintf(stderr, _("%s: could not access directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ exit(1);
+ }
+}
+
+
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is enabled, also print the current file name.
+ */
+static void
+progress_report(int tablespacenum, char *fn)
+{
+ if (verbose)
+ fprintf(stderr,
+ INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces (%-30s)\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount, fn);
+ else
+ fprintf(stderr, INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount);
+}
+
+
+/*
+ * Receive a tar format file from the connection to the server, and write
+ * the data from this file directly into a tar file. If compression is
+ * enabled, the data will be compressed while written to the file.
+ *
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
+ *
+ * No attempt to inspect or validate the contents of the file is done.
+ */
+static void
+ReceiveTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char fn[MAXPGPATH];
+ char *copybuf = NULL;
+ FILE *tarfile = NULL;
+
+#ifdef HAVE_LIBZ
+ gzFile *ztarfile = NULL;
+#endif
+
+ if (PQgetisnull(res, rownum, 0))
+ {
+ /*
+ * Base tablespaces
+ */
+ if (strcmp(tardir, "-") == 0)
+ tarfile = stdout;
+ else
+ {
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar.gz", tardir);
+ ztarfile = gzopen(fn, "wb");
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
+ exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar", tardir);
+ tarfile = fopen(fn, "wb");
+ }
+ }
+ }
+ else
+ {
+ /*
+ * Specific tablespace
+ */
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+ ztarfile = gzopen(fn, "wb");
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar", tardir, PQgetvalue(res, rownum, 0));
+ tarfile = fopen(fn, "wb");
+ }
+ }
+
+#ifdef HAVE_LIBZ
+ if (!tarfile && !ztarfile)
+#else
+ if (!tarfile)
+#endif
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ exit(1);
+ }
+
+ /*
+ * Get the COPY data stream
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)
+ {
+ /*
+ * End of chunk. Close file (but not stdout).
+ *
+ * Also, write two completely empty blocks at the end of the tar
+ * file, as required by some tar programs.
+ */
+ char zerobuf[1024];
+
+ MemSet(zerobuf, 0, sizeof(zerobuf));
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, zerobuf, sizeof(zerobuf)) != sizeof(zerobuf))
+ {
+ fprintf(stderr, _("%s: could not write to compressed file '%s': %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(zerobuf, sizeof(zerobuf), 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+ }
+
+ if (strcmp(tardir, "-") != 0)
+ {
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ gzclose(ztarfile);
+#endif
+ if (tarfile != NULL)
+ fclose(tarfile);
+ }
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, copybuf, r) != r)
+ {
+ fprintf(stderr, _("%s: could not write to compressed file '%s': %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(copybuf, r, 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+ } /* while (1) */
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+/*
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
+ *
+ * If the data is for the main data directory, it will be restored in the
+ * specified directory. If it's for another tablespace, it will be restored
+ * in the original directory, since relocation of tablespaces is not
+ * supported.
+ */
+static void
+ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char current_path[MAXPGPATH];
+ char fn[MAXPGPATH];
+ int current_len_left;
+ int current_padding;
+ char *copybuf = NULL;
+ FILE *file = NULL;
+
+ if (PQgetisnull(res, rownum, 0))
+ strcpy(current_path, basedir);
+ else
+ strcpy(current_path, PQgetvalue(res, rownum, 1));
+
+ /*
+ * Make sure we're unpacking into an empty directory
+ */
+ verify_dir_is_empty_or_create(current_path);
+
+ /*
+ * Get the COPY data
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
r = PQgetCopyData(conn, &copybuf, 0);
+
+ if (r == -1)
+ {
+ /*
+ * End of chunk
+ */
+ if (file)
+ fclose(file);
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ if (file == NULL)
+ {
+#ifndef WIN32
+ mode_t filemode;
+#endif
+
+ /*
+ * No current file, so this must be the header for a new file
+ */
+ if (r != 512)
+ {
fprintf(stderr, _("%s: invalid tar block header size: %i\n"),
+ progname, r);
+ exit(1);
+ }
+ totaldone += 512;
+
if (sscanf(copybuf + 124, "%11o", &current_len_left) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file size!\n"),
+ progname);
+ exit(1);
+ }
+
/* Parse the file mode out of the tar header */
if (sscanf(&copybuf[100], "%07o ", &filemode) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file mode!\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * All files are padded up to 512 bytes
+ */
+ current_padding =
+ ((current_len_left + 511) & ~511) - current_len_left;
+
+ /*
+ * First part of header is zero terminated filename
+ */
+ snprintf(fn, sizeof(fn), "%s/%s", current_path, copybuf);
+ if (fn[strlen(fn) - 1] == '/')
+ {
+ /*
+ * Ends in a slash means directory or symlink to directory
+ */
+ if (copybuf[156] == '5')
+ {
+ /*
+ * Directory
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (mkdir(fn, S_IRWXU) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %m\n"),
+ progname, fn);
+ exit(1);
+ }
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on directory '%s': %m\n"),
+ progname, fn);
+#endif
+ }
+ else if (copybuf[156] == '2')
+ {
+ /*
+ * Symbolic link
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
if (symlink(&copybuf[157], fn) != 0)
{
fprintf(stderr,
_("%s: could not create symbolic link from %s to %s: %m\n"),
progname, fn, &copybuf[157]);
+ exit(1);
+ }
+ }
+ else
+ {
+ fprintf(stderr, _("%s: unknown link indicator '%c'\n"),
+ progname, copybuf[156]);
+ exit(1);
+ }
+ continue; /* directory or link handled */
+ }
+
+ /*
+ * regular file
+ */
+ file = fopen(fn, "wb");
+ if (!file)
+ {
+ fprintf(stderr, _("%s: could not create file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on file '%s': %m\n"),
+ progname, fn);
+#endif
+
+ if (current_len_left == 0)
+ {
+ /*
+ * Done with this file, next one will be a new tar header
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* new file */
+ else
+ {
+ /*
+ * Continuing blocks in existing file
+ */
+ if (current_len_left == 0 && r == current_padding)
+ {
+ /*
+ * Received the padding block for this file, ignore it and
+ * close the file, then move on to the next tar header.
+ */
+ fclose(file);
+ file = NULL;
+ totaldone += r;
+ continue;
+ }
+
+ if (fwrite(copybuf, r, 1, file) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+
+ current_len_left -= r;
+ if (current_len_left == 0 && current_padding == 0)
+ {
+ /*
+ * Received the last block, and there is no padding to be
+ * expected. Close the file and move on to the next tar
+ * header.
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* continuing data in existing file */
+ } /* loop over all data blocks */
+
+ if (file != NULL)
+ {
fprintf(stderr, _("%s: last file was never finished!\n"), progname);
+ exit(1);
+ }
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+
+static PGconn *
+GetConnection(void)
+{
+ char buf[MAXPGPATH];
+ PGconn *conn;
+
+ snprintf(buf, sizeof(buf), "%s dbname=replication replication=true", conninfo);
+
+ if (verbose)
+ fprintf(stderr, _("%s: Connecting to \"%s\"\n"), progname, buf);
+
+ conn = PQconnectdb(buf);
+ if (!conn || PQstatus(conn) != CONNECTION_OK)
+ {
+ fprintf(stderr, _("%s: could not connect to server: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ return conn;
+}
+
+static void
+BaseBackup()
+{
+ PGconn *conn;
+ PGresult *res;
+ char current_path[MAXPGPATH];
+ char escaped_label[MAXPGPATH];
+ int i;
+
+ /*
+ * Connect in replication mode to the server
+ */
+ conn = GetConnection();
+
+ PQescapeStringConn(conn, escaped_label, label, sizeof(escaped_label), &i);
+ snprintf(current_path, sizeof(current_path), "BASE_BACKUP LABEL '%s' %s",
+ escaped_label,
+ showprogress ? "PROGRESS" : "");
+
+ if (PQsendQuery(conn, current_path) == 0)
+ {
fprintf(stderr, _("%s: could not start base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Get the header
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ fprintf(stderr, _("%s: could not initiate base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ PQfinish(conn);
+ exit(1);
+ }
+ if (PQntuples(res) < 1)
+ {
+ fprintf(stderr, _("%s: no data returned from server.\n"), progname);
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Sum up the total size, for progress reporting
+ */
+ totalsize = totaldone = 0;
+ tablespacecount = PQntuples(res);
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (showprogress)
+ totalsize += atol(PQgetvalue(res, i, 2));
+
+ /*
+ * Verify tablespace directories are empty Don't bother with the first
+ * once since it can be relocated, and it will be checked before we do
+ * anything anyway.
+ */
+ if (basedir != NULL && i > 0)
+ verify_dir_is_empty_or_create(PQgetvalue(res, i, 1));
+ }
+
+ /*
+ * When writing to stdout, require a single tablespace
+ */
+ if (tardir != NULL && strcmp(tardir, "-") == 0 && PQntuples(res) > 1)
+ {
fprintf(stderr, _("%s: can only write a single tablespace to stdout, database has %i.\n"),
+ progname, PQntuples(res));
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Start receiving chunks
+ */
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (tardir != NULL)
+ ReceiveTarFile(conn, res, i);
+ else
+ ReceiveAndUnpackTarFile(conn, res, i);
+ } /* Loop over all tablespaces */
+
+ if (showprogress)
+ {
+ progress_report(PQntuples(res), "");
+ fprintf(stderr, "\n"); /* Need to move to next line */
+ }
+ PQclear(res);
+
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ /*
+ * End of copy data. Final result is already checked inside the loop.
+ */
+ PQfinish(conn);
+
+ if (verbose)
+ fprintf(stderr, "%s: base backup completed.\n", progname);
+}
+
+
+int
+main(int argc, char **argv)
+{
+ static struct option long_options[] = {
+ {"help", no_argument, NULL, '?'},
+ {"version", no_argument, NULL, 'V'},
+ {"conninfo", required_argument, NULL, 'c'},
+ {"basedir", required_argument, NULL, 'd'},
+ {"tardir", required_argument, NULL, 't'},
+ {"compress", required_argument, NULL, 'Z'},
+ {"label", required_argument, NULL, 'l'},
+ {"verbose", no_argument, NULL, 'v'},
+ {"progress", no_argument, NULL, 'p'},
+ {NULL, 0, NULL, 0}
+ };
+ int c;
+
+ int option_index;
+
+ progname = get_progname(argv[0]);
+ set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));
+
+ if (argc > 1)
+ {
+ if (strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "--help") == 0 ||
+ strcmp(argv[1], "-?") == 0)
+ {
+ usage();
+ exit(0);
+ }
+ else if (strcmp(argv[1], "-V") == 0
+ || strcmp(argv[1], "--version") == 0)
+ {
+ puts("pg_basebackup (PostgreSQL) " PG_VERSION);
+ exit(0);
+ }
+ }
+
+ while ((c = getopt_long(argc, argv, "c:d:t:l:Z:vp",
+ long_options, &option_index)) != -1)
+ {
+ switch (c)
+ {
+ case 'c':
+ conninfo = xstrdup(optarg);
+ break;
+ case 'd':
+ basedir = xstrdup(optarg);
+ break;
+ case 't':
+ tardir = xstrdup(optarg);
+ break;
+ case 'l':
+ label = xstrdup(optarg);
+ break;
+ case 'Z':
+ compresslevel = atoi(optarg);
+ break;
+ case 'v':
+ verbose++;
+ break;
+ case 'p':
+ showprogress = true;
+ break;
+ default:
+
+ /*
+ * getopt_long already emitted a complaint
+ */
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+ }
+
+ /*
+ * Any non-option arguments?
+ */
+ if (optind < argc)
+ {
+ fprintf(stderr,
+ _("%s: too many command-line arguments (first is \"%s\")\n"),
progname, argv[optind]);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Required arguments
+ */
+ if (basedir == NULL && tardir == NULL)
+ {
+ fprintf(stderr, _("%s: no target directory specified\n"), progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ if (conninfo == NULL)
+ {
+ fprintf(stderr, _("%s: no conninfo string specified\n"), progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Mutually exclusive arguments
+ */
+ if (basedir != NULL && tardir != NULL)
+ {
+ fprintf(stderr,
+ _("%s: both directory mode and tar mode cannot be specified\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ if (basedir != NULL && compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: only tar mode backups can be compressed\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+#ifndef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
+#else
+ if (compresslevel > 0 && strcmp(tardir, "-") == 0)
+ {
+ fprintf(stderr,
+ _("%s: compression is not supported on standard output\n"),
+ progname);
+ exit(1);
+ }
+#endif
+
+ /*
+ * Verify directories
+ */
+ if (basedir)
+ verify_dir_is_empty_or_create(basedir);
+ else if (strcmp(tardir, "-") != 0)
+ verify_dir_is_empty_or_create(tardir);
+
+
+
+ BaseBackup();
+
+ return 0;
+}
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 29c3c77..40fb130 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -273,6 +273,8 @@ sub mkvcbuild
$initdb->AddLibrary('wsock32.lib');
$initdb->AddLibrary('ws2_32.lib');
+ my $pgbasebackup = AddSimpleFrontend('pg_basebackup', 1);
+
my $pgconfig = AddSimpleFrontend('pg_config');
my $pgcontrol = AddSimpleFrontend('pg_controldata');
Hi,
I have an unexpected 5-minute window to do a first reading of the patch,
so here goes a quick proofreading of its docs and comments. :)
Magnus Hagander <magnus@hagander.net> writes:
This patch creates pg_basebackup in bin/, being a client program for
the streaming base backup feature.
Great! We have pg_ctl init[db], I think we want pg_ctl clone or some
other command here to call the binary for us. What do you think?
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
Consistency is good.
Now, basic first patch reading level review:
I think doc/src/sgml/backup.sgml should include some notes about how
libpq base backup streaming compares to rsync and the like in terms of
efficiency or typical performance, when to prefer which, etc. I'll see
about doing some tests next week.
+ <term><option>--basedir=<replaceable class="parameter">directory</replaceable></option></term>
That should be -D --pgdata, for consistency with pg_dump.
On a quick reading it's unclear from the docs alone how -d and -t live
together. It seems like the options are exclusive but I'd have to ask…
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
Well we have UNIQUE, btree (spcname), so maybe we can use that here?
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sat, Jan 15, 2011 at 21:16, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote:
Hi,
I have an unexpected 5-minute window to do a first reading of the patch,
so here goes a quick proofreading of its docs and comments. :)
:-)
Magnus Hagander <magnus@hagander.net> writes:
This patch creates pg_basebackup in bin/, being a client program for
the streaming base backup feature.
Great! We have pg_ctl init[db], I think we want pg_ctl clone or some
other command here to call the binary for us. What do you think?
That might be useful, but I think we need to settle on the
pg_basebackup contents itself first.
Not sure pg_ctl clone would be the proper name, since it's not
actually a clone at this point (it might be with the second patch I
just posted that includes the WAL files)
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
Consistency is good.
Now, basic first patch reading level review:
I think doc/src/sgml/backup.sgml should include some notes about how
libpq base backup streaming compares to rsync and the like in terms of
efficiency or typical performance, when to prefer which, etc. I'll see
about doing some tests next week.
Yeah, the whole backup chapter may well need some more work after this.
+ <term><option>--basedir=<replaceable class="parameter">directory</replaceable></option></term>
That should be -D --pgdata, for consistency with pg_dump.
pg_dump doesn't have a -D. I assume you mean pg_ctl / initdb?
On a quick reading it's unclear from the docs alone how -d and -t live
together. It seems like the options are exclusive but I'd have to ask…
They are. The docs clearly say "Only one of <literal>-d</> and
<literal>-t</> can be specified"
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
Well we have UNIQUE, btree (spcname), so maybe we can use that here?
We could, but that would make it more likely to run into encoding
issues and such - do we restrict what can be in a tablespace name?
Also with a tar named by the oid, you *can* untar it into a directory
in pg_tblspc to recover from if you have to.
Another option, I think Heikki mentioned this on IM at some point, is
to do something like name it <oid>-<name>.tar. That would give us best
of both worlds?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On 15-01-2011 15:10, Magnus Hagander wrote:
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
+1.
--
Euler Taveira de Oliveira
http://www.timbira.com/
Magnus Hagander <magnus@hagander.net> writes:
Not sure pg_ctl clone would be the proper name, since it's not
actually a clone at this point (it might be with the second patch I
just posted that includes the WAL files)
Let's keep the clone name for the client that makes it all then :)
That should be -D --pgdata, for consistency with pg_dump.
pg_dump doesn't have a -D. I assume you mean pg_ctl / initdb?
Yes, sorry, been too fast.
They are. The docs clearly say "Only one of <literal>-d</> and
<literal>-t</> can be specified"
Really too fast…
Another option, I think Heikki mentioned this on IM at some point, is
to do something like name it <oid>-<name>.tar. That would give us best
of both worlds?
Well I'd think we know the pg_tablespace columns' encoding, so the
problem might be the filesystem encodings, right? Well there's also the
option of creating <oid>.tar and have a symlink to it called <name>.tar
but that's pushing it. I don't think naming after OIDs is a good
service for users, but if that's all we can reasonably do…
Will continue reviewing and post something more polished and
comprehensive next week — mainly wanted to see if you wanted to include
pg_ctl <command> in the patch already.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sat, Jan 15, 2011 at 23:10, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote:
That should be -D --pgdata, for consistency with pg_dump.
pg_dump doesn't have a -D. I assume you mean pg_ctl / initdb?
Yes, sorry, been too fast.
Ok. Updated patch that includes this change attached. I also changed
the tar directory from -t to -T, for consistency.
It also includes the change to take -h host, -U user, -w/-W for
password, and -p port, instead of a conninfo string.
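For reference, here is a minimal sketch of what that switch can look like on the libpq side: rather than pasting user-supplied values into a single conninfo string (which would need quoting/escaping), the individual options are collected into the keyword/value arrays that PQconnectdbParams() accepts. The helper name is hypothetical (the patch's actual GetConnection() may differ), and using "replication" as the dbname keyword is an assumption about how the walsender protocol is selected:

```c
#include <stdio.h>
#include <stddef.h>

/*
 * Hypothetical sketch: build the NULL-terminated keyword/value arrays for
 * PQconnectdbParams() from the individual -h/-p/-U style options.  The
 * variable names mirror the patch's globals (dbhost, dbport, dbuser).
 * Returns the number of parameters filled in.
 */
static int
build_conn_params(const char *dbhost, const char *dbport, const char *dbuser,
				  const char **keywords, const char **values)
{
	int			i = 0;

	keywords[i] = "dbname";		/* replication pseudo-database (assumption) */
	values[i++] = "replication";
	if (dbhost)
	{
		keywords[i] = "host";
		values[i++] = dbhost;
	}
	if (dbport)
	{
		keywords[i] = "port";
		values[i++] = dbport;
	}
	if (dbuser)
	{
		keywords[i] = "user";
		values[i++] = dbuser;
	}
	keywords[i] = NULL;			/* terminator required by libpq */
	values[i] = NULL;
	return i;
}
```

The arrays would then be handed to PQconnectdbParams(keywords, values, true), letting libpq itself apply PGHOST/PGPORT/PGUSER defaults for anything left unset.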
Another option, I think Heikki mentioned this on IM at some point, is
to do something like name it <oid>-<name>.tar. That would give us best
of both worlds?
Well I'd think we know the pg_tablespace columns' encoding, so the
problem might be the filesystem encodings, right? Well there's also the
Do we really? That's one of the global catalogs that don't really have
an encoding, isn't it?
option of creating <oid>.tar and have a symlink to it called <name>.tar
but that's pushing it. I don't think naming after OIDs is a good
service for users, but if that's all we can reasonably do…
Yeah, symlink seems to be making things way too complex. <oid>-<name>
seems like a reasonable compromise?
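To illustrate the <oid>-<name> idea, a hypothetical sketch of building such a file name while sidestepping the encoding/filesystem concerns above: the OID part stays authoritative for recovery, and any byte of the tablespace name outside a conservative allowed set is replaced. The helper name and the allowed character set are assumptions for illustration, not anything the patch defines:

```c
#include <stdio.h>
#include <ctype.h>

/*
 * Hypothetical sketch: format "<oid>-<name>.tar", replacing any byte of
 * the tablespace name that could be unsafe in a filename with '_'.
 * The OID prefix alone is still enough to untar into pg_tblspc manually.
 */
static void
tablespace_tar_name(unsigned int oid, const char *spcname,
					char *out, size_t outlen)
{
	char		safe[64];
	size_t		i;

	for (i = 0; spcname[i] != '\0' && i < sizeof(safe) - 1; i++)
	{
		unsigned char c = (unsigned char) spcname[i];

		/* conservative whitelist: alphanumerics, '_' and '-' */
		safe[i] = (isalnum(c) || c == '_' || c == '-') ? (char) c : '_';
	}
	safe[i] = '\0';
	snprintf(out, outlen, "%u-%s.tar", oid, safe);
}
```

So a tablespace named "my space/1" with OID 16385 would come out as 16385-my_space_1.tar, readable for users but still unambiguous for recovery.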
Will continue reviewing and post something more polished and
comprehensive next week — mainly wanted to see if you wanted to include
pg_ctl <command> in the patch already.
Ok, thanks.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Attachments:
pg_basebackup.patchtext/x-patch; charset=US-ASCII; name=pg_basebackup.patchDownload
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index db7c834..c14ae43 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -813,6 +813,16 @@ SELECT pg_stop_backup();
</para>
<para>
+ You can also use the <xref linkend="app-pgbasebackup"> tool to take
+ the backup, instead of manually copying the files. This tool will take
+ care of the <function>pg_start_backup()</>, copy and
+ <function>pg_stop_backup()</> steps automatically, and transfers the
+ backup over a regular <productname>PostgreSQL</productname> connection
+ using the replication protocol, instead of requiring filesystem level
+ access.
+ </para>
+
+ <para>
Some file system backup tools emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
When taking a base backup of an active database, this situation is normal
diff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml
index f40fa9d..c44d11e 100644
--- a/doc/src/sgml/ref/allfiles.sgml
+++ b/doc/src/sgml/ref/allfiles.sgml
@@ -160,6 +160,7 @@ Complete list of usable sgml source files in this directory.
<!entity dropuser system "dropuser.sgml">
<!entity ecpgRef system "ecpg-ref.sgml">
<!entity initdb system "initdb.sgml">
+<!entity pgBasebackup system "pg_basebackup.sgml">
<!entity pgConfig system "pg_config-ref.sgml">
<!entity pgControldata system "pg_controldata.sgml">
<!entity pgCtl system "pg_ctl-ref.sgml">
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
new file mode 100644
index 0000000..dc8e2f4
--- /dev/null
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -0,0 +1,313 @@
+<!--
+doc/src/sgml/ref/pg_basebackup.sgml
+PostgreSQL documentation
+-->
+
+<refentry id="app-pgbasebackup">
+ <refmeta>
+ <refentrytitle>pg_basebackup</refentrytitle>
+ <manvolnum>1</manvolnum>
+ <refmiscinfo>Application</refmiscinfo>
+ </refmeta>
+
+ <refnamediv>
+ <refname>pg_basebackup</refname>
+ <refpurpose>take a base backup of a <productname>PostgreSQL</productname> cluster</refpurpose>
+ </refnamediv>
+
+ <indexterm zone="app-pgbasebackup">
+ <primary>pg_basebackup</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+ <cmdsynopsis>
+ <command>pg_basebackup</command>
+ <arg rep="repeat"><replaceable>option</></arg>
+ </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+ <title>
+ Description
+ </title>
+ <para>
+ <application>pg_basebackup</application> is used to take base backups of
+ a running <productname>PostgreSQL</productname> database cluster. These
+ are taken without affecting other clients of the database, and can be used
+ both for point-in-time recovery (see <xref linkend="continuous-archiving">)
+ and as the starting point for a log shipping or streaming replication standby
+ server (see <xref linkend="warm-standby">).
+ </para>
+
+ <para>
+ <application>pg_basebackup</application> makes a binary copy of the database
+ cluster files, while making sure the system is put in and out of
+ backup mode automatically. Backups are always taken of the entire
+ database cluster; it is not possible to back up individual databases or
+ database objects. For individual database backups, a tool such as
+ <xref linkend="APP-PGDUMP"> must be used.
+ </para>
+
+ <para>
+ The backup is made over a regular <productname>PostgreSQL</productname>
+ connection, and uses the replication protocol. The connection must be
+ made with a user having <literal>REPLICATION</literal> permissions (see
+ <xref linkend="role-attributes">).
+ </para>
+
+ <para>
+ Only one backup can be concurrently active in
+ <productname>PostgreSQL</productname>, meaning that only one instance of
+ <application>pg_basebackup</application> can run at the same time
+ against a single database cluster.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>Options</title>
+
+ <para>
+ <variablelist>
+ <varlistentry>
+ <term><option>-D <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--pgdata=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to restore the base data directory to. When the cluster has
+ no additional tablespaces, the whole database will be placed in this
+ directory. If the cluster contains additional tablespaces, the main
+ data directory will be placed in this directory, but all other
+ tablespaces will be placed in the same absolute path as they have
+ on the server.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-l <replaceable class="parameter">label</replaceable></option></term>
+ <term><option>--label=<replaceable class="parameter">label</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets the label for the backup. If none is specified, a default value of
+ <literal>pg_basebackup base backup</literal> will be used.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-P</option></term>
+ <term><option>--progress</option></term>
+ <listitem>
+ <para>
+ Enables progress reporting. Turning this on will deliver an approximate
+ progress report during the backup. Since the database may change during
+ the backup, this is only an approximation and may not end at exactly
+ <literal>100%</literal>.
+ </para>
+ <para>
+ When this is enabled, the backup will start by enumerating the size of
+ the entire database, and then go back and send the actual contents.
+ This may make the backup take slightly longer, and in particular it
+ will take longer before the first data is sent.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-T <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--tardir=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to place tar format files in. When this is specified, the
+ backup will consist of a number of tar files, one for each tablespace
+ in the database, stored in this directory. The tar file for the main
+ data directory will be named <filename>base.tar</>, and all other
+ tablespaces will be named after the tablespace oid.
+ </para>
+ <para>
+ If the value <literal>-</> (dash) is specified as tar directory,
+ the tar contents will be written to standard output, suitable for
+ piping to for example <productname>gzip</>. This is only possible if
+ the cluster has no additional tablespaces.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-v</option></term>
+ <term><option>--verbose</option></term>
+ <listitem>
+ <para>
+ Enables verbose mode. Will output some extra steps during startup and
+ shutdown, as well as show the exact filename that is currently being
+ processed if progress reporting is also enabled.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-Z <replaceable class="parameter">level</replaceable></option></term>
+ <term><option>--compress=<replaceable class="parameter">level</replaceable></option></term>
+ <listitem>
+ <para>
+ Enables gzip compression of tar file output. Compression is only
+ available when generating tar files, and is not available when sending
+ output to standard output.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ The following command-line options control the database connection parameters.
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-h <replaceable class="parameter">host</replaceable></option></term>
+ <term><option>--host=<replaceable class="parameter">host</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the host name of the machine on which the server is
+ running. If the value begins with a slash, it is used as the
+ directory for the Unix domain socket. The default is taken
+ from the <envar>PGHOST</envar> environment variable, if set,
+ else a Unix domain socket connection is attempted.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-p <replaceable class="parameter">port</replaceable></option></term>
+ <term><option>--port=<replaceable class="parameter">port</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the TCP port or local Unix domain socket file
+ extension on which the server is listening for connections.
+ Defaults to the <envar>PGPORT</envar> environment variable, if
+ set, or a compiled-in default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-U <replaceable>username</replaceable></option></term>
+ <term><option>--username=<replaceable class="parameter">username</replaceable></option></term>
+ <listitem>
+ <para>
+ User name to connect as.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-w</></term>
+ <term><option>--no-password</></term>
+ <listitem>
+ <para>
+ Never issue a password prompt. If the server requires
+ password authentication and a password is not available by
+ other means such as a <filename>.pgpass</filename> file, the
+ connection attempt will fail. This option can be useful in
+ batch jobs and scripts where no user is present to enter a
+ password.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-W</option></term>
+ <term><option>--password</option></term>
+ <listitem>
+ <para>
+ Force <application>pg_basebackup</application> to prompt for a
+ password before connecting to a database.
+ </para>
+
+ <para>
+ This option is never essential, since
+ <application>pg_basebackup</application> will automatically prompt
+ for a password if the server demands password authentication.
+ However, <application>pg_basebackup</application> will waste a
+ connection attempt finding out that the server wants a password.
+ In some cases it is worth typing <option>-W</> to avoid the extra
+ connection attempt.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Other, less commonly used, parameters are also available:
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-V</></term>
+ <term><option>--version</></term>
+ <listitem>
+ <para>
+ Print the <application>pg_basebackup</application> version and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-?</></term>
+ <term><option>--help</></term>
+ <listitem>
+ <para>
+ Show help about <application>pg_basebackup</application> command line
+ arguments, and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Environment</title>
+
+ <para>
+ This utility, like most other <productname>PostgreSQL</> utilities,
+ uses the environment variables supported by <application>libpq</>
+ (see <xref linkend="libpq-envars">).
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Notes</title>
+
+ <para>
+ The backup will include all files in the data directory and tablespaces,
+ including the configuration files and any additional files placed in the
+ directory by third parties. Only regular files and directories are allowed
+ in the data directory; no symbolic links or special device files.
+ </para>
+
+ <para>
+ The way <productname>PostgreSQL</productname> manages tablespaces, the path
+ for all additional tablespaces must be identical whenever a backup is
+ restored. The main data directory, however, is relocatable to any location.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>See Also</title>
+
+ <simplelist type="inline">
+ <member><xref linkend="APP-PGDUMP"></member>
+ </simplelist>
+ </refsect1>
+
+</refentry>
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml
index 84babf6..6ee8e5b 100644
--- a/doc/src/sgml/reference.sgml
+++ b/doc/src/sgml/reference.sgml
@@ -202,6 +202,7 @@
&droplang;
&dropuser;
&ecpgRef;
+ &pgBasebackup;
&pgConfig;
&pgDump;
&pgDumpall;
diff --git a/src/bin/Makefile b/src/bin/Makefile
index c18c05c..3809412 100644
--- a/src/bin/Makefile
+++ b/src/bin/Makefile
@@ -14,7 +14,7 @@ top_builddir = ../..
include $(top_builddir)/src/Makefile.global
SUBDIRS = initdb pg_ctl pg_dump \
- psql scripts pg_config pg_controldata pg_resetxlog
+ psql scripts pg_config pg_controldata pg_resetxlog pg_basebackup
ifeq ($(PORTNAME), win32)
SUBDIRS+=pgevent
endif
diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile
new file mode 100644
index 0000000..ccb1502
--- /dev/null
+++ b/src/bin/pg_basebackup/Makefile
@@ -0,0 +1,38 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/bin/pg_basebackup
+#
+# Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/bin/pg_basebackup/Makefile
+#
+#-------------------------------------------------------------------------
+
+PGFILEDESC = "pg_basebackup - takes a streaming base backup of a PostgreSQL instance"
+PGAPPICON=win32
+
+subdir = src/bin/pg_basebackup
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS= pg_basebackup.o $(WIN32RES)
+
+all: pg_basebackup
+
+pg_basebackup: $(OBJS) | submake-libpq submake-libpgport
+ $(CC) $(CFLAGS) $(OBJS) $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+
+install: all installdirs
+ $(INSTALL_PROGRAM) pg_basebackup$(X) '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+installdirs:
+ $(MKDIR_P) '$(DESTDIR)$(bindir)'
+
+uninstall:
+ rm -f '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+clean distclean maintainer-clean:
+ rm -f pg_basebackup$(X) $(OBJS)
diff --git a/src/bin/pg_basebackup/nls.mk b/src/bin/pg_basebackup/nls.mk
new file mode 100644
index 0000000..760ee1d
--- /dev/null
+++ b/src/bin/pg_basebackup/nls.mk
@@ -0,0 +1,5 @@
+# src/bin/pg_basebackup/nls.mk
+CATALOG_NAME := pg_basebackup
+AVAIL_LANGUAGES :=
+GETTEXT_FILES := pg_basebackup.c
+GETTEXT_TRIGGERS:= _
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index 0000000..bb9dea0
--- /dev/null
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -0,0 +1,999 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_basebackup.c - receive a base backup using streaming replication protocol
+ *
+ * Author: Magnus Hagander <magnus@hagander.net>
+ *
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * src/bin/pg_basebackup.c
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+#include "libpq-fe.h"
+
+#include <unistd.h>
+#include <dirent.h>
+#include <sys/stat.h>
+
+#ifdef HAVE_LIBZ
+#include <zlib.h>
+#endif
+
+#include "getopt_long.h"
+
+
+/* Global options */
+static const char *progname;
+char *basedir = NULL;
+char *tardir = NULL;
+char *label = "pg_basebackup base backup";
+bool showprogress = false;
+int verbose = 0;
+int compresslevel = 0;
+char *dbhost = NULL;
+char *dbuser = NULL;
+char *dbport = NULL;
+int dbgetpassword = 0; /* 0=auto, -1=never, 1=always */
+
+/* Progress counters */
+static uint64 totalsize;
+static uint64 totaldone;
+static int tablespacecount;
+
+/* Function headers */
+static char *xstrdup(const char *s);
+static void *xmalloc0(int size);
+static void usage(void);
+static void verify_dir_is_empty_or_create(char *dirname);
+static void progress_report(int tablespacenum, char *fn);
+static PGconn *GetConnection(void);
+
+static void ReceiveTarFile(PGconn *conn, PGresult *res, int rownum);
+static void ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum);
+static void BaseBackup();
+
+#ifdef HAVE_LIBZ
+static const char *
+get_gz_error(gzFile *gzf)
+{
+ int errnum;
+ const char *errmsg;
+
+ errmsg = gzerror(gzf, &errnum);
+ if (errnum == Z_ERRNO)
+ return strerror(errno);
+ else
+ return errmsg;
+}
+#endif
+
+/*
+ * strdup() and malloc() replacements that print an error and exit
+ * if something goes wrong. They can never return NULL.
+ */
+static char *
+xstrdup(const char *s)
+{
+ char *result;
+
+ result = strdup(s);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ return result;
+}
+
+static void *
+xmalloc0(int size)
+{
+ void *result;
+
+ result = malloc(size);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ MemSet(result, 0, size);
+ return result;
+}
+
+static void
+usage(void)
+{
+ printf(_("%s takes base backups of running PostgreSQL servers\n\n"),
+ progname);
+ printf(_("Usage:\n"));
+ printf(_(" %s [OPTION]...\n"), progname);
+ printf(_("\nOptions:\n"));
+ printf(_(" -D, --pgdata=directory receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -P, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
+ printf(_("\nConnection options:\n"));
+ printf(_(" -h, --host=HOSTNAME database server host or socket directory\n"));
+ printf(_(" -p, --port=PORT database server port number\n"));
+ printf(_(" -U, --username=NAME connect as specified database user\n"));
+ printf(_(" -w, --no-password never prompt for password\n"));
+ printf(_(" -W, --password force password prompt (should happen automatically)\n"));
+ printf(_("\nOther options:\n"));
+ printf(_(" -?, --help show this help, then exit\n"));
+ printf(_(" -V, --version output version information, then exit\n"));
+ printf(_("\nReport bugs to <pgsql-bugs@postgresql.org>.\n"));
+}
+
+
+/*
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be given and the process ends.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)
+{
+ switch (pg_check_dir(dirname))
+ {
+ case 0:
+
+ /*
+ * Does not exist, so create
+ */
+ if (pg_mkdir_p(dirname, S_IRWXU) == -1)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ exit(1);
+ }
+ return;
+ case 1:
+
+ /*
+ * Exists, empty
+ */
+ return;
+ case 2:
+
+ /*
+ * Exists, not empty
+ */
+ fprintf(stderr,
+ _("%s: directory \"%s\" exists but is not empty\n"),
+ progname, dirname);
+ exit(1);
+ case -1:
+
+ /*
+ * Access problem
+ */
+ fprintf(stderr, _("%s: could not access directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ exit(1);
+ }
+}
+
+
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is enabled, also print the current file name.
+ */
+static void
+progress_report(int tablespacenum, char *fn)
+{
+ if (verbose)
+ fprintf(stderr,
+ INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces (%-30s)\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount, fn);
+ else
+ fprintf(stderr, INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount);
+}
+
+
+/*
+ * Receive a tar format file from the connection to the server, and write
+ * the data from this file directly into a tar file. If compression is
+ * enabled, the data will be compressed while written to the file.
+ *
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
+ *
+ * No attempt to inspect or validate the contents of the file is done.
+ */
+static void
+ReceiveTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char fn[MAXPGPATH];
+ char *copybuf = NULL;
+ FILE *tarfile = NULL;
+
+#ifdef HAVE_LIBZ
+ gzFile *ztarfile = NULL;
+#endif
+
+ if (PQgetisnull(res, rownum, 0))
+
+ /*
+ * Base tablespaces
+ */
+ if (strcmp(tardir, "-") == 0)
+ tarfile = stdout;
+ else
+ {
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar.gz", tardir);
+ ztarfile = gzopen(fn, "wb");
+ if (ztarfile != NULL &&
+ gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
+ exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar", tardir);
+ tarfile = fopen(fn, "wb");
+ }
+ }
+ else
+ {
+ /*
+ * Specific tablespace
+ */
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+ ztarfile = gzopen(fn, "wb");
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar", tardir, PQgetvalue(res, rownum, 0));
+ tarfile = fopen(fn, "wb");
+ }
+ }
+
+#ifdef HAVE_LIBZ
+ if (!tarfile && !ztarfile)
+#else
+ if (!tarfile)
+#endif
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ exit(1);
+ }
+
+ /*
+ * Get the COPY data stream
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)
+ {
+ /*
+ * End of chunk. Close file (but not stdout).
+ *
+ * Also, write two completely empty blocks at the end of the tar
+ * file, as required by some tar programs.
+ */
+ char zerobuf[1024];
+
+ MemSet(zerobuf, 0, sizeof(zerobuf));
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, zerobuf, sizeof(zerobuf)) != sizeof(zerobuf))
+ {
+ fprintf(stderr, _("%s: could not write to compressed file '%s': %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ exit(1);
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(zerobuf, sizeof(zerobuf), 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %s\n"),
+ progname, fn, strerror(errno));
+ exit(1);
+ }
+ }
+
+ if (strcmp(tardir, "-") != 0)
+ {
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ gzclose(ztarfile);
+#endif
+ if (tarfile != NULL)
+ fclose(tarfile);
+ }
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, copybuf, r) != r)
+ {
+ fprintf(stderr, _("%s: could not write to compressed file '%s': %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ exit(1);
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(copybuf, r, 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %s\n"),
+ progname, fn, strerror(errno));
+ exit(1);
+ }
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+ } /* while (1) */
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+/*
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported; no other kinds of special files.
+ *
+ * If the data is for the main data directory, it will be restored in the
+ * specified directory. If it's for another tablespace, it will be restored
+ * in the original directory, since relocation of tablespaces is not
+ * supported.
+ */
+static void
+ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char current_path[MAXPGPATH];
+ char fn[MAXPGPATH];
+ int current_len_left;
+ int current_padding;
+ char *copybuf = NULL;
+ FILE *file = NULL;
+
+ if (PQgetisnull(res, rownum, 0))
+ strcpy(current_path, basedir);
+ else
+ strcpy(current_path, PQgetvalue(res, rownum, 1));
+
+ /*
+ * Make sure we're unpacking into an empty directory
+ */
+ verify_dir_is_empty_or_create(current_path);
+
+ /*
+ * Get the COPY data
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+
+ if (r == -1)
+ {
+ /*
+ * End of chunk
+ */
+ if (file)
+ fclose(file);
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ if (file == NULL)
+ {
+#ifndef WIN32
+ mode_t filemode;
+#endif
+
+ /*
+ * No current file, so this must be the header for a new file
+ */
+ if (r != 512)
+ {
+ fprintf(stderr, _("%s: Invalid tar block header size: %i\n"),
+ progname, r);
+ exit(1);
+ }
+ totaldone += 512;
+
+ if (sscanf(copybuf + 124, "%11o", &current_len_left) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file size!\n"),
+ progname);
+ exit(1);
+ }
+
+ /* Set permissions on the file */
+ if (sscanf(&copybuf[100], "%07o ", &filemode) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file mode!\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * All files are padded up to 512 bytes
+ */
+ current_padding =
+ ((current_len_left + 511) & ~511) - current_len_left;
+
+ /*
+ * First part of header is zero terminated filename
+ */
+ snprintf(fn, sizeof(fn), "%s/%s", current_path, copybuf);
+ if (fn[strlen(fn) - 1] == '/')
+ {
+ /*
+ * Ends in a slash means directory or symlink to directory
+ */
+ if (copybuf[156] == '5')
+ {
+ /*
+ * Directory
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (mkdir(fn, S_IRWXU) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %m\n"),
+ progname, fn);
+ exit(1);
+ }
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on directory '%s': %m\n"),
+ progname, fn);
+#endif
+ }
+ else if (copybuf[156] == '2')
+ {
+ /*
+ * Symbolic link
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (symlink(&copybuf[157], fn) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create symbolic link from %s to %s: %m\n"),
+ progname, fn, &copybuf[157]);
+ exit(1);
+ }
+ }
+ else
+ {
+ fprintf(stderr, _("%s: unknown link indicator '%c'\n"),
+ progname, copybuf[156]);
+ exit(1);
+ }
+ continue; /* directory or link handled */
+ }
+
+ /*
+ * regular file
+ */
+ file = fopen(fn, "wb");
+ if (!file)
+ {
+ fprintf(stderr, _("%s: could not create file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on file '%s': %m\n"),
+ progname, fn);
+#endif
+
+ if (current_len_left == 0)
+ {
+ /*
+ * Done with this file, next one will be a new tar header
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* new file */
+ else
+ {
+ /*
+ * Continuing blocks in existing file
+ */
+ if (current_len_left == 0 && r == current_padding)
+ {
+ /*
+ * Received the padding block for this file, ignore it and
+ * close the file, then move on to the next tar header.
+ */
+ fclose(file);
+ file = NULL;
+ totaldone += r;
+ continue;
+ }
+
+ if (fwrite(copybuf, r, 1, file) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
+ progname, fn);
+ exit(1);
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+
+ current_len_left -= r;
+ if (current_len_left == 0 && current_padding == 0)
+ {
+ /*
+ * Received the last block, and there is no padding to be
+ * expected. Close the file and move on to the next tar
+ * header.
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* continuing data in existing file */
+ } /* loop over all data blocks */
+
+ if (file != NULL)
+ {
+ fprintf(stderr, _("%s: last file was never finished!\n"), progname);
+ exit(1);
+ }
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+
+static PGconn *
+GetConnection(void)
+{
+ PGconn *conn;
+ int argcount = 3; /* dbname, replication, password */
+ int i;
+ const char **keywords;
+ const char **values;
+ char *password = NULL;
+
+ if (dbhost)
+ argcount++;
+ if (dbuser)
+ argcount++;
+ if (dbport)
+ argcount++;
+
+ keywords = xmalloc0((argcount + 1) * sizeof(*keywords));
+ values = xmalloc0((argcount + 1) * sizeof(*values));
+
+ keywords[0] = "dbname";
+ values[0] = "replication";
+ keywords[1] = "replication";
+ values[1] = "true";
+ i = 2;
+ if (dbhost)
+ {
+ keywords[i] = "host";
+ values[i] = dbhost;
+ i++;
+ }
+ if (dbuser)
+ {
+ keywords[i] = "user";
+ values[i] = dbuser;
+ i++;
+ }
+ if (dbport)
+ {
+ keywords[i] = "port";
+ values[i] = dbport;
+ i++;
+ }
+
+ while (true)
+ {
+ if (dbgetpassword == 1)
+ {
+ /* Prompt for a password */
+ password = simple_prompt(_("Password: "), 100, false);
+ keywords[argcount - 1] = "password";
+ values[argcount - 1] = password;
+ }
+
+ conn = PQconnectdbParams(keywords, values, true);
+ if (password)
+ free(password);
+
+ if (PQstatus(conn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(conn) &&
+ dbgetpassword != -1)
+ {
+ dbgetpassword = 1; /* ask for password next time */
+ continue;
+ }
+
+ if (PQstatus(conn) != CONNECTION_OK)
+ {
+ fprintf(stderr, _("%s: could not connect to server: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ /* Connection ok! */
+ free(values);
+ free(keywords);
+ return conn;
+ }
+}
+
+static void
+BaseBackup()
+{
+ PGconn *conn;
+ PGresult *res;
+ char current_path[MAXPGPATH];
+ char escaped_label[MAXPGPATH];
+ int i;
+
+ /*
+ * Connect in replication mode to the server
+ */
+ conn = GetConnection();
+
+ PQescapeStringConn(conn, escaped_label, label, sizeof(escaped_label), &i);
+ snprintf(current_path, sizeof(current_path), "BASE_BACKUP LABEL '%s' %s",
+ escaped_label,
+ showprogress ? "PROGRESS" : "");
+
+ if (PQsendQuery(conn, current_path) == 0)
+ {
+ fprintf(stderr, _("%s: coult not start base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Get the header
+ */
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ fprintf(stderr, _("%s: could not initiate base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ PQfinish(conn);
+ exit(1);
+ }
+ if (PQntuples(res) < 1)
+ {
+ fprintf(stderr, _("%s: no data returned from server.\n"), progname);
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Sum up the total size, for progress reporting
+ */
+ totalsize = totaldone = 0;
+ tablespacecount = PQntuples(res);
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (showprogress)
+ totalsize += atol(PQgetvalue(res, i, 2));
+
+ /*
+ * Verify tablespace directories are empty. Don't bother with the first
+ * one, since it can be relocated, and it will be checked before we do
+ * anything anyway.
+ */
+ if (basedir != NULL && i > 0)
+ verify_dir_is_empty_or_create(PQgetvalue(res, i, 1));
+ }
+
+ /*
+ * When writing to stdout, require a single tablespace
+ */
+ if (tardir != NULL && strcmp(tardir, "-") == 0 && PQntuples(res) > 1)
+ {
+ fprintf(stderr, _("%s: can only write a single tablespace to stdout, database has %i.\n"),
+ progname, PQntuples(res));
+ PQfinish(conn);
+ exit(1);
+ }
+
+ /*
+ * Start receiving chunks
+ */
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (tardir != NULL)
+ ReceiveTarFile(conn, res, i);
+ else
+ ReceiveAndUnpackTarFile(conn, res, i);
+ } /* Loop over all tablespaces */
+
+ if (showprogress)
+ {
+ progress_report(PQntuples(res), "");
+ fprintf(stderr, "\n"); /* Need to move to next line */
+ }
+ PQclear(res);
+
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
+
+ /*
+ * End of copy data. Final result is already checked inside the loop.
+ */
+ PQfinish(conn);
+
+ if (verbose)
+ fprintf(stderr, "%s: base backup completed.\n", progname);
+}
+
+
+int
+main(int argc, char **argv)
+{
+ static struct option long_options[] = {
+ {"help", no_argument, NULL, '?'},
+ {"version", no_argument, NULL, 'V'},
+ {"pgdata", required_argument, NULL, 'D'},
+ {"tardir", required_argument, NULL, 'T'},
+ {"compress", required_argument, NULL, 'Z'},
+ {"label", required_argument, NULL, 'l'},
+ {"host", required_argument, NULL, 'h'},
+ {"port", required_argument, NULL, 'p'},
+ {"username", required_argument, NULL, 'U'},
+ {"no-password", no_argument, NULL, 'w'},
+ {"password", no_argument, NULL, 'W'},
+ {"verbose", no_argument, NULL, 'v'},
+ {"progress", no_argument, NULL, 'P'},
+ {NULL, 0, NULL, 0}
+ };
+ int c;
+
+ int option_index;
+
+ progname = get_progname(argv[0]);
+ set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));
+
+ if (argc > 1)
+ {
+ if (strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "--help") == 0 ||
+ strcmp(argv[1], "-?") == 0)
+ {
+ usage();
+ exit(0);
+ }
+ else if (strcmp(argv[1], "-V") == 0
+ || strcmp(argv[1], "--version") == 0)
+ {
+ puts("pg_basebackup (PostgreSQL) " PG_VERSION);
+ exit(0);
+ }
+ }
+
+ while ((c = getopt_long(argc, argv, "D:T:l:Z:h:p:U:wWvP",
+ long_options, &option_index)) != -1)
+ {
+ switch (c)
+ {
+ case 'D':
+ basedir = xstrdup(optarg);
+ break;
+ case 'T':
+ tardir = xstrdup(optarg);
+ break;
+ case 'l':
+ label = xstrdup(optarg);
+ break;
+ case 'Z':
+ compresslevel = atoi(optarg);
+ break;
+ case 'h':
+ dbhost = xstrdup(optarg);
+ break;
+ case 'p':
+ if (atoi(optarg) == 0)
+ {
+ fprintf(stderr, _("%s: invalid port number \"%s\""),
+ progname, optarg);
+ exit(1);
+ }
+ dbport = xstrdup(optarg);
+ break;
+ case 'U':
+ dbuser = xstrdup(optarg);
+ break;
+ case 'w':
+ dbgetpassword = -1;
+ break;
+ case 'W':
+ dbgetpassword = 1;
+ break;
+ case 'v':
+ verbose++;
+ break;
+ case 'P':
+ showprogress = true;
+ break;
+ default:
+
+ /*
+ * getopt_long already emitted a complaint
+ */
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+ }
+
+ /*
+ * Any non-option arguments?
+ */
+ if (optind < argc)
+ {
+ fprintf(stderr,
+ _("%s: too many command-line arguments (first is \"%s\")\n"),
+ progname, argv[optind + 1]);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Required arguments
+ */
+ if (basedir == NULL && tardir == NULL)
+ {
+ fprintf(stderr, _("%s: no target directory specified\n"), progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Mutually exclusive arguments
+ */
+ if (basedir != NULL && tardir != NULL)
+ {
+ fprintf(stderr,
+ _("%s: both directory mode and tar mode cannot be specified\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ if (basedir != NULL && compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: only tar mode backups can be compressed\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+#ifndef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
+#else
+ if (compresslevel > 0 && strcmp(tardir, "-") == 0)
+ {
+ fprintf(stderr,
+ _("%s: compression is not supported on standard output\n"),
+ progname);
+ exit(1);
+ }
+#endif
+
+ /*
+ * Verify directories
+ */
+ if (basedir)
+ verify_dir_is_empty_or_create(basedir);
+ else if (strcmp(tardir, "-") != 0)
+ verify_dir_is_empty_or_create(tardir);
+
+
+
+ BaseBackup();
+
+ return 0;
+}
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 29c3c77..40fb130 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -273,6 +273,8 @@ sub mkvcbuild
$initdb->AddLibrary('wsock32.lib');
$initdb->AddLibrary('ws2_32.lib');
+ my $pgbasebackup = AddSimpleFrontend('pg_basebackup', 1);
+
my $pgconfig = AddSimpleFrontend('pg_config');
my $pgcontrol = AddSimpleFrontend('pg_controldata');
Magnus Hagander <magnus@hagander.net> writes:
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
Well we have UNIQUE, btree (spcname), so maybe we can use that here?
We could, but that would make it more likely to run into encoding
issues and such - do we restrict what can be in a tablespace name?
No. Don't even think of going there --- we got rid of user-accessible
names in the filesystem years ago and we're not going back. Consider
CREATE TABLESPACE "/foo/bar" LOCATION '/foo/bar';
regards, tom lane
On Sun, Jan 16, 2011 at 18:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
Well we have UNIQUE, btree (spcname), so maybe we can use that here?
We could, but that would make it more likely to run into encoding
issues and such - do we restrict what can be in a tablespace name?
No. Don't even think of going there --- we got rid of user-accessible
names in the filesystem years ago and we're not going back. Consider
CREATE TABLESPACE "/foo/bar" LOCATION '/foo/bar';
Well, we'd try to name the file for that "<oid>-/foo/bar.tar", which I
guess would break badly, yes.
I guess we could normalize the tablespace name into [a-zA-Z0-9] or so,
which would still be useful for the majority of cases, I think?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes:
On Sun, Jan 16, 2011 at 18:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:
No. Don't even think of going there --- we got rid of user-accessible
names in the filesystem years ago and we're not going back. Consider
CREATE TABLESPACE "/foo/bar" LOCATION '/foo/bar';
Well, we'd try to name the file for that "<oid>-/foo/bar.tar", which I
guess would break badly, yes.
I guess we could normalize the tablespace name into [a-zA-Z0-9] or so,
which would still be useful for the majority of cases, I think?
Well if we're not using user names, there's no good choice except for
system name, and the one you're making up here isn't the "true" one…
Now I think the unfriendliness is around the fact that you need to
prepare (untar, unzip) and start a cluster from the backup to be able to
know what file contains what. Is it possible to offer a tool that lists
the logical objects contained into each tar file?
Maybe adding a special section at the beginning of each. That would be
logically like pg_dump "catalog", but implemented as a simple "noise"
file that you simply `cat` with some command.
Once more, I'm still unclear how important that is, but it's scratching an itch.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Magnus Hagander <magnus@hagander.net> writes:
Well, we'd try to name the file for that "<oid>-/foo/bar.tar", which I
guess would break badly, yes.
I guess we could normalize the tablespace name into [a-zA-Z0-9] or so,
which would still be useful for the majority of cases, I think?
Just stick with the OID. There's no reason that I can see to have
"friendly" names for these tarfiles --- in most cases, the DBA will
never even deal with them, no?
regards, tom lane
On Sun, Jan 16, 2011 at 18:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
Well, we'd try to name the file for that "<oid>-/foo/bar.tar", which I
guess would break badly, yes.
I guess we could normalize the tablespace name into [a-zA-Z0-9] or so,
which would still be useful for the majority of cases, I think?Just stick with the OID. There's no reason that I can see to have
"friendly" names for these tarfiles --- in most cases, the DBA will
never even deal with them, no?
No, this is the output mode where the DBA chooses to get the output in
the form of tarfiles. So if chosen, he will definitely deal with it.
When we unpack the tars right away to a directory, they have no name,
so that doesn't apply here.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes:
On Sun, Jan 16, 2011 at 18:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Just stick with the OID. There's no reason that I can see to have
"friendly" names for these tarfiles --- in most cases, the DBA will
never even deal with them, no?
No, this is the output mode where the DBA chooses to get the output in
the form of tarfiles. So if chosen, he will definitely deal with it.
Mph. How big a use-case has that got? Offhand I can't see a reason to
use it at all, ever. If you're trying to set up a clone you want the
files unpacked.
regards, tom lane
On Sun, Jan 16, 2011 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
On Sun, Jan 16, 2011 at 18:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Just stick with the OID. There's no reason that I can see to have
"friendly" names for these tarfiles --- in most cases, the DBA will
never even deal with them, no?No, this is the output mode where the DBA chooses to get the output in
the form of tarfiles. So if chosen, he will definitely deal with it.Mph. How big a use-case has that got? Offhand I can't see a reason to
use it at all, ever. If you're trying to set up a clone you want the
files unpacked.
Yes, but the tool isn't just for setting up a clone.
If you're doing a regular base backup, that's *not* for replication,
you might want them in files.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes:
If you're doing a regular base backup, that's *not* for replication,
you might want them in files.
+1
So, is that pg_restore -l idea feasible with your current tar format? I
guess that would translate to pg_basebackup -l <directory>|<oid>.tar.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sun, Jan 16, 2011 at 19:21, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote:
Magnus Hagander <magnus@hagander.net> writes:
If you're doing a regular base backup, that's *not* for replication,
you might want them in files.
+1
So, is that pg_restore -l idea feasible with your current tar format? I
guess that would translate to pg_basebackup -l <directory>|<oid>.tar.
Um, not easily if you want to translate it to names. Just like you
don't have access to the oid->name mapping without the server started.
The walsender can't read pg_class for example, so it can't generate
that mapping file.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Sun, Jan 16, 2011 at 11:31 PM, Magnus Hagander <magnus@hagander.net> wrote:
Ok. Updated patch that includes this change attached.
I could not apply the patch cleanly against the git master.
Do you know what the cause is?
$ patch -p1 -d. < /hoge/pg_basebackup.patch
patching file doc/src/sgml/backup.sgml
patching file doc/src/sgml/ref/allfiles.sgml
patching file doc/src/sgml/ref/pg_basebackup.sgml
patching file doc/src/sgml/reference.sgml
patching file src/bin/Makefile
patching file src/bin/pg_basebackup/Makefile
patching file src/bin/pg_basebackup/nls.mk
patching file src/bin/pg_basebackup/pg_basebackup.c
patch: **** malformed patch at line 1428: diff --git
a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Jan 17, 2011 9:16 AM, "Fujii Masao" <masao.fujii@gmail.com> wrote:
On Sun, Jan 16, 2011 at 11:31 PM, Magnus Hagander <magnus@hagander.net>
wrote:
Ok. Updated patch that includes this change attached.
I could not apply the patch cleanly against the git master.
Do you know what the cause is?
$ patch -p1 -d. < /hoge/pg_basebackup.patch
patching file doc/src/sgml/backup.sgml
patching file doc/src/sgml/ref/allfiles.sgml
patching file doc/src/sgml/ref/pg_basebackup.sgml
patching file doc/src/sgml/reference.sgml
patching file src/bin/Makefile
patching file src/bin/pg_basebackup/Makefile
patching file src/bin/pg_basebackup/nls.mk
patching file src/bin/pg_basebackup/pg_basebackup.c
patch: **** malformed patch at line 1428: diff --git
a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
Weird, no idea. Will have to look into that later - meanwhile you can grab
the branch tip from my github repo if you want to review it.
/Magnus
On Mon, Jan 17, 2011 at 5:44 PM, Magnus Hagander <magnus@hagander.net> wrote:
Weird, no idea. Will have to look into that later - meanwhile you can grab
the branch tip from my github repo if you want to review it.
Which repo should I grab? You seem to have many repos :)
http://git.postgresql.org/gitweb
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Mon, Jan 17, 2011 at 09:50, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 5:44 PM, Magnus Hagander <magnus@hagander.net> wrote:
Weird, no idea. Will have to look into that later - meanwhile you can grab
the branch tip from my github repo if you want to review it.
Which repo should I grab? You seem to have many repos :)
http://git.postgresql.org/gitweb
Oh, sorry about that. There is only one that contains postgresql though :P
http://github.com/mhagander/postgres, branch streaming_base.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes:
The walsender can't read pg_class for example, so it can't generate
that mapping file.
I don't see any way out here. So let's call <oid>.tar good enough for now…
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Mon, Jan 17, 2011 at 6:53 PM, Magnus Hagander <magnus@hagander.net> wrote:
On Mon, Jan 17, 2011 at 09:50, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 5:44 PM, Magnus Hagander <magnus@hagander.net> wrote:
Weird, no idea. Will have to look into that later - meanwhile you can grab
the branch tip from my github repo if you want to review it.Which repo should I grab? You seem to have many repos :)
http://git.postgresql.org/gitweb
Oh, sorry about that. There is only one that contains postgresql though :P
http://github.com/mhagander/postgres, branch streaming_base.
Thanks!
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Sat, 2011-01-15 at 19:10 +0100, Magnus Hagander wrote:
This patch creates pg_basebackup in bin/, being a client program for
the streaming base backup feature.
I think it's more or less done now. I've again split it out of
pg_streamrecv, because it had very little shared code with that
(basically just the PQconnectdb() wrapper).
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
Probably yes, for consistency. I have been thinking for a while,
however, that it would be very good if the tools also supported a
conninfo string, so you don't have to invent a new option for every new
connection option. psql already supports that.
Some other comments:
I had trouble at first interpreting the documentation. In particular,
where does the data come from, and where does it go to? -d speaks of
restoring, but I was just looking for making a backup, not restoring it.
Needs some clarification, and some complete examples. Also what happens
if -c, or -d and -t are omitted.
Downthread you say that this tool is also useful for making base backups
independent of replication functionality. Sounds good. But then the
documentation says that the connection must be with a user that has the
replication permission. Something is conceptually wrong here: why would
I have to grant replication permission just to take a base backup for
the purpose of making a backup?
On Mon, Jan 17, 2011 at 13:38, Peter Eisentraut <peter_e@gmx.net> wrote:
On lör, 2011-01-15 at 19:10 +0100, Magnus Hagander wrote:
This patch creates pg_basebackup in bin/, being a client program for
the streaming base backup feature.
I think it's more or less done now. I've again split it out of
pg_streamrecv, because it had very little shared code with that
(basically just the PQconnectdb() wrapper).
One thing I'm thinking about - right now the tool just takes -c
<conninfo> to connect to the database. Should it instead be taught to
take the connection parameters that for example pg_dump does - one for
each of host, port, user, password? (shouldn't be hard to do..)
Probably yes, for consistency. I have been thinking for a while,
however, that it would be very good if the tools also supported a
conninfo string, so you don't have to invent a new option for every new
connection option. psql already supports that.
libpq has an option to expand a connection string if it's passed in
the database name, it seems. The problem is that this is done on the
dbname parameter - won't work in pg_dump for example, without special
treatment, since the db name is used in the client there.
Some other comments:
I had trouble at first interpreting the documentation. In particular,
where does the data come from, and where does it go to? -d speaks of
restoring, but I was just looking for making a backup, not restoring it.
Needs some clarification, and some complete examples. Also what happens
if -c, or -d and -t are omitted.
You get an error. (not with -c anymore)
I'll look at adding some further clarifications to it. Concrete
suggestions from you or others are of course also appreciated :-)
Downthread you say that this tool is also useful for making base backups
independent of replication functionality. Sounds good. But then the
documentation says that the connection must be with a user that has the
replication permission. Something is conceptually wrong here: why would
I have to grant replication permission just to take a base backup for
the purpose of making a backup?
It uses the replication features for it. You also have to set
max_wal_senders > 0, which is in the replication section of the
postgresql.conf file.
The point I wanted to make downthread was that it's useful without
having a replication *slave*. But yes, you need the master.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 7:14 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
Oh, sorry about that. There is only one that contains postgresql though :P
http://github.com/mhagander/postgres, branch streaming_base.
Thanks!
Though I haven't seen the core part of the patch (i.e.,
ReceiveTarFile, etc..) yet,
here are my comments on the rest.
+ if (strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "--help") == 0 ||
+ strcmp(argv[1], "-?") == 0)
strcmp(argv[1], "-h") should be removed
+ printf(_(" -p, --progress show progress information\n"));
-p needs to be changed to -P
+ printf(_(" -D, --pgdata=directory receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -p, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
Can we list those options in alphabetical order as other tools do?
Like -f and -F option of pg_dump, it's more intuitive to provide one option for
output directory and that for format. Something like
-d directory
--dir=directory
-F format
--format=format
p
plain
t
tar
+ case 'p':
+ if (atoi(optarg) == 0)
+ {
+ fprintf(stderr, _("%s: invalid port number \"%s\""),
+ progname, optarg);
+ exit(1);
+ }
Shouldn't we emit an error message when the result of atoi is *less than* or
equal to 0? \n should be in the tail of the error message. Is this error check
really required here? IIRC libpq does. If it's required, atoi for compresslevel
should also be error-checked.
+ case 'v':
+ verbose++;
Why is the verbose defined as integer?
+ if (optind < argc)
+ {
+ fprintf(stderr,
+ _("%s: too many command-line arguments (first is \"%s\")\n"),
+ progname, argv[optind + 1]);
You need to reference to argv[optind] instead.
What about using PGDATA environment variable when no target directory is
specified?
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be give and the process ended.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)
Since verify_dir_is_empty_or_create can be called after the connection has
been established, it should call PQfinish before exit(1).
+ keywords[2] = "fallback_application_name";
+ values[2] = "pg_basebackup";
Using the progname variable seems better rather than the fixed word
"pg_basebackup".
+ if (dbgetpassword == 1)
+ {
+ /* Prompt for a password */
+ password = simple_prompt(_("Password: "), 100, false);
PQfinish should be called for the password retry case.
+ if (PQstatus(conn) != CONNECTION_OK)
+ {
+ fprintf(stderr, _("%s: could not connect to server: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
PQfinish seems required before exit(1).
+ if (PQsendQuery(conn, current_path) == 0)
+ {
+ fprintf(stderr, _("%s: coult not start base backup: %s\n"),
Typo: s/coult/could
+ /*
+ * Get the header
+ */
+ res = PQgetResult(conn);
After this, PQclear seems required before each exit(1) call.
+ if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
PQfinish seems required before exit(1).
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Mon, Jan 17, 2011 at 9:43 PM, Magnus Hagander <magnus@hagander.net> wrote:
Probably yes, for consistency. I have been thinking for a while,
however, that it would be very good if the tools also supported a
conninfo string, so you don't have to invent a new option for every new
connection option. psql already supports that.
libpq has an option to expand a connection string if it's passed in
the database name, it seems. The problem is that this is done on the
dbname parameter - won't work in pg_dump for example, without special
treatment, since the db name is used in the client there.
If conninfo is specified, you can just append the "dbname=replication"
into it and pass it to libpq as dbname. I don't think that supporting
conninfo is difficult.
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Mon, 2011-01-17 at 13:43 +0100, Magnus Hagander wrote:
Downthread you say that this tool is also useful for making base
backups independent of replication functionality. Sounds good. But then
the documentation says that the connection must be with a user that has
the replication permission. Something is conceptually wrong here: why
would I have to grant replication permission just to take a base backup
for the purpose of making a backup?
It uses the replication features for it. You also have to set
max_wal_senders > 0, which is in the replication section of the
postgresql.conf file.
The point I wanted to make downthread was that it's useful without
having a replication *slave*. But yes, you need the master.
Suggest calling this utility pg_replication_backup
and the other utility pg_replication_stream
It will be easier to explain the connection with replication.
--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
On Mon, Jan 17, 2011 at 14:30, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 7:14 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
Oh, sorry about that. There is only one that contains postgresql though :P
http://github.com/mhagander/postgres, branch streaming_base.
Thanks!
Though I haven't seen the core part of the patch (i.e.,
ReceiveTarFile, etc..) yet,
here are my comments on the rest.
+ if (strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "--help") == 0 ||
+ strcmp(argv[1], "-?") == 0)
strcmp(argv[1], "-h") should be removed
Oh, crap. From the addition of -h for host. oopsie.
+ printf(_(" -p, --progress show progress information\n"));
-p needs to be changed to -P
Indeed.
+ printf(_(" -D, --pgdata=directory receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -p, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
Can we list those options in alphabetical order as other tools do?
Certainly. But it makes more sense to have -D and -T next to each
other, I think - they'd end up far apart otherwise. Perhaps we need a
group that says "target"?
Like the -f and -F options of pg_dump, it's more intuitive to provide
one option for the output directory and another for the format.
Something like:

-d directory
--dir=directory

-F format
--format=format
    p    plain
    t    tar
That's another option. It would certainly make for more consistency -
probably a better idea.
+ case 'p':
+     if (atoi(optarg) == 0)
+     {
+         fprintf(stderr, _("%s: invalid port number \"%s\""),
+                 progname, optarg);
+         exit(1);
+     }

Shouldn't we emit an error message when the result of atoi is less than
or equal to 0? \n should be at the tail of the error message. Is this
error check really required here? IIRC libpq does it. If it's required,
the atoi for compresslevel should also be error-checked.
Yes on all.
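A strtol()-based check along these lines would reject non-numeric and
out-of-range values in one place (a sketch only; parse_port and its
range policy are illustrative, not code from the patch):

```c
#include <errno.h>
#include <stdlib.h>

/*
 * Return the port number, or -1 if str is not a pure decimal number
 * in the range 1..65535.  Unlike atoi(), this catches trailing junk
 * ("54x2"), overflow, and negative values.
 */
static int
parse_port(const char *str)
{
    char   *endptr;
    long    val;

    errno = 0;
    val = strtol(str, &endptr, 10);
    if (errno != 0 || endptr == str || *endptr != '\0')
        return -1;              /* not a number, or trailing garbage */
    if (val < 1 || val > 65535)
        return -1;              /* outside the valid TCP port range */
    return (int) val;
}
```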
+ case 'v':
+     verbose++;

Why is verbose defined as an integer?
I envisioned multiple levels of verbosity (which I had in
pg_streamrecv), where multiple -v's would add things.
+ if (optind < argc)
+ {
+     fprintf(stderr,
+             _("%s: too many command-line arguments (first is \"%s\")\n"),
+             progname, argv[optind + 1]);

You need to reference argv[optind] instead.
Check. Copy/paste:o.
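For the record: after getopt_long() finishes, optind indexes the first
non-option argument, so the first surplus argument is argv[optind], not
argv[optind + 1]. A toy illustration (hypothetical helper, not patch
code):

```c
#include <stddef.h>

/*
 * Return the first surplus command-line argument, or NULL if there is
 * none.  first_nonopt plays the role of getopt's optind after option
 * parsing: it already points at the first non-option argument.
 */
static const char *
first_extra_arg(int argc, char *argv[], int first_nonopt)
{
    return (first_nonopt < argc) ? argv[first_nonopt] : NULL;
}
```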
What about using PGDATA environment variable when no target directory is
specified?
Hmm. I don't really like that. I prefer requiring it to be specified.
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be given and the process ended.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)

Since verify_dir_is_empty_or_create can be called after the connection
has been established, it should call PQfinish before exit(1).
Yeah, that's a general thing - do we need to actually bother doing
that? At most exit() places I haven't bothered freeing memory or
closing the connection.
It's not just the PQclear() that you refer to further down - it's also
all the xstrdup()ed strings for example. Do we really need to care
about those before we do exit(1)? I think not.
Requiring PQfinish() might be more reasonable since it will give you a
log on the server if you don't, but I'm not convinced that's necessary
either?
<snip similar requirements>
+ keywords[2] = "fallback_application_name";
+ values[2] = "pg_basebackup";

Using the progname variable seems better rather than the fixed word
"pg_basebackup".
I don't think so - that turns into argv[0], which means that if you
use the full path of the program (/usr/local/pgsql/bin/pg_basebackup
for example), the entire path goes into fallback_application_name -
not just the program name.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 14:43, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 9:43 PM, Magnus Hagander <magnus@hagander.net> wrote:
Probably yes, for consistency. I have been thinking for a while,
however, that it would be very good if the tools also supported a
conninfo string, so you don't have to invent a new option for every new
connection option. psql already supports that.libpq has an option to expand a connection string if it's passed in
the database name, it seems. The problem is that this is done on the
dbname parameter - won't work in pg_dump for example, without special
treatment, since the db name is used in the client there.If conninfo is specified, you can just append the "dbname=replication"
into it and pass it to libpq as dbname. I don't think that supporting
conninfo is difficult.
Yeah, it's easy enough for pg_basebackup. But not for example for
pg_dump. (I meant for being able to use it more or less with zero
modification to the current code - it can certainly be adapted to be
able to deal with it)
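Appending the replication keyword to a user-supplied conninfo could
look roughly like this (a sketch with invented names; real code would
also have to handle a dbname already present in the user's string, and
quoting):

```c
#include <stdio.h>

/*
 * Build a conninfo string that forces a replication connection by
 * appending dbname=replication to whatever the user supplied.
 * Hypothetical helper for illustration only.
 */
static int
make_replication_conninfo(char *buf, size_t buflen, const char *user_conninfo)
{
    return snprintf(buf, buflen, "%s dbname=replication", user_conninfo);
}
```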
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 14:49, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 13:43 +0100, Magnus Hagander wrote:
Downthread you say that this tool is also useful for making base
backups independent of replication functionality. Sounds good. But then
the documentation says that the connection must be made with a user
that has the replication permission. Something is conceptually wrong
here: why would I have to grant replication permission just to take a
base backup for the purpose of making a backup?

It uses the replication features for it. You also have to set
max_wal_senders > 0, which is in the replication section of the
postgresql.conf file.

The point I wanted to make downthread was that it's useful without
having a replication *slave*. But yes, you need the master.

Suggest calling this utility pg_replication_backup
and the other utility pg_replication_stream

It will be easier to explain the connection with replication.
Hmm. I don't like those names at all :(
But that's just me - and I don't totally hate them. So I'm opening the
floor for other peoples votes :-)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, 2011-01-17 at 14:55 +0100, Magnus Hagander wrote:
It uses the replication features for it. You also have to set
max_wal_senders > 0, which is in the replication section of the
postgresql.conf file.

The point I wanted to make downthread was that it's useful without
having a replication *slave*. But yes, you need the master.

Suggest calling this utility pg_replication_backup
and the other utility pg_replication_stream

It will be easier to explain the connection with replication.
Hmm. I don't like those names at all :(
But that's just me - and I don't totally hate them. So I'm opening the
floor for other peoples votes :-)
No problem. My point is that we should look for a name that illustrates
the function more clearly. If Peter was confused, others will be also.
--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.

Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
?
--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
Magnus Hagander <magnus@hagander.net> writes:
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
What I like about streamrecv is it's fairly clear which end of the
connection it's supposed to be used on. I find "pg_basebackup"
quite lacking from that perspective, and the same for the names
above. Or are you proposing to merge the send and receive sides
into one executable?
regards, tom lane
On Mon, Jan 17, 2011 at 20:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)

What I like about streamrecv is it's fairly clear which end of the
connection it's supposed to be used on. I find "pg_basebackup"
quite lacking from that perspective, and the same for the names
above. Or are you proposing to merge the send and receive sides
into one executable?
No, the sending side is in walsender.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 10:50 PM, Magnus Hagander <magnus@hagander.net> wrote:
+ printf(_(" -D, --pgdata=directory   receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory   receive base backup into tar files\n"
+          "                          stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9       compress tar output\n"));
+ printf(_(" -l, --label=label        set backup label\n"));
+ printf(_(" -p, --progress           show progress information\n"));
+ printf(_(" -v, --verbose            output verbose messages\n"));

Can we list those options in alphabetical order as other tools do?
Certainly. But it makes more sense to have -D and -T next to each
other, I think - they'd end up far apart otherwise. Perhaps we need a
group that says "target"?
I agree with you if we end up choosing -D and -T for a target rather
than pg_dump-like options I proposed.
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be given and the process ended.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)

Since verify_dir_is_empty_or_create can be called after the connection
has been established, it should call PQfinish before exit(1).

Yeah, that's a general thing - do we need to actually bother doing
that? At most exit() places I haven't bothered freeing memory or
closing the connection.

It's not just the PQclear() that you refer to further down - it's also
all the xstrdup()ed strings for example. Do we really need to care
about those before we do exit(1)? I think not.
Probably true. The allocated memory would be freed at exit anyway.
Requiring PQfinish() might be more reasonable since it will give you a
log on the server if you don't, but I'm not convinced that's necessary
either?
At least it's required for each password-retry. Otherwise, previous
connections remain during backup. You can see this problem easily
by repeating the password input in pg_basebackup.
LOG: could not send data to client: Connection reset by peer
LOG: could not send data to client: Broken pipe
FATAL: base backup could not send data, aborting backup
As you said, if PQfinish is not called at exit(1), the above messages
would be output. Those messages look ugly and should be
suppressed whenever we *can*. Also I believe other tools would
do that.
+ keywords[2] = "fallback_application_name";
+ values[2] = "pg_basebackup";

Using the progname variable seems better rather than the fixed word
"pg_basebackup".

I don't think so - that turns into argv[0], which means that if you
use the full path of the program (/usr/local/pgsql/bin/pg_basebackup
for example), the entire path goes into fallback_application_name -
not just the program name.
No. get_progname extracts the actual name.
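Right - get_progname strips the directory part (and, on Windows, the
.exe suffix). The core idea, sketched here without the Windows handling
and without the strdup the real function does:

```c
#include <string.h>

/*
 * Minimal sketch of what get_progname does on Unix: return the last
 * path component of argv[0].  The real function also duplicates the
 * string and strips ".exe" on Windows builds.
 */
static const char *
progname_of(const char *argv0)
{
    const char *slash = strrchr(argv0, '/');

    return slash ? slash + 1 : argv0;
}
```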
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Mon, Jan 17, 2011 at 10:30 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
Though I haven't seen the core part of the patch (i.e.,
ReceiveTarFile, etc.) yet, here are my comments on the rest.

Here are some more comments:
When I untar the tar file taken by pg_basebackup, I got the following
messages:
$ tar xf base.tar
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors
Is this a bug? This happens only when I create $PGDATA by using
initdb -X (i.e., I relocated the pg_xlog directory elsewhere than
$PGDATA).
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res,
rownum, 0));
+ ztarfile = gzopen(fn, "wb");
Though I'm not familiar with zlib, isn't gzsetparams() required
here?
+#ifdef HAVE_LIBZ
+ if (!tarfile && !ztarfile)
+#else
+ if (!tarfile)
+#endif
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
Instead of strerror, get_gz_error seems required when using zlib.
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
The check for "!res" is not needed since PQresultStatus checks that.
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)
Since -1 from PQgetCopyData might indicate an error, in this case
we would need to call PQgetResult?
ReceiveTarFile seems refactorable by using GZWRITE and GZCLOSE
macros.
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
%m in fprintf is portable?
Can't you change '%s' to \"%s\" for consistency?
+ /*
+ * Make sure we're unpacking into an empty directory
+ */
+ verify_dir_is_empty_or_create(current_path);
Can pg_basebackup take a backup of $PGDATA including a tablespace
directory, without an error? The above code seems to prevent that....
+ if (compresslevel <= 0)
+ {
+ fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),
It's better to check "compresslevel > 9" here.
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is disabled, also print the current file name.
Typo: s/disabled/enabled
I request a new option which specifies whether pg_start_backup
executes an immediate checkpoint or not. Currently it always executes
an immediate one, but I'd like to run a smoothed one on a busy system.
What's your opinion?
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
2011/1/18 Fujii Masao <masao.fujii@gmail.com>:
On Mon, Jan 17, 2011 at 10:30 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
Though I haven't seen the core part of the patch (i.e.,
ReceiveTarFile, etc.) yet, here are my comments on the rest.

Here are some more comments:
When I untar the tar file taken by pg_basebackup, I got the following
messages:

$ tar xf base.tar
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors

Is this a bug? This happens only when I create $PGDATA by using
initdb -X (i.e., I relocated the pg_xlog directory elsewhere than
$PGDATA).

+ if (compresslevel > 0)
+ {
+     snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+     ztarfile = gzopen(fn, "wb");

Though I'm not familiar with zlib, isn't gzsetparams() required here?

+#ifdef HAVE_LIBZ
+ if (!tarfile && !ztarfile)
+#else
+ if (!tarfile)
+#endif
+ {
+     fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+             progname, fn, strerror(errno));

Instead of strerror, get_gz_error seems required when using zlib.
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
The check for "!res" is not needed since PQresultStatus checks that.
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)

Since -1 from PQgetCopyData might indicate an error, in this case
we would need to call PQgetResult?

ReceiveTarFile seems refactorable by using GZWRITE and GZCLOSE
macros.

+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
%m in fprintf is portable?
Can't you change '%s' to \"%s\" for consistency?
+ /*
+  * Make sure we're unpacking into an empty directory
+  */
+ verify_dir_is_empty_or_create(current_path);

Can pg_basebackup take a backup of $PGDATA including a tablespace
directory, without an error? The above code seems to prevent that....

+ if (compresslevel <= 0)
+ {
+     fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),

It's better to check "compresslevel > 9" here.

+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is disabled, also print the current file name.

Typo: s/disabled/enabled
I request a new option which specifies whether pg_start_backup
executes an immediate checkpoint or not. Currently it always executes
an immediate one, but I'd like to run a smoothed one on a busy system.
What's your opinion?
*If* it is possible, this is welcome; the checkpoint hit due to
pg_start_backup is visible even outside pg_basebackup.
(It syncs everything, which blows away the cache memory.)
--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
On Tue, Jan 18, 2011 at 03:14, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 10:50 PM, Magnus Hagander <magnus@hagander.net> wrote:
+ printf(_(" -D, --pgdata=directory   receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory   receive base backup into tar files\n"
+          "                          stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9       compress tar output\n"));
+ printf(_(" -l, --label=label        set backup label\n"));
+ printf(_(" -p, --progress           show progress information\n"));
+ printf(_(" -v, --verbose            output verbose messages\n"));

Can we list those options in alphabetical order as other tools do?

Certainly. But it makes more sense to have -D and -T next to each
other, I think - they'd end up far apart otherwise. Perhaps we need a
group that says "target"?

I agree with you if we end up choosing -D and -T for a target rather
than the pg_dump-like options I proposed.
Yeah. What do others think between those two options? -D/-T followed
by directory, or -D <dir> and -F <format>?
Requiring PQfinish() might be more reasonable since it will give you a
log on the server if you don't, but I'm not convinced that's necessary
either?

At least it's required for each password-retry. Otherwise, previous
connections remain during backup. You can see this problem easily
Oh yeah, I've put that one in my git branch already.
by repeating the password input in pg_basebackup.
LOG: could not send data to client: Connection reset by peer
LOG: could not send data to client: Broken pipe
FATAL: base backup could not send data, aborting backup

As you said, if PQfinish is not called at exit(1), the above messages
would be output. Those messages look ugly and should be
suppressed whenever we *can*. Also I believe other tools would
do that.
Yeah, agreed. I'll add that, shouldn't be too hard.
+ keywords[2] = "fallback_application_name";
+ values[2] = "pg_basebackup";

Using the progname variable seems better rather than the fixed word
"pg_basebackup".

I don't think so - that turns into argv[0], which means that if you
use the full path of the program (/usr/local/pgsql/bin/pg_basebackup
for example), the entire path goes into fallback_application_name -
not just the program name.

No. get_progname extracts the actual name.
Hmm. I see it does. I wonder what I did to make that not work.
Then I agree with the change :-)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Tue, Jan 18, 2011 at 10:49, Fujii Masao <masao.fujii@gmail.com> wrote:
On Mon, Jan 17, 2011 at 10:30 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
Though I haven't seen the core part of the patch (i.e.,
ReceiveTarFile, etc.) yet, here are my comments on the rest.

Here are some more comments:
Thanks! These are all good and useful comments!
When I untar the tar file taken by pg_basebackup, I got the following
messages:

$ tar xf base.tar
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors

Is this a bug? This happens only when I create $PGDATA by using
initdb -X (i.e., I relocated the pg_xlog directory elsewhere than
$PGDATA).
Interesting. What version of tar and what platform? I can't reproduce
that here...
It certainly is a bug, that should not happen.
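For what it's worth, GNU tar prints "obsolescent base-64 headers" when
a numeric header field doesn't parse as plain octal, so a bad size or
mtime field is a likely suspect. A well-formed ustar size field is 12
bytes of zero-padded octal, along these lines (an illustrative sketch,
not the patch's header code):

```c
#include <stdio.h>

/*
 * Format the 12-byte "size" field of a ustar header: 11 octal digits
 * plus a terminating NUL.  If the field contains garbage or binary
 * data instead, GNU tar falls back to interpreting it with its old
 * base-64/base-256 extensions and complains.
 */
static void
format_tar_size(char field[12], unsigned long size)
{
    snprintf(field, 12, "%011lo", size);
}
```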
+ if (compresslevel > 0)
+ {
+     snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+     ztarfile = gzopen(fn, "wb");

Though I'm not familiar with zlib, isn't gzsetparams() required here?
Uh. It certainly is! I clearly forgot it there...
+#ifdef HAVE_LIBZ
+ if (!tarfile && !ztarfile)
+#else
+ if (!tarfile)
+#endif
+ {
+     fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+             progname, fn, strerror(errno));

Instead of strerror, get_gz_error seems required when using zlib.
Indeed it is. I think it needs to be this:
#ifdef HAVE_LIBZ
if (compresslevel > 0 && !ztarfile)
{
/* Compression is in use */
fprintf(stderr, _("%s: could not create compressed file \"%s\": %s\n"),
progname, fn, get_gz_error(ztarfile));
exit(1);
}
else
#endif
{
/* Either no zlib support, or zlib support but compresslevel = 0 */
if (!tarfile)
{
fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
progname, fn, strerror(errno));
exit(1);
}
}
+ if (!res || PQresultStatus(res) != PGRES_COPY_OUT)
The check for "!res" is not needed since PQresultStatus checks that.
Hah. I still keep doing that from old habit. I know you've pointed
that out before, with libpqwalreceiver :-)
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)

Since -1 from PQgetCopyData might indicate an error, in this case
we would need to call PQgetResult?
Uh, -1 means end of data, no? -2 means error?
ReceiveTarFile seems refactorable by using GZWRITE and GZCLOSE
macros.
You mean the ones from pg_dump? I don't think so. We can't use
gzwrite() with compression level 0 on the tar output, because it will
still write a gz header. With pg_dump, that is ok because it's our
format, but with a .tar (without .gz) I don't think it is.
at least that's how I interpreted the function.
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
%m in fprintf is portable?
Hmm. I just assumed it was because we use it elsewhere, but I now see
we only really use it for ereport() stuff. Bottom line is, I don't
know - perhaps it needs to be changed to use strerror()?
Can't you change '%s' to \"%s\" for consistency?
Yeah, absolutely. Clearly I was way inconsistent, and it got worse
with some copy/paste :-(
+ /*
+  * Make sure we're unpacking into an empty directory
+  */
+ verify_dir_is_empty_or_create(current_path);

Can pg_basebackup take a backup of $PGDATA including a tablespace
directory, without an error? The above code seems to prevent that....
Uh, how do you mean it would prevent that? It requires that the
directory you're writing the tablespace to is empty or nonexistent,
but that shouldn't prevent a backup, no? It will prevent you from
overwriting things with your backup, but that's intentional - if you
don't need the old dir, just remove it.
+ if (compresslevel <= 0)
+ {
+     fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),

It's better to check "compresslevel > 9" here.
Agreed (well, check for both of them of course)
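Checking both ends of the range in one place could look like this
(sketch only; the helper name and the exact policy for 0 are invented,
not from the patch):

```c
#include <errno.h>
#include <stdlib.h>

/*
 * Parse a zlib compression level, accepting only 0..9.
 * Returns the level, or -1 on anything else (non-numeric input,
 * trailing junk, or an out-of-range value).
 */
static int
parse_compress_level(const char *str)
{
    char   *endptr;
    long    val;

    errno = 0;
    val = strtol(str, &endptr, 10);
    if (errno != 0 || endptr == str || *endptr != '\0')
        return -1;
    if (val < 0 || val > 9)
        return -1;
    return (int) val;
}
```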
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is disabled, also print the current file name.

Typo: s/disabled/enabled
Indeed.
I request a new option which specifies whether pg_start_backup
executes an immediate checkpoint or not. Currently it always executes
an immediate one, but I'd like to run a smoothed one on a busy system.
What's your opinion?
Yeah, that sounds like a good idea. Shouldn't be too hard to do (it
will require a backend patch as well, of course). Should we use "-f"
for fast? Though that may be an unfortunate overload of the usual use
case for -f, so maybe -c <fast|slow> for "checkpoint fast/slow"?
I've updated my git branch with the simple fixes, will get the bigger
ones in there as soon as I've done them.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Mon, Jan 17, 2011 at 16:27, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.

Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)

pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Tue, Jan 18, 2011 at 12:40, Magnus Hagander <magnus@hagander.net> wrote:
On Tue, Jan 18, 2011 at 10:49, Fujii Masao <masao.fujii@gmail.com> wrote:
Yeah, that sounds like a good idea. Shouldn't be too hard to do (it
will require a backend patch as well, of course). Should we use "-f"
for fast? Though that may be an unfortunate overload of the usual use
case for -f, so maybe -c <fast|slow> for "checkpoint fast/slow"?
Was easy, done with "-c <fast|slow>".
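Mapping the -c value to the two checkpoint modes is just a string
comparison along these lines (sketch; the enum and helper names are
invented, not the committed code):

```c
#include <string.h>

/* Checkpoint behavior requested via -c (illustrative names). */
typedef enum
{
    CHECKPOINT_FAST,            /* immediate checkpoint */
    CHECKPOINT_SLOW,            /* smoothed/spread checkpoint */
    CHECKPOINT_BAD              /* unrecognized argument */
} checkpoint_mode;

/* Map the -c argument to a checkpoint mode. */
static checkpoint_mode
parse_checkpoint_arg(const char *arg)
{
    if (strcmp(arg, "fast") == 0)
        return CHECKPOINT_FAST;
    if (strcmp(arg, "slow") == 0)
        return CHECKPOINT_SLOW;
    return CHECKPOINT_BAD;
}
```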
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Excerpts from Magnus Hagander's message of mar ene 18 08:40:50 -0300 2011:
On Tue, Jan 18, 2011 at 10:49, Fujii Masao <masao.fujii@gmail.com> wrote:
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
%m in fprintf is portable?
Hmm. I just assumed it was because we use it elsewhere, but I now see
we only really use it for ereport() stuff. Bottom line is, I don't
know - perhaps it needs to be changed to use strerror()?
Some libc's (such as glibc) know about %m, others presumably don't (it's
a GNU extension, according to my manpage). ereport does the expansion
by itself, see expand_fmt_string(). Probably just using strerror() is
the easiest.
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
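The portable pattern is to pass strerror() explicitly through a plain
%s, capturing errno before any other library call can clobber it. A
minimal sketch (the helper is illustrative, not the patch's code):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Portable replacement for the glibc-only %m conversion: the caller
 * saves errno, and we format the message with strerror() via %s.
 */
static int
format_write_error(char *buf, size_t buflen,
                   const char *progname, const char *filename, int errnum)
{
    return snprintf(buf, buflen, "%s: could not write to file \"%s\": %s\n",
                    progname, filename, strerror(errnum));
}
```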
On Tue, Jan 18, 2011 at 14:26, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
Excerpts from Magnus Hagander's message of mar ene 18 08:40:50 -0300 2011:
On Tue, Jan 18, 2011 at 10:49, Fujii Masao <masao.fujii@gmail.com> wrote:
+ fprintf(stderr, _("%s: could not write to file '%s': %m\n"),
%m in fprintf is portable?
Hmm. I just assumed it was because we use it elsewhere, but I now see
we only really use it for ereport() stuff. Bottom line is, I don't
know - perhaps it needs to be changed to use strerror()?

Some libc's (such as glibc) know about %m, others presumably don't (it's
a GNU extension, according to my manpage). ereport does the expansion
by itself, see expand_fmt_string(). Probably just using strerror() is
the easiest.
Ok, thanks for clarifying. I've updated to use strerror(). Guess it's
time for another patch, PFA :-)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Attachments:
pg_basebackup.patchtext/x-patch; charset=US-ASCII; name=pg_basebackup.patchDownload
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index db7c834..c14ae43 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -813,6 +813,16 @@ SELECT pg_stop_backup();
</para>
<para>
+ You can also use the <xref linkend="app-pgbasebackup"> tool to take
+ the backup, instead of manually copying the files. This tool will take
+ care of the <function>pg_start_backup()</>, copy and
+ <function>pg_stop_backup()</> steps automatically, and transfers the
+ backup over a regular <productname>PostgreSQL</productname> connection
+ using the replication protocol, instead of requiring filesystem level
+ access.
+ </para>
+
+ <para>
Some file system backup tools emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
When taking a base backup of an active database, this situation is normal
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 76c062f..73f26b4 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1460,7 +1460,7 @@ The commands accepted in walsender mode are:
</varlistentry>
<varlistentry>
- <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>]</term>
+ <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>] [<literal>FAST</literal>]</term>
<listitem>
<para>
Instructs the server to start streaming a base backup.
@@ -1496,6 +1496,15 @@ The commands accepted in walsender mode are:
</para>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><literal>FAST</></term>
+ <listitem>
+ <para>
+ Request a fast checkpoint.
+ </para>
+ </listitem>
+ </varlistentry>
</variablelist>
</para>
<para>
diff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml
index f40fa9d..c44d11e 100644
--- a/doc/src/sgml/ref/allfiles.sgml
+++ b/doc/src/sgml/ref/allfiles.sgml
@@ -160,6 +160,7 @@ Complete list of usable sgml source files in this directory.
<!entity dropuser system "dropuser.sgml">
<!entity ecpgRef system "ecpg-ref.sgml">
<!entity initdb system "initdb.sgml">
+<!entity pgBasebackup system "pg_basebackup.sgml">
<!entity pgConfig system "pg_config-ref.sgml">
<!entity pgControldata system "pg_controldata.sgml">
<!entity pgCtl system "pg_ctl-ref.sgml">
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
new file mode 100644
index 0000000..e86d8bf
--- /dev/null
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -0,0 +1,323 @@
+<!--
+doc/src/sgml/ref/pg_basebackup.sgml
+PostgreSQL documentation
+-->
+
+<refentry id="app-pgbasebackup">
+ <refmeta>
+ <refentrytitle>pg_basebackup</refentrytitle>
+ <manvolnum>1</manvolnum>
+ <refmiscinfo>Application</refmiscinfo>
+ </refmeta>
+
+ <refnamediv>
+ <refname>pg_basebackup</refname>
+ <refpurpose>take a base backup of a <productname>PostgreSQL</productname> cluster</refpurpose>
+ </refnamediv>
+
+ <indexterm zone="app-pgbasebackup">
+ <primary>pg_basebackup</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+ <cmdsynopsis>
+ <command>pg_basebackup</command>
+ <arg rep="repeat"><replaceable>option</></arg>
+ </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+ <title>
+ Description
+ </title>
+ <para>
+ <application>pg_basebackup</application> is used to take base backups of
+ a running <productname>PostgreSQL</productname> database cluster. These
+ are taken without affecting other clients to the database, and can be used
+ both for point-in-time recovery (see <xref linkend="continuous-archiving">)
+ and as the starting point for a log shipping or streaming replication standby
+ server (see <xref linkend="warm-standby">).
+ </para>
+
+ <para>
+ <application>pg_basebackup</application> makes a binary copy of the database
+ cluster files, while making sure the system is put in and
+ out of backup mode automatically. Backups are always taken of the entire
+ database cluster; it is not possible to back up individual databases or
+ database objects. For individual database backups, a tool such as
+ <xref linkend="APP-PGDUMP"> must be used.
+ </para>
+
+ <para>
+ The backup is made over a regular <productname>PostgreSQL</productname>
+ connection, and uses the replication protocol. The connection must be
+ made with a user having <literal>REPLICATION</literal> permissions (see
+ <xref linkend="role-attributes">).
+ </para>
+
+ <para>
+ Only one backup can be concurrently active in
+ <productname>PostgreSQL</productname>, meaning that only one instance of
+ <application>pg_basebackup</application> can run at the same time
+ against a single database cluster.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>Options</title>
+
+ <para>
+ <variablelist>
+ <varlistentry>
+ <term><option>-D <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--pgdata=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to write the base data directory to. When the cluster has
+ no additional tablespaces, the whole database will be placed in this
+ directory. If the cluster contains additional tablespaces, the main
+ data directory will be placed in this directory, but all other
+ tablespaces will be placed in the same absolute path as they have
+ on the server.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-l <replaceable class="parameter">label</replaceable></option></term>
+ <term><option>--label=<replaceable class="parameter">label</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets the label for the backup. If none is specified, a default value of
+ <literal>pg_basebackup base backup</literal> will be used.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-P</option></term>
+ <term><option>--progress</option></term>
+ <listitem>
+ <para>
+ Enables progress reporting. Turning this on will deliver an approximate
+ progress report during the backup. Since the database may change during
+ the backup, this is only an approximation and may not end at exactly
+ <literal>100%</literal>.
+ </para>
+ <para>
+ When this is enabled, the backup will start by enumerating the size of
+ the entire database, and then go back and send the actual contents.
+ This may make the backup take slightly longer, and in particular it
+ will take longer before the first data is sent.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-T <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--tardir=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to place tar format files in. When this is specified, the
+ backup will consist of a number of tar files, one for each tablespace
+ in the database, stored in this directory. The tar file for the main
+ data directory will be named <filename>base.tar</>, and all other
+ tablespaces will be named after the tablespace oid.
+ </para>
+ <para>
+ If the value <literal>-</> (dash) is specified as the tar directory,
+ the tar contents will be written to standard output, suitable for
+ piping to, for example, <productname>gzip</>. This is only possible if
+ the cluster has no additional tablespaces.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-c <replaceable class="parameter">fast|slow</replaceable></option></term>
+ <term><option>--checkpoint=<replaceable class="parameter">fast|slow</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets checkpoint mode to fast or slow (default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-v</option></term>
+ <term><option>--verbose</option></term>
+ <listitem>
+ <para>
+ Enables verbose mode. Will output some extra steps during startup and
+ shutdown, as well as show the exact filename that is currently being
+ processed if progress reporting is also enabled.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-Z <replaceable class="parameter">level</replaceable></option></term>
+ <term><option>--compress=<replaceable class="parameter">level</replaceable></option></term>
+ <listitem>
+ <para>
+ Enables gzip compression of tar file output. Compression is only
+ available when generating tar files, and is not available when sending
+ output to standard output.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ The following command-line options control the database connection parameters.
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-h <replaceable class="parameter">host</replaceable></option></term>
+ <term><option>--host=<replaceable class="parameter">host</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the host name of the machine on which the server is
+ running. If the value begins with a slash, it is used as the
+ directory for the Unix domain socket. The default is taken
+ from the <envar>PGHOST</envar> environment variable, if set,
+ else a Unix domain socket connection is attempted.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-p <replaceable class="parameter">port</replaceable></option></term>
+ <term><option>--port=<replaceable class="parameter">port</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the TCP port or local Unix domain socket file
+ extension on which the server is listening for connections.
+ Defaults to the <envar>PGPORT</envar> environment variable, if
+ set, or a compiled-in default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-U <replaceable>username</replaceable></option></term>
+ <term><option>--username=<replaceable class="parameter">username</replaceable></option></term>
+ <listitem>
+ <para>
+ User name to connect as.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-w</></term>
+ <term><option>--no-password</></term>
+ <listitem>
+ <para>
+ Never issue a password prompt. If the server requires
+ password authentication and a password is not available by
+ other means such as a <filename>.pgpass</filename> file, the
+ connection attempt will fail. This option can be useful in
+ batch jobs and scripts where no user is present to enter a
+ password.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-W</option></term>
+ <term><option>--password</option></term>
+ <listitem>
+ <para>
+ Force <application>pg_basebackup</application> to prompt for a
+ password before connecting to a database.
+ </para>
+
+ <para>
+ This option is never essential, since
+ <application>pg_basebackup</application> will automatically prompt
+ for a password if the server demands password authentication.
+ However, <application>pg_basebackup</application> will waste a
+ connection attempt finding out that the server wants a password.
+ In some cases it is worth typing <option>-W</> to avoid the extra
+ connection attempt.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Other, less commonly used, parameters are also available:
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-V</></term>
+ <term><option>--version</></term>
+ <listitem>
+ <para>
+ Print the <application>pg_basebackup</application> version and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-?</></term>
+ <term><option>--help</></term>
+ <listitem>
+ <para>
+ Show help about <application>pg_basebackup</application> command line
+ arguments, and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Environment</title>
+
+ <para>
+ This utility, like most other <productname>PostgreSQL</> utilities,
+ uses the environment variables supported by <application>libpq</>
+ (see <xref linkend="libpq-envars">).
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Notes</title>
+
+ <para>
+ The backup will include all files in the data directory and tablespaces,
+ including the configuration files and any additional files placed in the
+ directory by third parties. Only regular files and directories are allowed
+ in the data directory; symbolic links and special device files are not.
+ </para>
+
+ <para>
+ Because of the way <productname>PostgreSQL</productname> manages tablespaces, the path
+ for all additional tablespaces must be identical whenever a backup is
+ restored. The main data directory, however, is relocatable to any location.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>See Also</title>
+
+ <simplelist type="inline">
+ <member><xref linkend="APP-PGDUMP"></member>
+ </simplelist>
+ </refsect1>
+
+</refentry>
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml
index 84babf6..6ee8e5b 100644
--- a/doc/src/sgml/reference.sgml
+++ b/doc/src/sgml/reference.sgml
@@ -202,6 +202,7 @@
&droplang;
&dropuser;
&ecpgRef;
+ &pgBasebackup;
&pgConfig;
&pgDump;
&pgDumpall;
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index b4d5bbe..ee1b6ee 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -40,7 +40,7 @@ static void send_int8_string(StringInfoData *buf, int64 intval);
static void SendBackupHeader(List *tablespaces);
static void SendBackupDirectory(char *location, char *spcoid);
static void base_backup_cleanup(int code, Datum arg);
-static void perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir);
+static void perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir, bool fastcheckpoint);
typedef struct
{
@@ -67,9 +67,9 @@ base_backup_cleanup(int code, Datum arg)
* clobbered by longjmp" from stupider versions of gcc.
*/
static void
-perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir)
+perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir, bool fastcheckpoint)
{
- do_pg_start_backup(backup_label, true);
+ do_pg_start_backup(backup_label, fastcheckpoint);
PG_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);
{
@@ -135,7 +135,7 @@ perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir)
* pg_stop_backup() for the user.
*/
void
-SendBaseBackup(const char *backup_label, bool progress)
+SendBaseBackup(const char *backup_label, bool progress, bool fastcheckpoint)
{
DIR *dir;
MemoryContext backup_context;
@@ -168,7 +168,7 @@ SendBaseBackup(const char *backup_label, bool progress)
ereport(ERROR,
(errmsg("unable to open directory pg_tblspc: %m")));
- perform_base_backup(backup_label, progress, dir);
+ perform_base_backup(backup_label, progress, dir, fastcheckpoint);
FreeDir(dir);
diff --git a/src/backend/replication/repl_gram.y b/src/backend/replication/repl_gram.y
index 0ef33dd..e4f4c47 100644
--- a/src/backend/replication/repl_gram.y
+++ b/src/backend/replication/repl_gram.y
@@ -66,11 +66,12 @@ Node *replication_parse_result;
%token K_IDENTIFY_SYSTEM
%token K_LABEL
%token K_PROGRESS
+%token K_FAST
%token K_START_REPLICATION
%type <node> command
%type <node> base_backup start_replication identify_system
-%type <boolval> opt_progress
+%type <boolval> opt_progress opt_fast
%type <str> opt_label
%%
@@ -102,15 +103,16 @@ identify_system:
;
/*
- * BASE_BACKUP [LABEL <label>] [PROGRESS]
+ * BASE_BACKUP [LABEL <label>] [PROGRESS] [FAST]
*/
base_backup:
- K_BASE_BACKUP opt_label opt_progress
+ K_BASE_BACKUP opt_label opt_progress opt_fast
{
BaseBackupCmd *cmd = (BaseBackupCmd *) makeNode(BaseBackupCmd);
cmd->label = $2;
cmd->progress = $3;
+ cmd->fastcheckpoint = $4;
$$ = (Node *) cmd;
}
@@ -123,6 +125,9 @@ opt_label: K_LABEL SCONST { $$ = $2; }
opt_progress: K_PROGRESS { $$ = true; }
| /* EMPTY */ { $$ = false; }
;
+opt_fast: K_FAST { $$ = true; }
+ | /* EMPTY */ { $$ = false; }
+ ;
/*
* START_REPLICATION %X/%X
diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l
index 014a720..e6dfb04 100644
--- a/src/backend/replication/repl_scanner.l
+++ b/src/backend/replication/repl_scanner.l
@@ -57,6 +57,7 @@ quotestop {quote}
%%
BASE_BACKUP { return K_BASE_BACKUP; }
+FAST { return K_FAST; }
IDENTIFY_SYSTEM { return K_IDENTIFY_SYSTEM; }
LABEL { return K_LABEL; }
PROGRESS { return K_PROGRESS; }
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 0ad6804..14b43d8 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -402,7 +402,7 @@ HandleReplicationCommand(const char *cmd_string)
{
BaseBackupCmd *cmd = (BaseBackupCmd *) cmd_node;
- SendBaseBackup(cmd->label, cmd->progress);
+ SendBaseBackup(cmd->label, cmd->progress, cmd->fastcheckpoint);
/* Send CommandComplete and ReadyForQuery messages */
EndCommand("SELECT", DestRemote);
diff --git a/src/bin/Makefile b/src/bin/Makefile
index c18c05c..3809412 100644
--- a/src/bin/Makefile
+++ b/src/bin/Makefile
@@ -14,7 +14,7 @@ top_builddir = ../..
include $(top_builddir)/src/Makefile.global
SUBDIRS = initdb pg_ctl pg_dump \
- psql scripts pg_config pg_controldata pg_resetxlog
+ psql scripts pg_config pg_controldata pg_resetxlog pg_basebackup
ifeq ($(PORTNAME), win32)
SUBDIRS+=pgevent
endif
diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile
new file mode 100644
index 0000000..ccb1502
--- /dev/null
+++ b/src/bin/pg_basebackup/Makefile
@@ -0,0 +1,38 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/bin/pg_basebackup
+#
+# Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/bin/pg_basebackup/Makefile
+#
+#-------------------------------------------------------------------------
+
+PGFILEDESC = "pg_basebackup - takes a streaming base backup of a PostgreSQL instance"
+PGAPPICON=win32
+
+subdir = src/bin/pg_basebackup
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS= pg_basebackup.o $(WIN32RES)
+
+all: pg_basebackup
+
+pg_basebackup: $(OBJS) | submake-libpq submake-libpgport
+ $(CC) $(CFLAGS) $(OBJS) $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+
+install: all installdirs
+ $(INSTALL_PROGRAM) pg_basebackup$(X) '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+installdirs:
+ $(MKDIR_P) '$(DESTDIR)$(bindir)'
+
+uninstall:
+ rm -f '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+clean distclean maintainer-clean:
+ rm -f pg_basebackup$(X) $(OBJS)
diff --git a/src/bin/pg_basebackup/nls.mk b/src/bin/pg_basebackup/nls.mk
new file mode 100644
index 0000000..760ee1d
--- /dev/null
+++ b/src/bin/pg_basebackup/nls.mk
@@ -0,0 +1,5 @@
+# src/bin/pg_basebackup/nls.mk
+CATALOG_NAME := pg_basebackup
+AVAIL_LANGUAGES :=
+GETTEXT_FILES := pg_basebackup.c
+GETTEXT_TRIGGERS:= _
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index 0000000..6ecfc54
--- /dev/null
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -0,0 +1,1044 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_basebackup.c - receive a base backup using streaming replication protocol
+ *
+ * Author: Magnus Hagander <magnus@hagander.net>
+ *
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * src/bin/pg_basebackup/pg_basebackup.c
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+#include "libpq-fe.h"
+
+#include <unistd.h>
+#include <dirent.h>
+#include <sys/stat.h>
+
+#ifdef HAVE_LIBZ
+#include <zlib.h>
+#endif
+
+#include "getopt_long.h"
+
+
+/* Global options */
+static const char *progname;
+char *basedir = NULL;
+char *tardir = NULL;
+char *label = "pg_basebackup base backup";
+bool showprogress = false;
+int verbose = 0;
+int compresslevel = 0;
+bool fastcheckpoint = false;
+char *dbhost = NULL;
+char *dbuser = NULL;
+char *dbport = NULL;
+int dbgetpassword = 0; /* 0=auto, -1=never, 1=always */
+
+/* Progress counters */
+static uint64 totalsize;
+static uint64 totaldone;
+static int tablespacecount;
+
+/* Connection kept global so we can disconnect easily */
+static PGconn *conn = NULL;
+
+#define disconnect_and_exit(code) \
+ { \
+ if (conn != NULL) PQfinish(conn); \
+ exit(code); \
+ }
+
+/* Function headers */
+static char *xstrdup(const char *s);
+static void *xmalloc0(int size);
+static void usage(void);
+static void verify_dir_is_empty_or_create(char *dirname);
+static void progress_report(int tablespacenum, char *fn);
+static PGconn *GetConnection(void);
+
+static void ReceiveTarFile(PGconn *conn, PGresult *res, int rownum);
+static void ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum);
+static void BaseBackup(void);
+
+#ifdef HAVE_LIBZ
+static const char *
+get_gz_error(gzFile *gzf)
+{
+ int errnum;
+ const char *errmsg;
+
+ errmsg = gzerror(gzf, &errnum);
+ if (errnum == Z_ERRNO)
+ return strerror(errno);
+ else
+ return errmsg;
+}
+#endif
+
+/*
+ * strdup() and malloc() replacements that print an error and exit
+ * if something goes wrong. They never return NULL.
+ */
+static char *
+xstrdup(const char *s)
+{
+ char *result;
+
+ result = strdup(s);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ return result;
+}
+
+static void *
+xmalloc0(int size)
+{
+ void *result;
+
+ result = malloc(size);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ MemSet(result, 0, size);
+ return result;
+}
+
+
+static void
+usage(void)
+{
+ printf(_("%s takes base backups of running PostgreSQL servers\n\n"),
+ progname);
+ printf(_("Usage:\n"));
+ printf(_(" %s [OPTION]...\n"), progname);
+ printf(_("\nOptions:\n"));
+ printf(_(" -D, --pgdata=directory receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -c, --checkpoint=fast|slow\n"
+ " set fast or slow checkpointing\n"));
+ printf(_(" -P, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
+ printf(_("\nConnection options:\n"));
+ printf(_(" -h, --host=HOSTNAME database server host or socket directory\n"));
+ printf(_(" -p, --port=PORT database server port number\n"));
+ printf(_(" -U, --username=NAME connect as specified database user\n"));
+ printf(_(" -w, --no-password never prompt for password\n"));
+ printf(_(" -W, --password force password prompt (should happen automatically)\n"));
+ printf(_("\nOther options:\n"));
+ printf(_(" -?, --help show this help, then exit\n"));
+ printf(_(" -V, --version output version information, then exit\n"));
+ printf(_("\nReport bugs to <pgsql-bugs@postgresql.org>.\n"));
+}
+
+
+/*
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error is
+ * printed and the process exits.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)
+{
+ switch (pg_check_dir(dirname))
+ {
+ case 0:
+
+ /*
+ * Does not exist, so create
+ */
+ if (pg_mkdir_p(dirname, S_IRWXU) == -1)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ return;
+ case 1:
+
+ /*
+ * Exists, empty
+ */
+ return;
+ case 2:
+
+ /*
+ * Exists, not empty
+ */
+ fprintf(stderr,
+ _("%s: directory \"%s\" exists but is not empty\n"),
+ progname, dirname);
+ disconnect_and_exit(1);
+ case -1:
+
+ /*
+ * Access problem
+ */
+ fprintf(stderr, _("%s: could not access directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ disconnect_and_exit(1);
+ }
+}
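The three-way result handled above mirrors `pg_check_dir()` from src/port. As a rough stand-alone sketch (a hypothetical `dir_status()` helper, not the real port function):

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <errno.h>

/*
 * Rough equivalent of pg_check_dir(): returns 0 if the directory does
 * not exist, 1 if it exists and is empty, 2 if it exists and contains
 * entries, and -1 on any other error. Sketch only.
 */
static int
dir_status(const char *path)
{
	DIR		   *dir = opendir(path);
	struct dirent *de;
	int			result = 1;

	if (dir == NULL)
		return (errno == ENOENT) ? 0 : -1;

	while ((de = readdir(dir)) != NULL)
	{
		if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
			continue;
		result = 2;				/* found a real entry */
		break;
	}
	closedir(dir);
	return result;
}
```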
+
+
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is enabled, also print the current file name.
+ */
+static void
+progress_report(int tablespacenum, char *fn)
+{
+ if (verbose)
+ fprintf(stderr,
+ INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces (%-30s)\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount, fn);
+ else
+ fprintf(stderr, INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount);
+}
+
+
+/*
+ * Receive a tar format file from the connection to the server, and write
+ * the data from this file directly into a tar file. If compression is
+ * enabled, the data will be compressed while written to the file.
+ *
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
+ *
+ * No attempt is made to inspect or validate the contents of the file.
+ */
+static void
+ReceiveTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char fn[MAXPGPATH];
+ char *copybuf = NULL;
+ FILE *tarfile = NULL;
+
+#ifdef HAVE_LIBZ
+ gzFile *ztarfile = NULL;
+#endif
+
+ if (PQgetisnull(res, rownum, 0))
+ {
+ /*
+ * Base tablespaces
+ */
+ if (strcmp(tardir, "-") == 0)
+ tarfile = stdout;
+ else
+ {
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar.gz", tardir);
+ ztarfile = gzopen(fn, "wb");
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
+ disconnect_and_exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar", tardir);
+ tarfile = fopen(fn, "wb");
+ }
+ }
+ }
+ else
+ {
+ /*
+ * Specific tablespace
+ */
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+ ztarfile = gzopen(fn, "wb");
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
+ disconnect_and_exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar", tardir, PQgetvalue(res, rownum, 0));
+ tarfile = fopen(fn, "wb");
+ }
+ }
+
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0 && !ztarfile)
+ {
+ /* Compression is in use */
+ fprintf(stderr, _("%s: could not create compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ disconnect_and_exit(1);
+ }
+ else
+#endif
+ {
+ /* Either no zlib support, or zlib support but compresslevel = 0 */
+ if (!tarfile)
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+
+ /*
+ * Get the COPY data stream
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)
+ {
+ /*
+ * End of chunk. Close file (but not stdout).
+ *
+ * Also, write two completely empty blocks at the end of the tar
+ * file, as required by some tar programs.
+ */
+ char zerobuf[1024];
+
+ MemSet(zerobuf, 0, sizeof(zerobuf));
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, zerobuf, sizeof(zerobuf)) != sizeof(zerobuf))
+ {
+ fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(zerobuf, sizeof(zerobuf), 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+
+ if (strcmp(tardir, "-") != 0)
+ {
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ gzclose(ztarfile);
+#endif
+ if (tarfile != NULL)
+ fclose(tarfile);
+ }
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, copybuf, r) != r)
+ {
+ fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(copybuf, r, 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+ } /* while (1) */
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
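The 1024 zero bytes written above at end-of-stream are the two all-zero 512-byte blocks that conventionally terminate a tar archive. A stand-alone sketch (hypothetical `write_tar_eof()` helper):

```c
#include <stdio.h>
#include <string.h>

/*
 * Append the two all-zero 512-byte blocks that mark end-of-archive in
 * tar format, as ReceiveTarFile() does before closing the output.
 * Returns 0 on success, -1 on write failure. Sketch only.
 */
static int
write_tar_eof(FILE *f)
{
	char		zerobuf[1024];

	memset(zerobuf, 0, sizeof(zerobuf));
	if (fwrite(zerobuf, sizeof(zerobuf), 1, f) != 1)
		return -1;
	return 0;
}
```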
+
+/*
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
+ *
+ * If the data is for the main data directory, it will be restored in the
+ * specified directory. If it's for another tablespace, it will be restored
+ * in the original directory, since relocation of tablespaces is not
+ * supported.
+ */
+static void
+ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char current_path[MAXPGPATH];
+ char fn[MAXPGPATH];
+ int current_len_left;
+ int current_padding;
+ char *copybuf = NULL;
+ FILE *file = NULL;
+
+ if (PQgetisnull(res, rownum, 0))
+ strcpy(current_path, basedir);
+ else
+ strcpy(current_path, PQgetvalue(res, rownum, 1));
+
+ /*
+ * Make sure we're unpacking into an empty directory
+ */
+ verify_dir_is_empty_or_create(current_path);
+
+ /*
+ * Get the COPY data
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+
+ if (r == -1)
+ {
+ /*
+ * End of chunk
+ */
+ if (file)
+ fclose(file);
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ if (file == NULL)
+ {
+#ifndef WIN32
+ mode_t filemode;
+#endif
+
+ /*
+ * No current file, so this must be the header for a new file
+ */
+ if (r != 512)
+ {
+ fprintf(stderr, _("%s: invalid tar block header size: %i\n"),
+ progname, r);
+ disconnect_and_exit(1);
+ }
+ totaldone += 512;
+
+ if (sscanf(copybuf + 124, "%11o", &current_len_left) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file size!\n"),
+ progname);
+ disconnect_and_exit(1);
+ }
+
+ /* Set permissions on the file */
+ if (sscanf(&copybuf[100], "%07o ", &filemode) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file mode!\n"),
+ progname);
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * All files are padded up to 512 bytes
+ */
+ current_padding =
+ ((current_len_left + 511) & ~511) - current_len_left;
+
+ /*
+ * First part of header is zero terminated filename
+ */
+ snprintf(fn, sizeof(fn), "%s/%s", current_path, copybuf);
+ if (fn[strlen(fn) - 1] == '/')
+ {
+ /*
+ * Ends in a slash means directory or symlink to directory
+ */
+ if (copybuf[156] == '5')
+ {
+ /*
+ * Directory
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (mkdir(fn, S_IRWXU) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on directory \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+#endif
+ }
+ else if (copybuf[156] == '2')
+ {
+ /*
+ * Symbolic link
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (symlink(&copybuf[157], fn) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create symbolic link from %s to %s: %s\n"),
+ progname, fn, &copybuf[157], strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+ else
+ {
+ fprintf(stderr, _("%s: unknown link indicator \"%c\"\n"),
+ progname, copybuf[156]);
+ disconnect_and_exit(1);
+ }
+ continue; /* directory or link handled */
+ }
+
+ /*
+ * regular file
+ */
+ file = fopen(fn, "wb");
+ if (!file)
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+#endif
+
+ if (current_len_left == 0)
+ {
+ /*
+ * Done with this file, next one will be a new tar header
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* new file */
+ else
+ {
+ /*
+ * Continuing blocks in existing file
+ */
+ if (current_len_left == 0 && r == current_padding)
+ {
+ /*
+ * Received the padding block for this file, ignore it and
+ * close the file, then move on to the next tar header.
+ */
+ fclose(file);
+ file = NULL;
+ totaldone += r;
+ continue;
+ }
+
+ if (fwrite(copybuf, r, 1, file) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+
+ current_len_left -= r;
+ if (current_len_left == 0 && current_padding == 0)
+ {
+ /*
+ * Received the last block, and there is no padding to be
+ * expected. Close the file and move on to the next tar
+ * header.
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* continuing data in existing file */
+ } /* loop over all data blocks */
+
+ if (file != NULL)
+ {
+ fprintf(stderr, _("%s: last file was never finished!\n"), progname);
+ disconnect_and_exit(1);
+ }
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
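The offsets used above follow the ustar header layout: the file name starts at offset 0, the mode at 100, the size (11 octal digits) at 124, the type flag at 156, and the link target at 157. A minimal sketch of the size-and-padding arithmetic, using a hypothetical helper:

```c
#include <stdio.h>
#include <string.h>

/*
 * Parse the octal size field at offset 124 of a 512-byte tar header
 * and compute how many padding bytes follow the file data, since data
 * is always rounded up to a multiple of 512. Sketch only; no checksum
 * verification is performed.
 */
static int
tar_size_and_padding(const char *header, int *size, int *padding)
{
	if (sscanf(header + 124, "%11o", (unsigned int *) size) != 1)
		return -1;
	*padding = ((*size + 511) & ~511) - *size;
	return 0;
}
```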
+
+
+static PGconn *
+GetConnection(void)
+{
+ PGconn *tmpconn;
+ int argcount = 4; /* dbname, replication, fallback_app_name,
+ * password */
+ int i;
+ const char **keywords;
+ const char **values;
+ char *password = NULL;
+
+ if (dbhost)
+ argcount++;
+ if (dbuser)
+ argcount++;
+ if (dbport)
+ argcount++;
+
+ keywords = xmalloc0((argcount + 1) * sizeof(*keywords));
+ values = xmalloc0((argcount + 1) * sizeof(*values));
+
+ keywords[0] = "dbname";
+ values[0] = "replication";
+ keywords[1] = "replication";
+ values[1] = "true";
+ keywords[2] = "fallback_application_name";
+ values[2] = progname;
+ i = 3;
+ if (dbhost)
+ {
+ keywords[i] = "host";
+ values[i] = dbhost;
+ i++;
+ }
+ if (dbuser)
+ {
+ keywords[i] = "user";
+ values[i] = dbuser;
+ i++;
+ }
+ if (dbport)
+ {
+ keywords[i] = "port";
+ values[i] = dbport;
+ i++;
+ }
+
+ while (true)
+ {
+ if (dbgetpassword == 1)
+ {
+ /* Prompt for a password */
+ password = simple_prompt(_("Password: "), 100, false);
+ keywords[argcount - 1] = "password";
+ values[argcount - 1] = password;
+ }
+
+ tmpconn = PQconnectdbParams(keywords, values, true);
+ if (password)
+ free(password);
+
+ if (PQstatus(tmpconn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(tmpconn) &&
+ dbgetpassword != -1)
+ {
+ dbgetpassword = 1; /* ask for password next time */
+ PQfinish(tmpconn);
+ continue;
+ }
+
+ if (PQstatus(tmpconn) != CONNECTION_OK)
+ {
+ fprintf(stderr, _("%s: could not connect to server: %s\n"),
+ progname, PQerrorMessage(tmpconn));
+ exit(1);
+ }
+
+ /* Connection ok! */
+ free(values);
+ free(keywords);
+ return tmpconn;
+ }
+}
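GetConnection() relies on `PQconnectdbParams()` taking NULL-terminated parallel keyword/value arrays. A stripped-down sketch of that array construction (hypothetical `build_conn_params()`, omitting the password slot and libpq itself):

```c
#include <stdio.h>
#include <string.h>

/*
 * Build NULL-terminated parallel keyword/value arrays of the shape
 * PQconnectdbParams() expects, appending optional entries only when
 * they are set. Returns the number of parameters filled in.
 * Stand-alone sketch; the real code also handles user and password.
 */
static int
build_conn_params(const char *host, const char *port,
				  const char **keywords, const char **values)
{
	int			i = 0;

	keywords[i] = "dbname";
	values[i++] = "replication";
	keywords[i] = "replication";
	values[i++] = "true";
	if (host)
	{
		keywords[i] = "host";
		values[i++] = host;
	}
	if (port)
	{
		keywords[i] = "port";
		values[i++] = port;
	}
	keywords[i] = NULL;			/* terminator required by libpq */
	values[i] = NULL;
	return i;
}
```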
+
+static void
+BaseBackup(void)
+{
+ PGresult *res;
+ char current_path[MAXPGPATH];
+ char escaped_label[MAXPGPATH];
+ int i;
+
+ /*
+ * Connect in replication mode to the server
+ */
+ conn = GetConnection();
+
+ PQescapeStringConn(conn, escaped_label, label, sizeof(escaped_label), &i);
+ snprintf(current_path, sizeof(current_path), "BASE_BACKUP LABEL '%s' %s %s",
+ escaped_label,
+ showprogress ? "PROGRESS" : "",
+ fastcheckpoint ? "FAST" : "");
+
+ if (PQsendQuery(conn, current_path) == 0)
+ {
+ fprintf(stderr, _("%s: could not start base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Get the header
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ fprintf(stderr, _("%s: could not initiate base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+ if (PQntuples(res) < 1)
+ {
+ fprintf(stderr, _("%s: no data returned from server.\n"), progname);
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Sum up the total size, for progress reporting
+ */
+ totalsize = totaldone = 0;
+ tablespacecount = PQntuples(res);
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (showprogress)
+ totalsize += atol(PQgetvalue(res, i, 2));
+
+ /*
+ * Verify that tablespace directories are empty. Don't bother with the
+ * first one, since it can be relocated, and it will be checked before
+ * we do anything anyway.
+ */
+ if (basedir != NULL && i > 0)
+ verify_dir_is_empty_or_create(PQgetvalue(res, i, 1));
+ }
+
+ /*
+ * When writing to stdout, require a single tablespace
+ */
+ if (tardir != NULL && strcmp(tardir, "-") == 0 && PQntuples(res) > 1)
+ {
+ fprintf(stderr, _("%s: can only write a single tablespace to stdout, database has %d\n"),
+ progname, PQntuples(res));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Start receiving chunks
+ */
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (tardir != NULL)
+ ReceiveTarFile(conn, res, i);
+ else
+ ReceiveAndUnpackTarFile(conn, res, i);
+ } /* Loop over all tablespaces */
+
+ if (showprogress)
+ {
+ progress_report(PQntuples(res), "");
+ fprintf(stderr, "\n"); /* Need to move to next line */
+ }
+ PQclear(res);
+
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * End of copy data. Final result is already checked inside the loop.
+ */
+ PQfinish(conn);
+
+ if (verbose)
+ fprintf(stderr, "%s: base backup completed.\n", progname);
+}
+
+
+int
+main(int argc, char **argv)
+{
+ static struct option long_options[] = {
+ {"help", no_argument, NULL, '?'},
+ {"version", no_argument, NULL, 'V'},
+ {"pgdata", required_argument, NULL, 'D'},
+ {"tardir", required_argument, NULL, 'T'},
+ {"compress", required_argument, NULL, 'Z'},
+ {"label", required_argument, NULL, 'l'},
+ {"host", required_argument, NULL, 'h'},
+ {"port", required_argument, NULL, 'p'},
+ {"username", required_argument, NULL, 'U'},
+ {"no-password", no_argument, NULL, 'w'},
+ {"password", no_argument, NULL, 'W'},
+ {"verbose", no_argument, NULL, 'v'},
+ {"progress", no_argument, NULL, 'P'},
+ {NULL, 0, NULL, 0}
+ };
+ int c;
+
+ int option_index;
+
+ progname = get_progname(argv[0]);
+ set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));
+
+ if (argc > 1)
+ {
+ if (strcmp(argv[1], "--help") == 0 || strcmp(argv[1], "-?") == 0)
+ {
+ usage();
+ exit(0);
+ }
+ else if (strcmp(argv[1], "-V") == 0
+ || strcmp(argv[1], "--version") == 0)
+ {
+ puts("pg_basebackup (PostgreSQL) " PG_VERSION);
+ exit(0);
+ }
+ }
+
+ while ((c = getopt_long(argc, argv, "D:T:l:Z:c:h:p:U:wWvP",
+ long_options, &option_index)) != -1)
+ {
+ switch (c)
+ {
+ case 'D':
+ basedir = xstrdup(optarg);
+ break;
+ case 'T':
+ tardir = xstrdup(optarg);
+ break;
+ case 'l':
+ label = xstrdup(optarg);
+ break;
+ case 'Z':
+ compresslevel = atoi(optarg);
+ if (compresslevel <= 0 || compresslevel > 9)
+ {
+ fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ break;
+ case 'c':
+ if (strcasecmp(optarg, "fast") == 0)
+ fastcheckpoint = true;
+ else if (strcasecmp(optarg, "slow") == 0)
+ fastcheckpoint = false;
+ else
+ {
+ fprintf(stderr, _("%s: invalid checkpoint argument \"%s\", must be \"fast\" or \"slow\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ break;
+ case 'h':
+ dbhost = xstrdup(optarg);
+ break;
+ case 'p':
+ if (atoi(optarg) <= 0)
+ {
+ fprintf(stderr, _("%s: invalid port number \"%s\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ dbport = xstrdup(optarg);
+ break;
+ case 'U':
+ dbuser = xstrdup(optarg);
+ break;
+ case 'w':
+ dbgetpassword = -1;
+ break;
+ case 'W':
+ dbgetpassword = 1;
+ break;
+ case 'v':
+ verbose++;
+ break;
+ case 'P':
+ showprogress = true;
+ break;
+ default:
+
+ /*
+ * getopt_long already emitted a complaint
+ */
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+ }
+
+ /*
+ * Any non-option arguments?
+ */
+ if (optind < argc)
+ {
+ fprintf(stderr,
+ _("%s: too many command-line arguments (first is \"%s\")\n"),
+ progname, argv[optind]);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Required arguments
+ */
+ if (basedir == NULL && tardir == NULL)
+ {
+ fprintf(stderr, _("%s: no target directory specified\n"), progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Mutually exclusive arguments
+ */
+ if (basedir != NULL && tardir != NULL)
+ {
+ fprintf(stderr,
+ _("%s: both directory mode and tar mode cannot be specified\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ if (basedir != NULL && compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: only tar mode backups can be compressed\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+#ifndef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
+#else
+ if (compresslevel > 0 && strcmp(tardir, "-") == 0)
+ {
+ fprintf(stderr,
+ _("%s: compression is not supported on standard output\n"),
+ progname);
+ exit(1);
+ }
+#endif
+
+ /*
+ * Verify directories
+ */
+ if (basedir)
+ verify_dir_is_empty_or_create(basedir);
+ else if (strcmp(tardir, "-") != 0)
+ verify_dir_is_empty_or_create(tardir);
+
+
+
+ BaseBackup();
+
+ return 0;
+}
diff --git a/src/include/replication/basebackup.h b/src/include/replication/basebackup.h
index eb2e160..80f814b 100644
--- a/src/include/replication/basebackup.h
+++ b/src/include/replication/basebackup.h
@@ -12,6 +12,6 @@
#ifndef _BASEBACKUP_H
#define _BASEBACKUP_H
-extern void SendBaseBackup(const char *backup_label, bool progress);
+extern void SendBaseBackup(const char *backup_label, bool progress, bool fastcheckpoint);
#endif /* _BASEBACKUP_H */
diff --git a/src/include/replication/replnodes.h b/src/include/replication/replnodes.h
index 4f4a1a3..fc81414 100644
--- a/src/include/replication/replnodes.h
+++ b/src/include/replication/replnodes.h
@@ -47,6 +47,7 @@ typedef struct BaseBackupCmd
NodeTag type;
char *label;
bool progress;
+ bool fastcheckpoint;
} BaseBackupCmd;
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 29c3c77..40fb130 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -273,6 +273,8 @@ sub mkvcbuild
$initdb->AddLibrary('wsock32.lib');
$initdb->AddLibrary('ws2_32.lib');
+ my $pgbasebackup = AddSimpleFrontend('pg_basebackup', 1);
+
my $pgconfig = AddSimpleFrontend('pg_config');
my $pgcontrol = AddSimpleFrontend('pg_controldata');
Excerpts from Magnus Hagander's message of mar ene 18 10:47:03 -0300 2011:
Ok, thanks for clarifying. I've updated to use strerror(). Guess it's
time for another patch, PFA :-)
Thanks ... Message nitpick:
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
pg_dump uses the following wording:
"WARNING: archive is compressed, but this installation does not support "
"compression -- no data will be available\n"
So perhaps yours should s/build/installation/
Also, in messages of this kind,
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
Shouldn't you also be emitting the gzerror()? ... oh I see you're
already doing it for most gz calls.
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
On Tue, Jan 18, 2011 at 15:49, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
Excerpts from Magnus Hagander's message of mar ene 18 10:47:03 -0300 2011:
Ok, thanks for clarifying. I've updated to use strerror(). Guess it's
time for another patch, PFA :-)
Thanks ... Message nitpick:
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
pg_dump uses the following wording:
"WARNING: archive is compressed, but this installation does not support "
"compression -- no data will be available\n"
So perhaps yours should s/build/installation/
That shows up when the *archive* is compressed, though? There are a
number of other cases that use build in the backend, such as:
src/backend/utils/misc/guc.c: errmsg("assertion checking is not
supported by this build")));
src/backend/utils/misc/guc.c: errmsg("Bonjour is not supported by
this build")));
src/backend/utils/misc/guc.c: errmsg("SSL is not supported by this
build")));
Also, in messages of this kind,
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
Shouldn't you also be emitting the gzerror()? ... oh I see you're
already doing it for most gz calls.
It's not clear from the zlib documentation I have that gzerror() works
after a gzsetparams(). Do you have any docs that say differently by
any chance?
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander <magnus@hagander.net> writes:
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It's still totally unclear what they do. How about "pg_receive_log"
etc?
regards, tom lane
On Tue, Jan 18, 2011 at 17:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It's still totally unclear what they do. How about "pg_receive_log"
etc?
I agree with whomever said using "wal" is better than "log" to be unambiguous.
So it'd be pg_receive_wal and pg_receive_base_backup then? Votes from
others? (it's easy to rename so far, so I'll keep plugging away under
the name pg_basebackup based on Fujii-sans comments until such a time
as we have a reasonable consensus :-)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
2011/1/18 Magnus Hagander <magnus@hagander.net>:
On Tue, Jan 18, 2011 at 17:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Magnus Hagander <magnus@hagander.net> writes:
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It's still totally unclear what they do. How about "pg_receive_log"
etc?
I agree with whomever said using "wal" is better than "log" to be unambiguous.
So it'd be pg_receive_wal and pg_receive_base_backup then? Votes from
others? (it's easy to rename so far, so I'll keep plugging away under
the name pg_basebackup based on Fujii-sans comments until such a time
as we have a reasonable consensus :-)
pg_receive_wal is good for me.
As for pg_receive_base_backup: in French, "base" is a shortcut for database. Here
we back up the whole cluster, so I would suggest
pg_receive_cluster(_backup ?)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support
On Tue, Jan 18, 2011 at 12:03 PM, Magnus Hagander <magnus@hagander.net> wrote:
So it'd be pg_receive_wal and pg_receive_base_backup then? Votes from
others? (it's easy to rename so far, so I'll keep plugging away under
the name pg_basebackup based on Fujii-sans comments until such a time
as we have a reasonable consensus :-)
I like pg_receive_wal. pg_receive_base_backup I would be inclined to
shorten to pg_basebackup or pg_streambackup, but I just work here.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Excerpts from Magnus Hagander's message of mar ene 18 11:53:55 -0300 2011:
On Tue, Jan 18, 2011 at 15:49, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
Excerpts from Magnus Hagander's message of mar ene 18 10:47:03 -0300 2011:
Ok, thanks for clarifying. I've updated to use strerror(). Guess it's
time for another patch, PFA :-)
Thanks ... Message nitpick:
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
pg_dump uses the following wording:
"WARNING: archive is compressed, but this installation does not support "
"compression -- no data will be available\n"
So perhaps yours should s/build/installation/
That shows up when the *archive* is compressed, though? There are a
number of other cases that use build in the backend, such as:
src/backend/utils/misc/guc.c: errmsg("assertion checking is not
supported by this build")));
src/backend/utils/misc/guc.c: errmsg("Bonjour is not supported by
this build")));
src/backend/utils/misc/guc.c: errmsg("SSL is not supported by this
build")));
Hmm. I think I'd s/build/installation/ on all those messages for
consistency.
Also, in messages of this kind,
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
Shouldn't you also be emitting the gzerror()? ... oh I see you're
already doing it for most gz calls.
It's not clear from the zlib documentation I have that gzerror() works
after a gzsetparams(). Do you have any docs that say differently by
any chance?
Ah, no. I was reading zlib.h, which is ambiguous as you say:
ZEXTERN int ZEXPORT gzsetparams OF((gzFile file, int level, int strategy));
/*
Dynamically update the compression level or strategy. See the description
of deflateInit2 for the meaning of these parameters.
gzsetparams returns Z_OK if success, or Z_STREAM_ERROR if the file was not
opened for writing.
*/
ZEXTERN const char * ZEXPORT gzerror OF((gzFile file, int *errnum));
/*
Returns the error message for the last error which occurred on the
given compressed file. errnum is set to zlib error number. If an
error occurred in the file system and not in the compression library,
errnum is set to Z_ERRNO and the application may consult errno
to get the exact error code.
*/
... but a quick look at the code says that it sets gz_stream->z_err
which is what gzerror returns:
int ZEXPORT gzsetparams (file, level, strategy)
gzFile file;
int level;
int strategy;
{
gz_stream *s = (gz_stream*)file;
if (s == NULL || s->mode != 'w') return Z_STREAM_ERROR;
/* Make room to allow flushing */
if (s->stream.avail_out == 0) {
s->stream.next_out = s->outbuf;
if (fwrite(s->outbuf, 1, Z_BUFSIZE, s->file) != Z_BUFSIZE) {
s->z_err = Z_ERRNO;
}
s->stream.avail_out = Z_BUFSIZE;
}
return deflateParams (&(s->stream), level, strategy);
}
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
On Tue, 2011-01-18 at 18:03 +0100, Magnus Hagander wrote:
So it'd be pg_receive_wal and pg_receive_base_backup then?
OK for me.
Maybe even pg_receive_wal_stream
Don't see any reason why command names can't be long. We have many
function names already that long.
--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
On Tue, Jan 18, 2011 at 19:20, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
Excerpts from Magnus Hagander's message of mar ene 18 11:53:55 -0300 2011:
On Tue, Jan 18, 2011 at 15:49, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
Also, in messages of this kind,
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i\n"),
+ progname, compresslevel);
Shouldn't you also be emitting the gzerror()? ... oh I see you're
already doing it for most gz calls.
It's not clear from the zlib documentation I have that gzerror() works
after a gzsetparams(). Do you have any docs that say differently by
any chance?
Ah, no. I was reading zlib.h, which is ambiguous as you say:
ZEXTERN int ZEXPORT gzsetparams OF((gzFile file, int level, int strategy));
/*
Dynamically update the compression level or strategy. See the description
of deflateInit2 for the meaning of these parameters.
gzsetparams returns Z_OK if success, or Z_STREAM_ERROR if the file was not
opened for writing.
*/
ZEXTERN const char * ZEXPORT gzerror OF((gzFile file, int *errnum));
/*
Returns the error message for the last error which occurred on the
given compressed file. errnum is set to zlib error number. If an
error occurred in the file system and not in the compression library,
errnum is set to Z_ERRNO and the application may consult errno
to get the exact error code.
*/
... but a quick look at the code says that it sets gz_stream->z_err
which is what gzerror returns:
int ZEXPORT gzsetparams (file, level, strategy)
gzFile file;
int level;
int strategy;
{
gz_stream *s = (gz_stream*)file;
if (s == NULL || s->mode != 'w') return Z_STREAM_ERROR;
/* Make room to allow flushing */
if (s->stream.avail_out == 0) {
s->stream.next_out = s->outbuf;
if (fwrite(s->outbuf, 1, Z_BUFSIZE, s->file) != Z_BUFSIZE) {
s->z_err = Z_ERRNO;
}
s->stream.avail_out = Z_BUFSIZE;
}
return deflateParams (&(s->stream), level, strategy);
}
}
Ah, ok. I've added the error code now, PFA. I also fixed an error in
the result-code handling that I broke in the last patch. The github branch
has been updated as usual.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Attachments:
pg_basebackup.patch (text/x-patch; charset=US-ASCII)
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index db7c834..c14ae43 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -813,6 +813,16 @@ SELECT pg_stop_backup();
</para>
<para>
+ You can also use the <xref linkend="app-pgbasebackup"> tool to take
+ the backup, instead of manually copying the files. This tool will take
+ care of the <function>pg_start_backup()</>, copy and
+ <function>pg_stop_backup()</> steps automatically, and transfers the
+ backup over a regular <productname>PostgreSQL</productname> connection
+ using the replication protocol, instead of requiring filesystem level
+ access.
+ </para>
+
+ <para>
Some file system backup tools emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
When taking a base backup of an active database, this situation is normal
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 76c062f..73f26b4 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1460,7 +1460,7 @@ The commands accepted in walsender mode are:
</varlistentry>
<varlistentry>
- <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>]</term>
+ <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>] [<literal>FAST</literal>]</term>
<listitem>
<para>
Instructs the server to start streaming a base backup.
@@ -1496,6 +1496,15 @@ The commands accepted in walsender mode are:
</para>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><literal>FAST</></term>
+ <listitem>
+ <para>
+ Request a fast checkpoint.
+ </para>
+ </listitem>
+ </varlistentry>
</variablelist>
</para>
<para>
diff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml
index f40fa9d..c44d11e 100644
--- a/doc/src/sgml/ref/allfiles.sgml
+++ b/doc/src/sgml/ref/allfiles.sgml
@@ -160,6 +160,7 @@ Complete list of usable sgml source files in this directory.
<!entity dropuser system "dropuser.sgml">
<!entity ecpgRef system "ecpg-ref.sgml">
<!entity initdb system "initdb.sgml">
+<!entity pgBasebackup system "pg_basebackup.sgml">
<!entity pgConfig system "pg_config-ref.sgml">
<!entity pgControldata system "pg_controldata.sgml">
<!entity pgCtl system "pg_ctl-ref.sgml">
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
new file mode 100644
index 0000000..e86d8bf
--- /dev/null
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -0,0 +1,323 @@
+<!--
+doc/src/sgml/ref/pg_basebackup.sgml
+PostgreSQL documentation
+-->
+
+<refentry id="app-pgbasebackup">
+ <refmeta>
+ <refentrytitle>pg_basebackup</refentrytitle>
+ <manvolnum>1</manvolnum>
+ <refmiscinfo>Application</refmiscinfo>
+ </refmeta>
+
+ <refnamediv>
+ <refname>pg_basebackup</refname>
+ <refpurpose>take a base backup of a <productname>PostgreSQL</productname> cluster</refpurpose>
+ </refnamediv>
+
+ <indexterm zone="app-pgbasebackup">
+ <primary>pg_basebackup</primary>
+ </indexterm>
+
+ <refsynopsisdiv>
+ <cmdsynopsis>
+ <command>pg_basebackup</command>
+ <arg rep="repeat"><replaceable>option</></arg>
+ </cmdsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+ <title>
+ Description
+ </title>
+ <para>
+ <application>pg_basebackup</application> is used to take base backups of
+ a running <productname>PostgreSQL</productname> database cluster. These
+ are taken without affecting other clients of the database, and can be used
+ both for point-in-time recovery (see <xref linkend="continuous-archiving">)
+ and as the starting point for a log shipping or streaming replication standby
+ server (see <xref linkend="warm-standby">).
+ </para>
+
+ <para>
+ <application>pg_basebackup</application> makes a binary copy of the database
+ cluster files, while making sure the system is put in and out of
+ backup mode automatically. Backups are always taken of the entire
+ database cluster; it is not possible to back up individual databases or
+ database objects. For individual database backups, a tool such as
+ <xref linkend="APP-PGDUMP"> must be used.
+ </para>
+
+ <para>
+ The backup is made over a regular <productname>PostgreSQL</productname>
+ connection, and uses the replication protocol. The connection must be
+ made with a user having <literal>REPLICATION</literal> permissions (see
+ <xref linkend="role-attributes">).
+ </para>
+
+ <para>
+ Only one backup can be concurrently active in
+ <productname>PostgreSQL</productname>, meaning that only one instance of
+ <application>pg_basebackup</application> can run at the same time
+ against a single database cluster.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>Options</title>
+
+ <para>
+ <variablelist>
+ <varlistentry>
+ <term><option>-D <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--pgdata=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to restore the base data directory to. When the cluster has
+ no additional tablespaces, the whole database will be placed in this
+ directory. If the cluster contains additional tablespaces, the main
+ data directory will be placed in this directory, but all other
+ tablespaces will be placed in the same absolute path as they have
+ on the server.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-l <replaceable class="parameter">label</replaceable></option></term>
+ <term><option>--label=<replaceable class="parameter">label</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets the label for the backup. If none is specified, a default value of
+ <literal>pg_basebackup base backup</literal> will be used.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-P</option></term>
+ <term><option>--progress</option></term>
+ <listitem>
+ <para>
+ Enables progress reporting. Turning this on will deliver an approximate
+ progress report during the backup. Since the database may change during
+ the backup, this is only an approximation and may not end at exactly
+ <literal>100%</literal>.
+ </para>
+ <para>
+ When this is enabled, the backup will start by enumerating the size of
+ the entire database, and then go back and send the actual contents.
+ This may make the backup take slightly longer, and in particular it
+ will take longer before the first data is sent.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-T <replaceable class="parameter">directory</replaceable></option></term>
+ <term><option>--tardir=<replaceable class="parameter">directory</replaceable></option></term>
+ <listitem>
+ <para>
+ Directory to place tar format files in. When this is specified, the
+ backup will consist of a number of tar files, one for each tablespace
+ in the database, stored in this directory. The tar file for the main
+ data directory will be named <filename>base.tar</>, and all other
+ tablespaces will be named after the tablespace OID.
+ </para>
+ <para>
+ If the value <literal>-</> (dash) is specified as tar directory,
+ the tar contents will be written to standard output, suitable for
+ piping to, for example, <productname>gzip</>. This is only possible if
+ the cluster has no additional tablespaces.
+ </para>
+ <para>
+ Only one of <literal>-D</> and <literal>-T</> can be specified.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-c <replaceable class="parameter">fast|slow</replaceable></option></term>
+ <term><option>--checkpoint=<replaceable class="parameter">fast|slow</replaceable></option></term>
+ <listitem>
+ <para>
+ Sets checkpoint mode to fast or slow (default).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-v</option></term>
+ <term><option>--verbose</option></term>
+ <listitem>
+ <para>
+ Enables verbose mode. Will output some extra steps during startup and
+ shutdown, as well as show the exact filename that is currently being
+ processed if progress reporting is also enabled.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-Z <replaceable class="parameter">level</replaceable></option></term>
+ <term><option>--compress=<replaceable class="parameter">level</replaceable></option></term>
+ <listitem>
+ <para>
+ Enables gzip compression of tar file output. Compression is only
+ available when generating tar files, and is not available when sending
+ output to standard output.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ The following command-line options control the database connection parameters.
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-h <replaceable class="parameter">host</replaceable></option></term>
+ <term><option>--host=<replaceable class="parameter">host</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the host name of the machine on which the server is
+ running. If the value begins with a slash, it is used as the
+ directory for the Unix domain socket. The default is taken
+ from the <envar>PGHOST</envar> environment variable, if set,
+ else a Unix domain socket connection is attempted.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-p <replaceable class="parameter">port</replaceable></option></term>
+ <term><option>--port=<replaceable class="parameter">port</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the TCP port or local Unix domain socket file
+ extension on which the server is listening for connections.
+ Defaults to the <envar>PGPORT</envar> environment variable, if
+ set, or a compiled-in default.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-U <replaceable>username</replaceable></option></term>
+ <term><option>--username=<replaceable class="parameter">username</replaceable></option></term>
+ <listitem>
+ <para>
+ User name to connect as.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-w</></term>
+ <term><option>--no-password</></term>
+ <listitem>
+ <para>
+ Never issue a password prompt. If the server requires
+ password authentication and a password is not available by
+ other means such as a <filename>.pgpass</filename> file, the
+ connection attempt will fail. This option can be useful in
+ batch jobs and scripts where no user is present to enter a
+ password.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-W</option></term>
+ <term><option>--password</option></term>
+ <listitem>
+ <para>
+ Force <application>pg_basebackup</application> to prompt for a
+ password before connecting to a database.
+ </para>
+
+ <para>
+ This option is never essential, since
+ <application>pg_basebackup</application> will automatically prompt
+ for a password if the server demands password authentication.
+ However, <application>pg_basebackup</application> will waste a
+ connection attempt finding out that the server wants a password.
+ In some cases it is worth typing <option>-W</> to avoid the extra
+ connection attempt.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Other, less commonly used, parameters are also available:
+
+ <variablelist>
+ <varlistentry>
+ <term><option>-V</></term>
+ <term><option>--version</></term>
+ <listitem>
+ <para>
+ Print the <application>pg_basebackup</application> version and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-?</></term>
+ <term><option>--help</></term>
+ <listitem>
+ <para>
+ Show help about <application>pg_basebackup</application> command line
+ arguments, and exit.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Environment</title>
+
+ <para>
+ This utility, like most other <productname>PostgreSQL</> utilities,
+ uses the environment variables supported by <application>libpq</>
+ (see <xref linkend="libpq-envars">).
+ </para>
+
+ </refsect1>
+
+ <refsect1>
+ <title>Notes</title>
+
+ <para>
+ The backup will include all files in the data directory and tablespaces,
+ including the configuration files and any additional files placed in the
+ directory by third parties. Only regular files and directories are allowed
+ in the data directory, no symbolic links or special device files.
+ </para>
+
+ <para>
+ Because of the way <productname>PostgreSQL</productname> manages tablespaces, the path
+ for all additional tablespaces must be identical whenever a backup is
+ restored. The main data directory, however, is relocatable to any location.
+ </para>
+ </refsect1>
+
+ <refsect1>
+ <title>See Also</title>
+
+ <simplelist type="inline">
+ <member><xref linkend="APP-PGDUMP"></member>
+ </simplelist>
+ </refsect1>
+
+</refentry>
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml
index 84babf6..6ee8e5b 100644
--- a/doc/src/sgml/reference.sgml
+++ b/doc/src/sgml/reference.sgml
@@ -202,6 +202,7 @@
&droplang;
&dropuser;
&ecpgRef;
+ &pgBasebackup;
&pgConfig;
&pgDump;
&pgDumpall;
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index b4d5bbe..ee1b6ee 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -40,7 +40,7 @@ static void send_int8_string(StringInfoData *buf, int64 intval);
static void SendBackupHeader(List *tablespaces);
static void SendBackupDirectory(char *location, char *spcoid);
static void base_backup_cleanup(int code, Datum arg);
-static void perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir);
+static void perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir, bool fastcheckpoint);
typedef struct
{
@@ -67,9 +67,9 @@ base_backup_cleanup(int code, Datum arg)
* clobbered by longjmp" from stupider versions of gcc.
*/
static void
-perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir)
+perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir, bool fastcheckpoint)
{
- do_pg_start_backup(backup_label, true);
+ do_pg_start_backup(backup_label, fastcheckpoint);
PG_ENSURE_ERROR_CLEANUP(base_backup_cleanup, (Datum) 0);
{
@@ -135,7 +135,7 @@ perform_base_backup(const char *backup_label, bool progress, DIR *tblspcdir)
* pg_stop_backup() for the user.
*/
void
-SendBaseBackup(const char *backup_label, bool progress)
+SendBaseBackup(const char *backup_label, bool progress, bool fastcheckpoint)
{
DIR *dir;
MemoryContext backup_context;
@@ -168,7 +168,7 @@ SendBaseBackup(const char *backup_label, bool progress)
ereport(ERROR,
(errmsg("unable to open directory pg_tblspc: %m")));
- perform_base_backup(backup_label, progress, dir);
+ perform_base_backup(backup_label, progress, dir, fastcheckpoint);
FreeDir(dir);
diff --git a/src/backend/replication/repl_gram.y b/src/backend/replication/repl_gram.y
index 0ef33dd..e4f4c47 100644
--- a/src/backend/replication/repl_gram.y
+++ b/src/backend/replication/repl_gram.y
@@ -66,11 +66,12 @@ Node *replication_parse_result;
%token K_IDENTIFY_SYSTEM
%token K_LABEL
%token K_PROGRESS
+%token K_FAST
%token K_START_REPLICATION
%type <node> command
%type <node> base_backup start_replication identify_system
-%type <boolval> opt_progress
+%type <boolval> opt_progress opt_fast
%type <str> opt_label
%%
@@ -102,15 +103,16 @@ identify_system:
;
/*
- * BASE_BACKUP [LABEL <label>] [PROGRESS]
+ * BASE_BACKUP [LABEL <label>] [PROGRESS] [FAST]
*/
base_backup:
- K_BASE_BACKUP opt_label opt_progress
+ K_BASE_BACKUP opt_label opt_progress opt_fast
{
BaseBackupCmd *cmd = (BaseBackupCmd *) makeNode(BaseBackupCmd);
cmd->label = $2;
cmd->progress = $3;
+ cmd->fastcheckpoint = $4;
$$ = (Node *) cmd;
}
@@ -123,6 +125,9 @@ opt_label: K_LABEL SCONST { $$ = $2; }
opt_progress: K_PROGRESS { $$ = true; }
| /* EMPTY */ { $$ = false; }
;
+opt_fast: K_FAST { $$ = true; }
+ | /* EMPTY */ { $$ = false; }
+ ;
/*
* START_REPLICATION %X/%X
diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l
index 014a720..e6dfb04 100644
--- a/src/backend/replication/repl_scanner.l
+++ b/src/backend/replication/repl_scanner.l
@@ -57,6 +57,7 @@ quotestop {quote}
%%
BASE_BACKUP { return K_BASE_BACKUP; }
+FAST { return K_FAST; }
IDENTIFY_SYSTEM { return K_IDENTIFY_SYSTEM; }
LABEL { return K_LABEL; }
PROGRESS { return K_PROGRESS; }
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 0ad6804..14b43d8 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -402,7 +402,7 @@ HandleReplicationCommand(const char *cmd_string)
{
BaseBackupCmd *cmd = (BaseBackupCmd *) cmd_node;
- SendBaseBackup(cmd->label, cmd->progress);
+ SendBaseBackup(cmd->label, cmd->progress, cmd->fastcheckpoint);
/* Send CommandComplete and ReadyForQuery messages */
EndCommand("SELECT", DestRemote);
diff --git a/src/bin/Makefile b/src/bin/Makefile
index c18c05c..3809412 100644
--- a/src/bin/Makefile
+++ b/src/bin/Makefile
@@ -14,7 +14,7 @@ top_builddir = ../..
include $(top_builddir)/src/Makefile.global
SUBDIRS = initdb pg_ctl pg_dump \
- psql scripts pg_config pg_controldata pg_resetxlog
+ psql scripts pg_config pg_controldata pg_resetxlog pg_basebackup
ifeq ($(PORTNAME), win32)
SUBDIRS+=pgevent
endif
diff --git a/src/bin/pg_basebackup/Makefile b/src/bin/pg_basebackup/Makefile
new file mode 100644
index 0000000..ccb1502
--- /dev/null
+++ b/src/bin/pg_basebackup/Makefile
@@ -0,0 +1,38 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/bin/pg_basebackup
+#
+# Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/bin/pg_basebackup/Makefile
+#
+#-------------------------------------------------------------------------
+
+PGFILEDESC = "pg_basebackup - takes a streaming base backup of a PostgreSQL instance"
+PGAPPICON=win32
+
+subdir = src/bin/pg_basebackup
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
+
+OBJS= pg_basebackup.o $(WIN32RES)
+
+all: pg_basebackup
+
+pg_basebackup: $(OBJS) | submake-libpq submake-libpgport
+ $(CC) $(CFLAGS) $(OBJS) $(libpq_pgport) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+
+install: all installdirs
+ $(INSTALL_PROGRAM) pg_basebackup$(X) '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+installdirs:
+ $(MKDIR_P) '$(DESTDIR)$(bindir)'
+
+uninstall:
+ rm -f '$(DESTDIR)$(bindir)/pg_basebackup$(X)'
+
+clean distclean maintainer-clean:
+ rm -f pg_basebackup$(X) $(OBJS)
diff --git a/src/bin/pg_basebackup/nls.mk b/src/bin/pg_basebackup/nls.mk
new file mode 100644
index 0000000..760ee1d
--- /dev/null
+++ b/src/bin/pg_basebackup/nls.mk
@@ -0,0 +1,5 @@
+# src/bin/pg_basebackup/nls.mk
+CATALOG_NAME := pg_basebackup
+AVAIL_LANGUAGES :=
+GETTEXT_FILES := pg_basebackup.c
+GETTEXT_TRIGGERS:= _
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index 0000000..3312d8e
--- /dev/null
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -0,0 +1,1047 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_basebackup.c - receive a base backup using streaming replication protocol
+ *
+ * Author: Magnus Hagander <magnus@hagander.net>
+ *
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * src/bin/pg_basebackup.c
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+#include "libpq-fe.h"
+
+#include <unistd.h>
+#include <dirent.h>
+#include <sys/stat.h>
+
+#ifdef HAVE_LIBZ
+#include <zlib.h>
+#endif
+
+#include "getopt_long.h"
+
+
+/* Global options */
+static const char *progname;
+char *basedir = NULL;
+char *tardir = NULL;
+char *label = "pg_basebackup base backup";
+bool showprogress = false;
+int verbose = 0;
+int compresslevel = 0;
+bool fastcheckpoint = false;
+char *dbhost = NULL;
+char *dbuser = NULL;
+char *dbport = NULL;
+int dbgetpassword = 0; /* 0=auto, -1=never, 1=always */
+
+/* Progress counters */
+static uint64 totalsize;
+static uint64 totaldone;
+static int tablespacecount;
+
+/* Connection kept global so we can disconnect easily */
+static PGconn *conn = NULL;
+
+#define disconnect_and_exit(code) \
+ { \
+ if (conn != NULL) PQfinish(conn); \
+ exit(code); \
+ }
+
+/* Function headers */
+static char *xstrdup(const char *s);
+static void *xmalloc0(int size);
+static void usage(void);
+static void verify_dir_is_empty_or_create(char *dirname);
+static void progress_report(int tablespacenum, char *fn);
+static PGconn *GetConnection(void);
+
+static void ReceiveTarFile(PGconn *conn, PGresult *res, int rownum);
+static void ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum);
+static void BaseBackup();
+
+#ifdef HAVE_LIBZ
+static const char *
+get_gz_error(gzFile *gzf)
+{
+ int errnum;
+ const char *errmsg;
+
+ errmsg = gzerror(gzf, &errnum);
+ if (errnum == Z_ERRNO)
+ return strerror(errno);
+ else
+ return errmsg;
+}
+#endif
+
+/*
+ * strdup() and malloc() replacements that print an error and exit
+ * if something goes wrong. They can never return NULL.
+ */
+static char *
+xstrdup(const char *s)
+{
+ char *result;
+
+ result = strdup(s);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ return result;
+}
+
+static void *
+xmalloc0(int size)
+{
+ void *result;
+
+ result = malloc(size);
+ if (!result)
+ {
+ fprintf(stderr, _("%s: out of memory\n"), progname);
+ exit(1);
+ }
+ MemSet(result, 0, size);
+ return result;
+}
+
+
+static void
+usage(void)
+{
+ printf(_("%s takes base backups of running PostgreSQL servers\n\n"),
+ progname);
+ printf(_("Usage:\n"));
+ printf(_(" %s [OPTION]...\n"), progname);
+ printf(_("\nOptions:\n"));
+ printf(_(" -D, --pgdata=directory receive base backup into directory\n"));
+ printf(_(" -T, --tardir=directory receive base backup into tar files\n"
+ " stored in specified directory\n"));
+ printf(_(" -Z, --compress=0-9 compress tar output\n"));
+ printf(_(" -l, --label=label set backup label\n"));
+ printf(_(" -c, --checkpoint=fast|slow\n"
+ " set fast or slow checkpoinging\n"));
+ printf(_(" -P, --progress show progress information\n"));
+ printf(_(" -v, --verbose output verbose messages\n"));
+ printf(_("\nConnection options:\n"));
+ printf(_(" -h, --host=HOSTNAME database server host or socket directory\n"));
+ printf(_(" -p, --port=PORT database server port number\n"));
+ printf(_(" -U, --username=NAME connect as specified database user\n"));
+ printf(_(" -w, --no-password never prompt for password\n"));
+ printf(_(" -W, --password force password prompt (should happen automatically)\n"));
+ printf(_("\nOther options:\n"));
+ printf(_(" -?, --help show this help, then exit\n"));
+ printf(_(" -V, --version output version information, then exit\n"));
+ printf(_("\nReport bugs to <pgsql-bugs@postgresql.org>.\n"));
+}
+
+
+/*
+ * Verify that the given directory exists and is empty. If it does not
+ * exist, it is created. If it exists but is not empty, an error will
+ * be given and the process ended.
+ */
+static void
+verify_dir_is_empty_or_create(char *dirname)
+{
+ /*
+ * ** * * * * * * * * * * * * * XXX: hack to allow restoring backups
+ * locally, remove before commit!!!
+ * =======================================
+ */
+ if (dirname[0] == '/')
+ {
+ dirname[0] = '_';
+ }
+
+ switch (pg_check_dir(dirname))
+ {
+ case 0:
+
+ /*
+ * Does not exist, so create
+ */
+ if (pg_mkdir_p(dirname, S_IRWXU) == -1)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ return;
+ case 1:
+
+ /*
+ * Exists, empty
+ */
+ return;
+ case 2:
+
+ /*
+ * Exists, not empty
+ */
+ fprintf(stderr,
+ _("%s: directory \"%s\" exists but is not empty\n"),
+ progname, dirname);
+ disconnect_and_exit(1);
+ case -1:
+
+ /*
+ * Access problem
+ */
+ fprintf(stderr, _("%s: could not access directory \"%s\": %s\n"),
+ progname, dirname, strerror(errno));
+ disconnect_and_exit(1);
+ }
+}
+
+
+/*
+ * Print a progress report based on the global variables. If verbose output
+ * is enabled, also print the current file name.
+ */
+static void
+progress_report(int tablespacenum, char *fn)
+{
+ if (verbose)
+ fprintf(stderr,
+ INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces (%-30s)\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount, fn);
+ else
+ fprintf(stderr, INT64_FORMAT "/" INT64_FORMAT " kB (%i%%) %i/%i tablespaces\r",
+ totaldone / 1024, totalsize,
+ (int) ((totaldone / 1024) * 100 / totalsize),
+ tablespacenum, tablespacecount);
+}
+
+
+/*
+ * Receive a tar format file from the connection to the server, and write
+ * the data from this file directly into a tar file. If compression is
+ * enabled, the data will be compressed while written to the file.
+ *
+ * The file will be named base.tar[.gz] if it's for the main data directory
+ * or <tablespaceoid>.tar[.gz] if it's for another tablespace.
+ *
+ * No attempt is made to inspect or validate the contents of the file.
+ */
+static void
+ReceiveTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char fn[MAXPGPATH];
+ char *copybuf = NULL;
+ FILE *tarfile = NULL;
+
+#ifdef HAVE_LIBZ
+ gzFile *ztarfile = NULL;
+#endif
+
+ if (PQgetisnull(res, rownum, 0))
+
+ /*
+ * Base tablespaces
+ */
+ if (strcmp(tardir, "-") == 0)
+ tarfile = stdout;
+ else
+ {
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar.gz", tardir);
+ ztarfile = gzopen(fn, "wb");
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i: %s\n"),
+ progname, compresslevel, get_gz_error(ztarfile));
+ disconnect_and_exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/base.tar", tardir);
+ tarfile = fopen(fn, "wb");
+ }
+ }
+ else
+ {
+ /*
+ * Specific tablespace
+ */
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar.gz", tardir, PQgetvalue(res, rownum, 0));
+ ztarfile = gzopen(fn, "wb");
+ if (gzsetparams(ztarfile, compresslevel, Z_DEFAULT_STRATEGY) != Z_OK)
+ {
+ fprintf(stderr, _("%s: could not set compression level %i: %s\n"),
+ progname, compresslevel, get_gz_error(ztarfile));
+ disconnect_and_exit(1);
+ }
+ }
+ else
+#endif
+ {
+ snprintf(fn, sizeof(fn), "%s/%s.tar", tardir, PQgetvalue(res, rownum, 0));
+ tarfile = fopen(fn, "wb");
+ }
+ }
+
+#ifdef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ if (!ztarfile)
+ {
+ /* Compression is in use */
+ fprintf(stderr, _("%s: could not create compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ disconnect_and_exit(1);
+ }
+ }
+ else
+#endif
+ {
+ /* Either no zlib support, or zlib support but compresslevel = 0 */
+ if (!tarfile)
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+
+ /*
+ * Get the COPY data stream
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)
+ {
+ /*
+ * End of chunk. Close file (but not stdout).
+ *
+ * Also, write two completely empty blocks at the end of the tar
+ * file, as required by some tar programs.
+ */
+ char zerobuf[1024];
+
+ MemSet(zerobuf, 0, sizeof(zerobuf));
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, zerobuf, sizeof(zerobuf)) != sizeof(zerobuf))
+ {
+ fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(zerobuf, sizeof(zerobuf), 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+
+ if (strcmp(tardir, "-") != 0)
+ {
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ gzclose(ztarfile);
+#endif
+ if (tarfile != NULL)
+ fclose(tarfile);
+ }
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+#ifdef HAVE_LIBZ
+ if (ztarfile != NULL)
+ {
+ if (gzwrite(ztarfile, copybuf, r) != r)
+ {
+ fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"),
+ progname, fn, get_gz_error(ztarfile));
+ }
+ }
+ else
+#endif
+ {
+ if (fwrite(copybuf, r, 1, tarfile) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+ } /* while (1) */
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+/*
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
+ *
+ * If the data is for the main data directory, it will be restored in the
+ * specified directory. If it's for another tablespace, it will be restored
+ * in the original directory, since relocation of tablespaces is not
+ * supported.
+ */
+static void
+ReceiveAndUnpackTarFile(PGconn *conn, PGresult *res, int rownum)
+{
+ char current_path[MAXPGPATH];
+ char fn[MAXPGPATH];
+ int current_len_left;
+ int current_padding;
+ char *copybuf = NULL;
+ FILE *file = NULL;
+
+ if (PQgetisnull(res, rownum, 0))
+ strcpy(current_path, basedir);
+ else
+ strcpy(current_path, PQgetvalue(res, rownum, 1));
+
+ /*
+ * Make sure we're unpacking into an empty directory
+ */
+ verify_dir_is_empty_or_create(current_path);
+
+ /*
+ * Get the COPY data
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COPY_OUT)
+ {
+ fprintf(stderr, _("%s: could not get COPY data stream: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ while (1)
+ {
+ int r;
+
+ if (copybuf != NULL)
+ {
+ PQfreemem(copybuf);
+ copybuf = NULL;
+ }
+
+ r = PQgetCopyData(conn, &copybuf, 0);
+
+ if (r == -1)
+ {
+ /*
+ * End of chunk
+ */
+ if (file)
+ fclose(file);
+
+ break;
+ }
+ else if (r == -2)
+ {
+ fprintf(stderr, _("%s: could not read COPY data: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ if (file == NULL)
+ {
+#ifndef WIN32
+ mode_t filemode;
+#endif
+
+ /*
+ * No current file, so this must be the header for a new file
+ */
+ if (r != 512)
+ {
+ fprintf(stderr, _("%s: Invalid tar block header size: %i\n"),
+ progname, r);
+ disconnect_and_exit(1);
+ }
+ totaldone += 512;
+
+ if (sscanf(copybuf + 124, "%11o", &current_len_left) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file size!\n"),
+ progname);
+ disconnect_and_exit(1);
+ }
+
+ /* Set permissions on the file */
+ if (sscanf(&copybuf[100], "%07o ", &filemode) != 1)
+ {
+ fprintf(stderr, _("%s: could not parse file mode!\n"),
+ progname);
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * All files are padded up to 512 bytes
+ */
+ current_padding =
+ ((current_len_left + 511) & ~511) - current_len_left;
+
+ /*
+ * First part of header is zero terminated filename
+ */
+ snprintf(fn, sizeof(fn), "%s/%s", current_path, copybuf);
+ if (fn[strlen(fn) - 1] == '/')
+ {
+ /*
+ * Ends in a slash means directory or symlink to directory
+ */
+ if (copybuf[156] == '5')
+ {
+ /*
+ * Directory
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (mkdir(fn, S_IRWXU) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create directory \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on directory \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+#endif
+ }
+ else if (copybuf[156] == '2')
+ {
+ /*
+ * Symbolic link
+ */
+ fn[strlen(fn) - 1] = '\0'; /* Remove trailing slash */
+ if (symlink(&copybuf[157], fn) != 0)
+ {
+ fprintf(stderr,
+ _("%s: could not create symbolic link from %s to %s: %s\n"),
+ progname, fn, &copybuf[157], strerror(errno));
+ disconnect_and_exit(1);
+ }
+ }
+ else
+ {
+ fprintf(stderr, _("%s: unknown link indicator \"%c\"\n"),
+ progname, copybuf[156]);
+ disconnect_and_exit(1);
+ }
+ continue; /* directory or link handled */
+ }
+
+ /*
+ * regular file
+ */
+ file = fopen(fn, "wb");
+ if (!file)
+ {
+ fprintf(stderr, _("%s: could not create file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+
+#ifndef WIN32
+ if (chmod(fn, filemode))
+ fprintf(stderr, _("%s: could not set permissions on file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+#endif
+
+ if (current_len_left == 0)
+ {
+ /*
+ * Done with this file, next one will be a new tar header
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* new file */
+ else
+ {
+ /*
+ * Continuing blocks in existing file
+ */
+ if (current_len_left == 0 && r == current_padding)
+ {
+ /*
+ * Received the padding block for this file, ignore it and
+ * close the file, then move on to the next tar header.
+ */
+ fclose(file);
+ file = NULL;
+ totaldone += r;
+ continue;
+ }
+
+ if (fwrite(copybuf, r, 1, file) != 1)
+ {
+ fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"),
+ progname, fn, strerror(errno));
+ disconnect_and_exit(1);
+ }
+ totaldone += r;
+ if (showprogress)
+ progress_report(rownum, fn);
+
+ current_len_left -= r;
+ if (current_len_left == 0 && current_padding == 0)
+ {
+ /*
+ * Received the last block, and there is no padding to be
+ * expected. Close the file and move on to the next tar
+ * header.
+ */
+ fclose(file);
+ file = NULL;
+ continue;
+ }
+ } /* continuing data in existing file */
+ } /* loop over all data blocks */
+
+ if (file != NULL)
+ {
+ fprintf(stderr, _("%s: last file was never finished!\n"), progname);
+ disconnect_and_exit(1);
+ }
+
+ if (copybuf != NULL)
+ PQfreemem(copybuf);
+}
+
+
+static PGconn *
+GetConnection(void)
+{
+ PGconn *tmpconn;
+ int argcount = 4; /* dbname, replication, fallback_app_name,
+ * password */
+ int i;
+ const char **keywords;
+ const char **values;
+ char *password = NULL;
+
+ if (dbhost)
+ argcount++;
+ if (dbuser)
+ argcount++;
+ if (dbport)
+ argcount++;
+
+ keywords = xmalloc0((argcount + 1) * sizeof(*keywords));
+ values = xmalloc0((argcount + 1) * sizeof(*values));
+
+ keywords[0] = "dbname";
+ values[0] = "replication";
+ keywords[1] = "replication";
+ values[1] = "true";
+ keywords[2] = "fallback_application_name";
+ values[2] = progname;
+ i = 3;
+ if (dbhost)
+ {
+ keywords[i] = "host";
+ values[i] = dbhost;
+ i++;
+ }
+ if (dbuser)
+ {
+ keywords[i] = "user";
+ values[i] = dbuser;
+ i++;
+ }
+ if (dbport)
+ {
+ keywords[i] = "port";
+ values[i] = dbport;
+ i++;
+ }
+
+ while (true)
+ {
+ if (dbgetpassword == 1)
+ {
+ /* Prompt for a password */
+ password = simple_prompt(_("Password: "), 100, false);
+ keywords[argcount - 1] = "password";
+ values[argcount - 1] = password;
+ }
+
+ tmpconn = PQconnectdbParams(keywords, values, true);
+ if (password)
+ free(password);
+
+ if (PQstatus(tmpconn) == CONNECTION_BAD &&
+ PQconnectionNeedsPassword(tmpconn) &&
+ dbgetpassword != -1)
+ {
+ dbgetpassword = 1; /* ask for password next time */
+ PQfinish(tmpconn);
+ continue;
+ }
+
+ if (PQstatus(tmpconn) != CONNECTION_OK)
+ {
+ fprintf(stderr, _("%s: could not connect to server: %s\n"),
+ progname, PQerrorMessage(tmpconn));
+ exit(1);
+ }
+
+ /* Connection ok! */
+ free(values);
+ free(keywords);
+ return tmpconn;
+ }
+}
+
+static void
+BaseBackup()
+{
+ PGresult *res;
+ char current_path[MAXPGPATH];
+ char escaped_label[MAXPGPATH];
+ int i;
+
+ /*
+ * Connect in replication mode to the server
+ */
+ conn = GetConnection();
+
+ PQescapeStringConn(conn, escaped_label, label, sizeof(escaped_label), &i);
+ snprintf(current_path, sizeof(current_path), "BASE_BACKUP LABEL '%s' %s %s",
+ escaped_label,
+ showprogress ? "PROGRESS" : "",
+ fastcheckpoint ? "FAST" : "");
+
+ if (PQsendQuery(conn, current_path) == 0)
+ {
+ fprintf(stderr, _("%s: could not start base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Get the header
+ */
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ fprintf(stderr, _("%s: could not initiate base backup: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+ if (PQntuples(res) < 1)
+ {
+ fprintf(stderr, _("%s: no data returned from server.\n"), progname);
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Sum up the total size, for progress reporting
+ */
+ totalsize = totaldone = 0;
+ tablespacecount = PQntuples(res);
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (showprogress)
+ totalsize += atol(PQgetvalue(res, i, 2));
+
+ /*
+ * Verify that tablespace directories are empty. Don't bother with the
+ * first one since it can be relocated, and it will be checked before we
+ * do anything anyway.
+ */
+ if (basedir != NULL && i > 0)
+ verify_dir_is_empty_or_create(PQgetvalue(res, i, 1));
+ }
+
+ /*
+ * When writing to stdout, require a single tablespace
+ */
+ if (tardir != NULL && strcmp(tardir, "-") == 0 && PQntuples(res) > 1)
+ {
+ fprintf(stderr, _("%s: can only write single tablespace to stdout, database has %i.\n"),
+ progname, PQntuples(res));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * Start receiving chunks
+ */
+ for (i = 0; i < PQntuples(res); i++)
+ {
+ if (tardir != NULL)
+ ReceiveTarFile(conn, res, i);
+ else
+ ReceiveAndUnpackTarFile(conn, res, i);
+ } /* Loop over all tablespaces */
+
+ if (showprogress)
+ {
+ progress_report(PQntuples(res), "");
+ fprintf(stderr, "\n"); /* Need to move to next line */
+ }
+ PQclear(res);
+
+ res = PQgetResult(conn);
+ if (PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ disconnect_and_exit(1);
+ }
+
+ /*
+ * End of copy data. Final result is already checked inside the loop.
+ */
+ PQfinish(conn);
+
+ if (verbose)
+ fprintf(stderr, "%s: base backup completed.\n", progname);
+}
+
+
+int
+main(int argc, char **argv)
+{
+ static struct option long_options[] = {
+ {"help", no_argument, NULL, '?'},
+ {"version", no_argument, NULL, 'V'},
+ {"pgdata", required_argument, NULL, 'D'},
+ {"tardir", required_argument, NULL, 'T'},
+ {"compress", required_argument, NULL, 'Z'},
+ {"label", required_argument, NULL, 'l'},
+ {"host", required_argument, NULL, 'h'},
+ {"port", required_argument, NULL, 'p'},
+ {"username", required_argument, NULL, 'U'},
+ {"no-password", no_argument, NULL, 'w'},
+ {"password", no_argument, NULL, 'W'},
+ {"verbose", no_argument, NULL, 'v'},
+ {"progress", no_argument, NULL, 'P'},
+ {NULL, 0, NULL, 0}
+ };
+ int c;
+
+ int option_index;
+
+ progname = get_progname(argv[0]);
+ set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_basebackup"));
+
+ if (argc > 1)
+ {
+ if (strcmp(argv[1], "--help") == 0 || strcmp(argv[1], "-?") == 0)
+ {
+ usage();
+ exit(0);
+ }
+ else if (strcmp(argv[1], "-V") == 0
+ || strcmp(argv[1], "--version") == 0)
+ {
+ puts("pg_basebackup (PostgreSQL) " PG_VERSION);
+ exit(0);
+ }
+ }
+
+ while ((c = getopt_long(argc, argv, "D:T:l:Z:c:h:p:U:wWvP",
+ long_options, &option_index)) != -1)
+ {
+ switch (c)
+ {
+ case 'D':
+ basedir = xstrdup(optarg);
+ break;
+ case 'T':
+ tardir = xstrdup(optarg);
+ break;
+ case 'l':
+ label = xstrdup(optarg);
+ break;
+ case 'Z':
+ compresslevel = atoi(optarg);
+ if (compresslevel <= 0 || compresslevel > 9)
+ {
+ fprintf(stderr, _("%s: invalid compression level \"%s\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ break;
+ case 'c':
+ if (strcasecmp(optarg, "fast") == 0)
+ fastcheckpoint = true;
+ else if (strcasecmp(optarg, "slow") == 0)
+ fastcheckpoint = false;
+ else
+ {
+ fprintf(stderr, _("%s: invalid checkpoint argument \"%s\", must be \"fast\" or \"slow\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ break;
+ case 'h':
+ dbhost = xstrdup(optarg);
+ break;
+ case 'p':
+ if (atoi(optarg) <= 0)
+ {
+ fprintf(stderr, _("%s: invalid port number \"%s\"\n"),
+ progname, optarg);
+ exit(1);
+ }
+ dbport = xstrdup(optarg);
+ break;
+ case 'U':
+ dbuser = xstrdup(optarg);
+ break;
+ case 'w':
+ dbgetpassword = -1;
+ break;
+ case 'W':
+ dbgetpassword = 1;
+ break;
+ case 'v':
+ verbose++;
+ break;
+ case 'P':
+ showprogress = true;
+ break;
+ default:
+
+ /*
+ * getopt_long already emitted a complaint
+ */
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+ }
+
+ /*
+ * Any non-option arguments?
+ */
+ if (optind < argc)
+ {
+ fprintf(stderr,
+ _("%s: too many command-line arguments (first is \"%s\")\n"),
+ progname, argv[optind]);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Required arguments
+ */
+ if (basedir == NULL && tardir == NULL)
+ {
+ fprintf(stderr, _("%s: no target directory specified\n"), progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ /*
+ * Mutually exclusive arguments
+ */
+ if (basedir != NULL && tardir != NULL)
+ {
+ fprintf(stderr,
+ _("%s: both directory mode and tar mode cannot be specified\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+ if (basedir != NULL && compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: only tar mode backups can be compressed\n"),
+ progname);
+ fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+ progname);
+ exit(1);
+ }
+
+#ifndef HAVE_LIBZ
+ if (compresslevel > 0)
+ {
+ fprintf(stderr,
+ _("%s: this build does not support compression\n"),
+ progname);
+ exit(1);
+ }
+#else
+ if (compresslevel > 0 && strcmp(tardir, "-") == 0)
+ {
+ fprintf(stderr,
+ _("%s: compression is not supported on standard output\n"),
+ progname);
+ exit(1);
+ }
+#endif
+
+ /*
+ * Verify directories
+ */
+ if (basedir)
+ verify_dir_is_empty_or_create(basedir);
+ else if (strcmp(tardir, "-") != 0)
+ verify_dir_is_empty_or_create(tardir);
+
+
+
+ BaseBackup();
+
+ return 0;
+}
diff --git a/src/include/replication/basebackup.h b/src/include/replication/basebackup.h
index eb2e160..80f814b 100644
--- a/src/include/replication/basebackup.h
+++ b/src/include/replication/basebackup.h
@@ -12,6 +12,6 @@
#ifndef _BASEBACKUP_H
#define _BASEBACKUP_H
-extern void SendBaseBackup(const char *backup_label, bool progress);
+extern void SendBaseBackup(const char *backup_label, bool progress, bool fastcheckpoint);
#endif /* _BASEBACKUP_H */
diff --git a/src/include/replication/replnodes.h b/src/include/replication/replnodes.h
index 4f4a1a3..fc81414 100644
--- a/src/include/replication/replnodes.h
+++ b/src/include/replication/replnodes.h
@@ -47,6 +47,7 @@ typedef struct BaseBackupCmd
NodeTag type;
char *label;
bool progress;
+ bool fastcheckpoint;
} BaseBackupCmd;
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 29c3c77..40fb130 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -273,6 +273,8 @@ sub mkvcbuild
$initdb->AddLibrary('wsock32.lib');
$initdb->AddLibrary('ws2_32.lib');
+ my $pgbasebackup = AddSimpleFrontend('pg_basebackup', 1);
+
my $pgconfig = AddSimpleFrontend('pg_config');
my $pgcontrol = AddSimpleFrontend('pg_controldata');
On Tue, Jan 18, 2011 at 8:40 PM, Magnus Hagander <magnus@hagander.net> wrote:
>> When I untar the tar file taken by pg_basebackup, I got the following
>> messages:
>>
>> $ tar xf base.tar
>> tar: Skipping to next header
>> tar: Archive contains obsolescent base-64 headers
>> tar: Error exit delayed from previous errors
>>
>> Is this a bug? This happens only when I create $PGDATA by using
>> initdb -X (i.e., I relocated the pg_xlog directory elsewhere than
>> $PGDATA).
>
> Interesting. What version of tar and what platform? I can't reproduce
> that here...
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
$ uname -a
Linux hermes 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64
x86_64 x86_64 GNU/Linux
$ tar --version
tar (GNU tar) 1.15.1
>> + r = PQgetCopyData(conn, &copybuf, 0);
>> + if (r == -1)
>>
>> Since -1 of PQgetCopyData might indicate an error, in this case,
>> we would need to call PQgetResult?
>
> Uh, -1 means end of data, no? -2 means error?
The comment in pqGetCopyData3 says
/*
* On end-of-copy, exit COPY_OUT or COPY_BOTH mode and let caller
* read status with PQgetResult(). The normal case is that it's
* Copy Done, but we let parseInput read that. If error, we expect
* the state was already changed.
*/
Also the comment in getCopyDataMessage says
/*
* If it's a legitimate async message type, process it. (NOTIFY
* messages are not currently possible here, but we handle them for
* completeness.) Otherwise, if it's anything except Copy Data,
* report end-of-copy.
*/
That is why I thought so. BTW, walreceiver already does that.
>> ReceiveTarFile seems refactorable by using GZWRITE and GZCLOSE
>> macros.
>
> You mean the ones from pg_dump? I don't think so. We can't use
> gzwrite() with compression level 0 on the tar output, because it will
> still write a gz header. With pg_dump, that is ok because it's our
> format, but with a .tar (without .gz) I don't think it is.

Right. I withdraw the comment.
>> + /*
>> + * Make sure we're unpacking into an empty directory
>> + */
>> + verify_dir_is_empty_or_create(current_path);
>>
>> Can pg_basebackup take a backup of $PGDATA including a tablespace
>> directory, without an error? The above code seems to prevent that...
>
> Uh, how do you mean it would prevent that? It requires that the
> directory you're writing the tablespace to is empty or nonexistent,
> but that shouldn't prevent a backup, no? It will prevent you from
> overwriting things with your backup, but that's intentional - if you
> don't need the old dir, just remove it.
What I'm worried about is the case where a tablespace is created
under the $PGDATA directory. In this case, ISTM that pg_basebackup
takes the backup of $PGDATA including the tablespace directory first,
and then takes the backup of the tablespace directory again. But,
since the tablespace directory is not already empty, the backup of
the tablespace seems to fail.
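The failure mode Fujii describes follows directly from the quoted check. A hypothetical Python rendering of verify_dir_is_empty_or_create() (the name comes from the patch; the behavior is assumed from the quoted comment):

```python
import os
import tempfile

def verify_dir_is_empty_or_create(path):
    """Create a missing directory, accept an empty one,
    refuse a non-empty one - mirroring the quoted check."""
    if os.path.isdir(path):
        if os.listdir(path):
            raise RuntimeError('directory "%s" exists but is not empty' % path)
    else:
        os.makedirs(path)

pgdata = tempfile.mkdtemp()
tblspc = os.path.join(pgdata, "mnt", "tblspcdir")

# First pass (unpacking $PGDATA) creates and populates the tablespace dir...
verify_dir_is_empty_or_create(tblspc)
open(os.path.join(tblspc, "16384"), "wb").close()

# ...so a second pass (unpacking the tablespace itself) is rejected.
try:
    verify_dir_is_empty_or_create(tblspc)
    second_pass_failed = False
except RuntimeError:
    second_pass_failed = True
```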
That was easy - done with "-c <fast|slow>".
Thanks a lot!
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Wed, Jan 19, 2011 at 4:12 AM, Magnus Hagander <magnus@hagander.net> wrote:
Ah, ok. I've added the errorcode now, PFA. I also fixed an error in
the change for result codes I broke in the last patch. github branch
updated as usual.
Great. Thanks for the quick update!
Here are some more comments:
+ * IDENTIFICATION
+ * src/bin/pg_basebackup.c
Typo: s/"src/bin/pg_basebackup.c"/"src/bin/pg_basebackup/pg_basebackup.c"
+ printf(_(" -c, --checkpoint=fast|slow\n"
+ " set fast or slow checkpoinging\n"));
Typo: s/checkpoinging/checkpointing
The "fast or slow" seems to lead users to always choose "fast". Instead,
what about "fast or smooth", "fast or spread" or "immediate or delayed"?
You seem to have forgotten to support the "--checkpoint" long option.
The struct long_options needs to be updated.
What if pg_basebackup receives a signal while doing a backup?
For example, users might do Ctrl-C to cancel the long-running backup.
Should we define a signal handler and send a Terminate message
to the server to cancel the backup?
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Wed, Jan 19, 2011 at 06:14, Fujii Masao <masao.fujii@gmail.com> wrote:
On Wed, Jan 19, 2011 at 4:12 AM, Magnus Hagander <magnus@hagander.net> wrote:
Ah, ok. I've added the errorcode now, PFA. I also fixed an error in
the change for result codes I broke in the last patch. github branch
updated as usual.
Great. Thanks for the quick update!
Here are some more comments:
+ * IDENTIFICATION
+ *  src/bin/pg_basebackup.c
Typo: s/"src/bin/pg_basebackup.c"/"src/bin/pg_basebackup/pg_basebackup.c"
Oops.
+ printf(_(" -c, --checkpoint=fast|slow\n"
+          "          set fast or slow checkpoinging\n"));
Typo: s/checkpoinging/checkpointing
The "fast or slow" seems to lead users to always choose "fast". Instead,
what about "fast or smooth", "fast or spread" or "immediate or delayed"?
Hmm. "fast or spread" seems reasonable to me. And I want to use "fast"
for the fast version, because that's what we call it when you use
pg_start_backup(). I'll go change it to spread for now - it's the one
I can find used in the docs.
You seem to have forgotten to support the "--checkpoint" long option.
The struct long_options needs to be updated.
Wow, that clearly went too fast. Fixed as well.
What if pg_basebackup receives a signal while doing a backup?
For example, users might do Ctrl-C to cancel the long-running backup.
Should we define a signal handler and send a Terminate message
to the server to cancel the backup?
Nah, we'll just disconnect and it'll deal with things that way. Just
like we do with e.g. pg_dump. I don't see the need to complicate it
with that.
(new patch on github in 5 minutes)
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Fujii Masao <masao.fujii@gmail.com> writes:
What I'm worried about is the case where a tablespace is created
under the $PGDATA directory.
What would be the sense of that? If you're concerned about whether the
code handles it correctly, maybe the right solution is to add code to
CREATE TABLESPACE to disallow it.
regards, tom lane
On Wed, Jan 19, 2011 at 9:37 PM, Magnus Hagander <magnus@hagander.net> wrote:
The "fast or slow" seems to lead users to always choose "fast". Instead,
what about "fast or smooth", "fast or spread" or "immediate or delayed"?
Hmm. "fast or spread" seems reasonable to me. And I want to use "fast"
for the fast version, because that's what we call it when you use
pg_start_backup(). I'll go change it to spread for now - it's the one
I can find used in the docs.
Fair enough.
What if pg_basebackup receives a signal while doing a backup?
For example, users might do Ctrl-C to cancel the long-running backup.
Should we define a signal handler and send a Terminate message
to the server to cancel the backup?
Nah, we'll just disconnect and it'll deal with things that way. Just
like we do with e.g. pg_dump. I don't see the need to complicate it
with that.
Okay.
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Thu, Jan 20, 2011 at 2:21 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Fujii Masao <masao.fujii@gmail.com> writes:
What I'm worried about is the case where a tablespace is created
under the $PGDATA directory.
What would be the sense of that? If you're concerned about whether the
code handles it correctly, maybe the right solution is to add code to
CREATE TABLESPACE to disallow it.
I'm not sure why that's the right solution. Why do you think that we should
not create the tablespace under the $PGDATA directory? I wouldn't be surprised
if people mount the filesystem on $PGDATA/mnt and create the
tablespace on it.
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
Fujii Masao <masao.fujii@gmail.com> writes:
On Thu, Jan 20, 2011 at 2:21 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Fujii Masao <masao.fujii@gmail.com> writes:
What I'm worried about is the case where a tablespace is created
under the $PGDATA directory.
What would be the sense of that? If you're concerned about whether the
code handles it correctly, maybe the right solution is to add code to
CREATE TABLESPACE to disallow it.
I'm not sure why that's the right solution. Why do you think that we should
not create the tablespace under the $PGDATA directory? I wouldn't be surprised
if people mount the filesystem on $PGDATA/mnt and create the
tablespace on it.
No? Usually, having a mount point in a non-root-owned directory is
considered a Bad Thing.
regards, tom lane
On Thu, Jan 20, 2011 at 10:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I'm not sure why that's the right solution. Why do you think that we should
not create the tablespace under the $PGDATA directory? I wouldn't be surprised
if people mount the filesystem on $PGDATA/mnt and create the
tablespace on it.
No? Usually, having a mount point in a non-root-owned directory is
considered a Bad Thing.
Hmm.. but ISTM we can have a root-owned mount point in $PGDATA
and create a tablespace there.
$ su -
# mkdir $PGDATA/mnt
# mount -t tmpfs tmpfs $PGDATA/mnt
# exit
$ mkdir $PGDATA/mnt/tblspcdir
$ psql
=# CREATE TABLESPACE tblspc LOCATION '$PGDATA/mnt/tblspcdir';
CREATE TABLESPACE
Am I missing something?
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Wed, Jan 19, 2011 at 9:37 PM, Magnus Hagander <magnus@hagander.net> wrote:
Great. Thanks for the quick update!
Here are some more comments:
Here are comments on the documentation. The other code looks good.
It's helpful to document what to set to allow pg_basebackup connections.
That is not only the REPLICATION privilege but also max_wal_senders and
pg_hba.conf.
+ <refsect1>
+ <title>Options</title>
Can we list the descriptions of the options in the same order as
"pg_basebackup --help" does?
It's helpful to document that the target directory must be specified and
it must be empty.
+ <para>
+ The backup will include all files in the data directory and tablespaces,
+ including the configuration files and any additional files placed in the
+ directory by third parties. Only regular files and directories are allowed
+ in the data directory, no symbolic links or special device files.
The latter sentence means that the backup of the database cluster
created by initdb -X is not supported? Because the symlink to the
actual WAL directory is included in it.
OTOH, I found the following source code comments:
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
This says that symlinks are supported. Which is true? Are symlinks
supported only in tar format?
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Thu, Jan 20, 2011 at 05:23, Fujii Masao <masao.fujii@gmail.com> wrote:
On Wed, Jan 19, 2011 at 9:37 PM, Magnus Hagander <magnus@hagander.net> wrote:
Great. Thanks for the quick update!
Here are some more comments:
Here are comments against the documents. The other code looks good.
Thanks!
It's helpful to document what to set to allow pg_basebackup connections.
That is not only the REPLICATION privilege but also max_wal_senders and
pg_hba.conf.
Hmm. Yeah, I guess that wouldn't hurt. Will add that.
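For reference, the settings being discussed would look roughly like this on a 9.1-era server (addresses and values here are illustrative only):

```
# postgresql.conf
wal_level = archive        # anything above 'minimal'
max_wal_senders = 1        # leave a slot free for the backup connection

# pg_hba.conf - allow a replication connection for the backup role
host    replication    postgres    192.168.0.10/32    md5
```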
+ <refsect1>
+ <title>Options</title>
Can we list the descriptions of the options in the same order as
"pg_basebackup --help" does?
It's helpful to document that the target directory must be specified and
it must be empty.
Yeah, that's on the list - I just wanted to make any other changes
first before I did that. Based on (no further) feedback and a few
extra questions, I'm going to change it per your suggestion to use -D
<dir> -F <format>, instead of -D/-T, which will change that stuff
anyway. So I'll reorder them at that time.
+ <para>
+  The backup will include all files in the data directory and tablespaces,
+  including the configuration files and any additional files placed in the
+  directory by third parties. Only regular files and directories are allowed
+  in the data directory, no symbolic links or special device files.
The latter sentence means that the backup of the database cluster
created by initdb -X is not supported? Because the symlink to the
actual WAL directory is included in it.
No, it doesn't mean that. pg_xlog is specifically excluded, and sent as an empty
directory, so upon restore you will have an empty pg_xlog directory.
OTOH, I found the following source code comments:
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
This says that symlinks are supported. Which is true? Are symlinks
supported only in tar format?
That's actually a *backend* side restriction. If there is a symlink
anywhere other than pg_tblspc in the data directory, we simply won't
send it across (with a warning).
The frontend code supports creating symlinks, both in directory format
and in tar format (actually, in tar format it doesn't do anything, of
course, it just lets it through)
It wouldn't actually be hard to allow the inclusion of symlinks on the
backend side. But it would make verification a lot harder - for
example, if someone symlinked out pg_clog (as an example), we'd back
up the symlink but not the actual files since they're not actually
registered as a tablespace.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Thu, Jan 20, 2011 at 12:42, Magnus Hagander <magnus@hagander.net> wrote:
On Thu, Jan 20, 2011 at 05:23, Fujii Masao <masao.fujii@gmail.com> wrote:
It's helpful to document what to set to allow pg_basebackup connections.
That is not only the REPLICATION privilege but also max_wal_senders and
pg_hba.conf.
Hmm. Yeah, I guess that wouldn't hurt. Will add that.
Added, see github branch.
+ <refsect1>
+ <title>Options</title>
Can we list the descriptions of the options in the same order as
"pg_basebackup --help" does?
It's helpful to document that the target directory must be specified and
it must be empty.
Yeah, that's on the list - I just wanted to make any other changes
first before I did that. Based on (no further) feedback and a few
extra questions, I'm going to change it per your suggestion to use -D
<dir> -F <format>, instead of -D/-T, which will change that stuff
anyway. So I'll reorder them at that time.
Updated on github.
+ <para>
+  The backup will include all files in the data directory and tablespaces,
+  including the configuration files and any additional files placed in the
+  directory by third parties. Only regular files and directories are allowed
+  in the data directory, no symbolic links or special device files.
The latter sentence means that the backup of the database cluster
created by initdb -X is not supported? Because the symlink to the
actual WAL directory is included in it.
No, it doesn't mean that. pg_xlog is specifically excluded, and sent as an empty
directory, so upon restore you will have an empty pg_xlog directory.
Actually, when I verified that statement, I found a bug where we sent
the wrong thing if pg_xlog was a symlink, leading to a corrupt
tarfile! Patch is in the github branch.
OTOH, I found the following source code comments:
+ * Receive a tar format stream from the connection to the server, and unpack
+ * the contents of it into a directory. Only files, directories and
+ * symlinks are supported, no other kinds of special files.
This says that symlinks are supported. Which is true? Are symlinks
supported only in tar format?
That's actually a *backend* side restriction. If there is a symlink
anywhere other than pg_tblspc in the data directory, we simply won't
send it across (with a warning).
The frontend code supports creating symlinks, both in directory format
and in tar format (actually, in tar format it doesn't do anything, of
course, it just lets it through).
It wouldn't actually be hard to allow the inclusion of symlinks on the
backend side. But it would make verification a lot harder - for
example, if someone symlinked out pg_clog (as an example), we'd back
up the symlink but not the actual files, since they're not actually
registered as a tablespace.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:27, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It seems pg_create_backup would be the most natural because we already
have pg_start_backup and pg_stop_backup.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
On Thu, Jan 20, 2011 at 10:01 AM, Bruce Momjian <bruce@momjian.us> wrote:
Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:27, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It seems pg_create_backup would be the most natural because we already
have pg_start_backup and pg_stop_backup.
Uh, wow. That's really mixing apples and oranges.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
On Thu, Jan 20, 2011 at 10:01 AM, Bruce Momjian <bruce@momjian.us> wrote:
Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:27, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It seems pg_create_backup would be the most natural because we already
have pg_start_backup and pg_stop_backup.
Uh, wow. That's really mixing apples and oranges.
I read the description as:
+ You can also use the <xref linkend="app-pgbasebackup"> tool to take
+ the backup, instead of manually copying the files. This tool will take
+ care of the <function>pg_start_backup()</>, copy and
+ <function>pg_stop_backup()</> steps automatically, and transfers the
+ backup over a regular <productname>PostgreSQL</productname> connection
+ using the replication protocol, instead of requiring filesystem level
+ access.
so I thought, well it does pg_start_backup and pg_stop_backup, and also
creates the data directory.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
On Thu, Jan 20, 2011 at 10:15 AM, Bruce Momjian <bruce@momjian.us> wrote:
Robert Haas wrote:
On Thu, Jan 20, 2011 at 10:01 AM, Bruce Momjian <bruce@momjian.us> wrote:
Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:27, Simon Riggs <simon@2ndquadrant.com> wrote:
On Mon, 2011-01-17 at 16:20 +0100, Magnus Hagander wrote:
On Mon, Jan 17, 2011 at 16:18, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Jan 17, 2011 at 8:55 AM, Magnus Hagander <magnus@hagander.net> wrote:
Hmm. I don't like those names at all :(
I agree. I don't think your original names are bad, as long as
they're well-documented. I sympathize with Simon's desire to make it
clear that these use the replication framework, but I really don't
want the command names to be that long.
Actually, after some IM chats, I think pg_streamrecv should be
renamed, probably to pg_walstream (or pg_logstream, but pg_walstream
is a lot more specific than that)
pg_stream_log
pg_stream_backup
Those seem better.
Tom, would those solve your concerns about it being clear which side
they are on? Or do you think you'd still risk reading them as the
sending side?
It seems pg_create_backup would be the most natural because we already
have pg_start_backup and pg_stop_backup.
Uh, wow. That's really mixing apples and oranges.
I read the description as:
+    You can also use the <xref linkend="app-pgbasebackup"> tool to take
+    the backup, instead of manually copying the files. This tool will take
+    care of the <function>pg_start_backup()</>, copy and
+    <function>pg_stop_backup()</> steps automatically, and transfers the
+    backup over a regular <productname>PostgreSQL</productname> connection
+    using the replication protocol, instead of requiring filesystem level
+    access.
so I thought, well, it does pg_start_backup and pg_stop_backup, and also
creates the data directory.
Yeah, but pg_start_backup() and pg_stop_backup() are server functions,
and this is an application.
Also, it won't actually work unless the server has replication
configured (wal_level!=minimal, max_wal_senders>0, and possibly some
setting for wal_keep_segments), which has been the main point of the
naming discussion thus far. Now, you know what would be REALLY cool?
Making this work without any special advance configuration. Like if
we somehow figured out a way to make max_wal_senders unnecessary, and
a way to change wal_level without bouncing the server, so that we
could temporarily boost the WAL level from minimal to archive if
someone's running a backup.
That, however, is not going to happen for 9.1... but it would be *really* cool.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas wrote:
I read the description as:
+    You can also use the <xref linkend="app-pgbasebackup"> tool to take
+    the backup, instead of manually copying the files. This tool will take
+    care of the <function>pg_start_backup()</>, copy and
+    <function>pg_stop_backup()</> steps automatically, and transfers the
+    backup over a regular <productname>PostgreSQL</productname> connection
+    using the replication protocol, instead of requiring filesystem level
+    access.
so I thought, well, it does pg_start_backup and pg_stop_backup, and also
creates the data directory.
Yeah, but pg_start_backup() and pg_stop_backup() are server functions,
and this is an application.
Also, it won't actually work unless the server has replication
configured (wal_level!=minimal, max_wal_senders>0, and possibly some
setting for wal_keep_segments), which has been the main point of the
naming discussion thus far. Now, you know what would be REALLY cool?
Making this work without any special advance configuration. Like if
we somehow figured out a way to make max_wal_senders unnecessary, and
a way to change wal_level without bouncing the server, so that we
could temporarily boost the WAL level from minimal to archive if
someone's running a backup.
That, however, is not going to happen for 9.1... but it would be
*really* cool.
Well, when we originally implemented PITR, we could have found a way to
avoid using SQL commands to start/stop backup, but we envisioned that we
would want to hook things on to those commands so we created a stable
API that we could improve, and we have.
Do we envision pg_basebackup as something we will enhance, and if so,
should we consider a generic name?
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
Fujii Masao <masao.fujii@gmail.com> writes:
On Thu, Jan 20, 2011 at 10:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I'm not sure why that's the right solution. Why do you think that we should
not create the tablespace under the $PGDATA directory? I wouldn't be surprised
if people mount the filesystem on $PGDATA/mnt and create the
tablespace on it.
No? Usually, having a mount point in a non-root-owned directory is
considered a Bad Thing.
Hmm.. but ISTM we can have a root-owned mount point in $PGDATA
and create a tablespace there.
Nonsense. The more general statement is that it's a security hole
unless the mount point *and everything above it* is root owned.
In the case you sketch, there would be nothing to stop the (non root)
postgres user from renaming $PGDATA/mnt to something else and then
inserting his own trojan-horse directories.
Given that nobody except postgres and root could get to the mount point,
maybe there wouldn't be any really serious problems caused that way ---
but I still say that it's bad practice that no competent sysadmin would
accept.
Moreover, I see no positive *good* reason to do it. There isn't
anyplace under $PGDATA that users should be randomly creating
directories, much less mount points.
regards, tom lane
On Thu, Jan 20, 2011 at 16:45, Bruce Momjian <bruce@momjian.us> wrote:
Robert Haas wrote:
I read the description as:
+    You can also use the <xref linkend="app-pgbasebackup"> tool to take
+    the backup, instead of manually copying the files. This tool will take
+    care of the <function>pg_start_backup()</>, copy and
+    <function>pg_stop_backup()</> steps automatically, and transfers the
+    backup over a regular <productname>PostgreSQL</productname> connection
+    using the replication protocol, instead of requiring filesystem level
+    access.
so I thought, well, it does pg_start_backup and pg_stop_backup, and also
creates the data directory.
Yeah, but pg_start_backup() and pg_stop_backup() are server functions,
and this is an application.
Also, it won't actually work unless the server has replication
configured (wal_level!=minimal, max_wal_senders>0, and possibly some
setting for wal_keep_segments), which has been the main point of the
naming discussion thus far. Now, you know what would be REALLY cool?
Making this work without any special advance configuration. Like if
we somehow figured out a way to make max_wal_senders unnecessary, and
a way to change wal_level without bouncing the server, so that we
could temporarily boost the WAL level from minimal to archive if
someone's running a backup.
That, however, is not going to happen for 9.1... but it would be
*really* cool.
Well, when we originally implemented PITR, we could have found a way to
avoid using SQL commands to start/stop backup, but we envisioned that we
would want to hook things on to those commands so we created a stable
API that we could improve, and we have.
Yeah, we're certainly not taking those *away*.
Do we envision pg_basebackup as something we will enhance, and if so,
should we consider a generic name?
Well, it's certainly going to be enhanced. I think there are two main
uses for it - backups, and setting up replication slaves. I can't see
it expanding beyond those, really.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Robert Haas <robertmhaas@gmail.com> writes:
Also, it won't actually work unless the server has replication
configured (wal_level!=minimal, max_wal_senders>0, and possibly some
setting for wal_keep_segments), which has been the main point of the
naming discussion thus far. Now, you know what would be REALLY cool?
Making this work without any special advance configuration. Like if
we somehow figured out a way to make max_wal_senders unnecessary, and
a way to change wal_level without bouncing the server, so that we
could temporarily boost the WAL level from minimal to archive if
someone's running a backup.
On not using max_wal_senders we're on our way; you "just" have to use the
external walreceiver that Magnus already wrote the code for. WAL level, I
don't know that we have that already, but a big part of what this base
backup tool is useful for is preparing a standby… so you certainly want
to change that setting there.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Thu, Jan 20, 2011 at 11:59 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
Also, it won't actually work unless the server has replication
configured (wal_level!=minimal, max_wal_senders>0, and possibly some
setting for wal_keep_segments), which has been the main point of the
naming discussion thus far. Now, you know what would be REALLY cool?
Making this work without any special advance configuration. Like if
we somehow figured out a way to make max_wal_senders unnecessary, and
a way to change wal_level without bouncing the server, so that we
could temporarily boost the WAL level from minimal to archive if
someone's running a backup.
On not using max_wal_senders we're on our way; you "just" have to use the
external walreceiver that Magnus already wrote the code for. WAL level, I
don't know that we have that already, but a big part of what this base
backup tool is useful for is preparing a standby… so you certainly want
to change that setting there.
Well, yeah, but it would be nice to also use it just to take a regular
old backup on a system that doesn't otherwise need replication.
I think that the basic problem with wal_level is that to increase it
you need to somehow ensure that all the backends have the new setting,
and then checkpoint. Right now, the backends get the value through
the GUC machinery, and so there's no particular bound on how long it
could take for them to pick up the new value. I think if we could
find some way of making sure that the backends got the new value in a
reasonably timely fashion, we'd be pretty close to being able to do
this. But it's hard to see how to do that.
I had some vague idea of creating a mechanism for broadcasting
critical parameter changes. You'd make a structure in shared memory
containing the "canonical" values of wal_level and all other critical
variables, and the structure would also contain a 64-bit counter.
Whenever you want to make a parameter change, you lock the structure,
make your change, bump the counter, and release the lock. Then,
there's a second structure, also in shared memory, where backends
report the value that the counter had the last time they updated their
local copies of the structure from the shared structure. You can
watch that to find out when everyone's guaranteed to have the new
value. If someone doesn't respond quickly enough, you could send them
a signal to get them moving. What would really be ideal is if you
could actually make this safe enough that the interrupt service
routine could do all the work, rather than just setting a flag. Or
maybe CHECK_FOR_INTERRUPTS(). If you can't make it safe enough to put
it in someplace pretty low-level like that, the whole idea might fall
apart, because it wouldn't be useful to have a way of doing this that
mostly works except sometimes it just sits there and hangs for a
really long time.
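Robert's scheme can be sketched as a toy, in Python, with a lock standing in for the shared-memory structure and explicit sync() calls standing in for backends servicing the signal (all names here are hypothetical):

```python
import threading

class ParamBroadcast:
    """Canonical parameter values plus a change counter; each backend
    records the counter value it had when it last refreshed."""
    def __init__(self, n_backends):
        self.lock = threading.Lock()
        self.version = 0
        self.params = {"wal_level": "minimal"}
        self.acked = [0] * n_backends

    def set_param(self, name, value):
        # Lock the structure, make the change, bump the counter, release.
        with self.lock:
            self.params[name] = value
            self.version += 1
            return self.version

    def sync(self, backend_id):
        # A backend's CHECK_FOR_INTERRUPTS-style refresh: copy the shared
        # values and report how far this backend has caught up.
        with self.lock:
            self.acked[backend_id] = self.version
            return dict(self.params)

    def all_caught_up(self, version):
        with self.lock:
            return all(a >= version for a in self.acked)

bcast = ParamBroadcast(n_backends=3)
v = bcast.set_param("wal_level", "archive")
caught_up_before = bcast.all_caught_up(v)   # False: nobody refreshed yet
for backend in range(3):
    local = bcast.sync(backend)             # the "signal them" step
caught_up_after = bcast.all_caught_up(v)    # True: safe to checkpoint
```

Only once all_caught_up() reports true could the checkpoint that makes the new wal_level effective proceed.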
All pie in the sky at this point...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
I think that the basic problem with wal_level is that to increase it
you need to somehow ensure that all the backends have the new setting,
and then checkpoint. Right now, the backends get the value through
the GUC machinery, and so there's no particular bound on how long it
could take for them to pick up the new value. I think if we could
find some way of making sure that the backends got the new value in a
reasonably timely fashion, we'd be pretty close to being able to do
this. But it's hard to see how to do that.
Well, you just said when to force the "reload" to take effect: at
checkpoint time. IIRC we already multiplex SIGUSR1; is it possible to
add that behavior here, and signal every backend at checkpoint time
when wal_level has changed?
I had some vague idea of creating a mechanism for broadcasting
critical parameter changes. You'd make a structure in shared memory
containing the "canonical" values of wal_level and all other critical
variables, and the structure would also contain a 64-bit counter.
Whenever you want to make a parameter change, you lock the structure,
make your change, bump the counter, and release the lock. Then,
there's a second structure, also in shared memory, where backends
report the value that the counter had the last time they updated their
local copies of the structure from the shared structure. You can
watch that to find out when everyone's guaranteed to have the new
value. If someone doesn't respond quickly enough, you could send them
a signal to get them moving. What would really be ideal is if you
could actually make this safe enough that the interrupt service
routine could do all the work, rather than just setting a flag. Or
maybe CHECK_FOR_INTERRUPTS(). If you can't make it safe enough to put
it in someplace pretty low-level like that, the whole idea might fall
apart, because it wouldn't be useful to have a way of doing this that
mostly works except sometimes it just sits there and hangs for a
really long time.

All pie in the sky at this point...
Unless we manage to simplify enough the idea to have wal_level SIGHUP.
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Thu, Jan 20, 2011 at 2:10 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
I think that the basic problem with wal_level is that to increase it
you need to somehow ensure that all the backends have the new setting,
and then checkpoint. Right now, the backends get the value through
the GUC machinery, and so there's no particular bound on how long it
could take for them to pick up the new value. I think if we could
find some way of making sure that the backends got the new value in a
reasonably timely fashion, we'd be pretty close to being able to do
this. But it's hard to see how to do that.

Well, you just said when to force the "reload" to take effect: at
checkpoint time. IIRC we already multiplex SIGUSR1, is that possible to
add that behavior here? And signal every backend at checkpoint time
when wal_level has changed?
Sending them a signal seems like a promising approach, but the trick
is guaranteeing that they've actually acted on it before you start the
checkpoint.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
Sending them a signal seems like a promising approach, but the trick
is guaranteeing that they've actually acted on it before you start the
checkpoint.
How much using a latch here would help? Or be overkill?
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On 20.01.2011 22:15, Dimitri Fontaine wrote:
Robert Haas<robertmhaas@gmail.com> writes:
Sending them a signal seems like a promising approach, but the trick
is guaranteeing that they've actually acted on it before you start the
checkpoint.

How much using a latch here would help? Or be overkill?
A latch doesn't give you an acknowledgment from the backends that
they've received and acted on the guc change. You could use it as a
building block to construct that, though.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
On Thu, Jan 20, 2011 at 2:10 PM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:

Robert Haas <robertmhaas@gmail.com> writes:
I think that the basic problem with wal_level is that to increase it
you need to somehow ensure that all the backends have the new setting,
and then checkpoint.

Well, you just said when to force the "reload" to take effect: at
checkpoint time. IIRC we already multiplex SIGUSR1, is that possible to
add that behavior here? And signal every backend at checkpoint time
when wal_level has changed?
Sending them a signal seems like a promising approach, but the trick
is guaranteeing that they've actually acted on it before you start the
checkpoint.
Have the backends show their current wal_level in their PGPROC entries.
Sleep till they're all reporting the right thing, then fire checkpoint.
regards, tom lane
On Fri, Jan 21, 2011 at 1:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Fujii Masao <masao.fujii@gmail.com> writes:
On Thu, Jan 20, 2011 at 10:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I'm not sure why that's the right solution. Why do you think that we should
not create the tablespace under the $PGDATA directory? I'm not surprised
that people mount the filesystem on $PGDATA/mnt and create the
tablespace on it.

No? Usually, having a mount point in a non-root-owned directory is
considered a Bad Thing.

Hmm.. but ISTM we can have a root-owned mount point in $PGDATA
and create a tablespace there.

Nonsense. The more general statement is that it's a security hole
unless the mount point *and everything above it* is root owned.
Probably true. But we cannot create a tablespace for a root-owned directory.
The directory must be owned by the PostgreSQL system user. So ISTM that
you say that creating a tablespace on a mount point itself is a security hole.
In the case you sketch, there would be nothing to stop the (non root)
postgres user from renaming $PGDATA/mnt to something else and then
inserting his own trojan-horse directories.
Hmm.. can non-root postgres user really rename the root-owned directory
while it's being mounted?
Moreover, I see no positive *good* reason to do it. There isn't
anyplace under $PGDATA that users should be randomly creating
directories, much less mount points.
When taking a base backup, you don't need to take a backup of tablespaces
separately from that of $PGDATA. You have only to take a backup of $PGDATA.
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Wed, Jan 19, 2011 at 1:12 PM, Fujii Masao <masao.fujii@gmail.com> wrote:
+ r = PQgetCopyData(conn, &copybuf, 0);
+ if (r == -1)

Since -1 of PQgetCopyData might indicate an error, in this case,
we would need to call PQgetResult?

Uh, -1 means end of data, no? -2 means error?
The comment in pqGetCopyData3 says
/*
* On end-of-copy, exit COPY_OUT or COPY_BOTH mode and let caller
* read status with PQgetResult(). The normal case is that it's
* Copy Done, but we let parseInput read that. If error, we expect
* the state was already changed.
*/

Also the comment in getCopyDataMessage says
/*
* If it's a legitimate async message type, process it. (NOTIFY
* messages are not currently possible here, but we handle them for
* completeness.) Otherwise, if it's anything except Copy Data,
* report end-of-copy.
*/

So I thought that. BTW, walreceiver has already done that.
When PQgetCopyData returns -1, PQgetResult should be called. This is true.
But when I read the patch again, I found that Magnus has already done that.
So my comment missed the point :( Sorry for noise.
+ res = PQgetResult(conn);
+ if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
+ {
+ fprintf(stderr, _("%s: final receive failed: %s\n"),
+ progname, PQerrorMessage(conn));
+ exit(1);
+ }
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
On Fri, Jan 21, 2011 at 07:02, Fujii Masao <masao.fujii@gmail.com> wrote:
On Fri, Jan 21, 2011 at 1:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Fujii Masao <masao.fujii@gmail.com> writes:
On Thu, Jan 20, 2011 at 10:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
In the case you sketch, there would be nothing to stop the (non root)
postgres user from renaming $PGDATA/mnt to something else and then
inserting his own trojan-horse directories.

Hmm.. can non-root postgres user really rename the root-owned directory
while it's being mounted?
No, but you can rename the parent directory of it, and then create
another directory inside it with the same name as the root owned
directory had.
Moreover, I see no positive *good* reason to do it. There isn't
anyplace under $PGDATA that users should be randomly creating
directories, much less mount points.

When taking a base backup, you don't need to take a backup of tablespaces
separately from that of $PGDATA. You have only to take a backup of $PGDATA.
But why are you creating tablespaces in the first place, if you're
sticking them in $PGDATA?
I'd put myself in the +1 camp for "throw an error when someone tries
to create a tablespace inside $PGDATA".
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Fujii Masao <masao.fujii@gmail.com> writes:
Probably true. But we cannot create a tablespace for a root-owned directory.
The directory must be owned by the PostgreSQL system user. So ISTM that
you say that creating a tablespace on a mount point itself is a security hole.
Generally, the root user would have to mount the filesystem and then
create a Postgres-owned directory under it, yes. This is a feature not
a bug.
In the case you sketch, there would be nothing to stop the (non root)
postgres user from renaming $PGDATA/mnt to something else and then
inserting his own trojan-horse directories.
Hmm.. can non-root postgres user really rename the root-owned directory
while it's being mounted?
If you have write privilege on the parent directory, you can rename any
filesystem entry.
Moreover, I see no positive *good* reason to do it. There isn't
anyplace under $PGDATA that users should be randomly creating
directories, much less mount points.
When taking a base backup, you don't need to take a backup of tablespaces
separately from that of $PGDATA. You have only to take a backup of $PGDATA.
Doesn't work, and doesn't tell you it didn't work, if the mount point
isn't mounted. I believe "what happens if the secondary filesystem
isn't mounted" is exactly one of the basic reasons for the
mount-points-must-be-owned-by-root rule. Otherwise, applications may
scribble directly on the / drive, which results in serious problems when
the mount eventually comes back. There's an example in our archives
(from Joe Conway if memory serves) about someone destroying their
database that way.
regards, tom lane
On Thu, Jan 20, 2011 at 17:17, Magnus Hagander <magnus@hagander.net> wrote:
On Thu, Jan 20, 2011 at 16:45, Bruce Momjian <bruce@momjian.us> wrote:
Do we envision pg_basebackup as something we will enhance, and if so,
should we consider a generic name?

Well, it's certainly going to be enhanced. I think there are two main
uses for it - backups, and setting up replication slaves. I can't see
it expanding beyond those, really.
I've committed this with the current name, pg_basebackup, before the
bikeshed hits all the colors of the rainbow. If there's too much
uproar, we can always rename it - it's a lot easier now that we have
git :P
Base backups is something we discuss regularly, so it's not a new term.
And I don't see why people would be confused that this is a tool that
you run on the client (which can be the same machine) - after all, we
don't do pg_receive_dump, pg_receive_dumpall, pg_send_restore on those
tools.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
On Sun, Jan 23, 2011 at 8:33 PM, Magnus Hagander <magnus@hagander.net> wrote:
I've committed this with the current name, pg_basebackup
Great!
But, per subsequent commit logs, I should have reviewed more about
portability issues :(
Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center